Column schema of this arXiv metadata dump (name: type, observed length or value range):

id: string, length 9–13
submitter: string, length 4–48
authors: string, length 4–9.62k
title: string, length 4–343
comments: string, length 2–480
journal-ref: string, length 9–309
doi: string, length 12–138
report-no: string, 277 distinct values
categories: string, length 8–87
license: string, 9 distinct values
orig_abstract: string, length 27–3.76k
versions: list, length 1–15
update_date: string, length 10–10
authors_parsed: list, length 1–147
abstract: string, length 24–3.75k
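Records that follow this schema can be sanity-checked programmatically. A minimal sketch in Python, assuming each row has been parsed into a dict with `null` mapped to `None`; the `validate_record` helper is illustrative, not part of any library, and the sample is an abbreviated version of the 1601.02822 row below:

```python
# Sanity-check a record against the 15-column schema listed above.
# validate_record is a hypothetical helper, not an existing API.

REQUIRED_FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "versions", "update_date", "authors_parsed", "abstract",
]

LIST_FIELDS = {"versions", "authors_parsed"}  # per the schema, these are lists

def validate_record(record):
    """Return a list of problems; an empty list means the record is well-formed."""
    problems = []
    for field in REQUIRED_FIELDS:
        if field not in record:
            problems.append(f"missing field: {field}")
        elif record[field] is not None:  # null fields are allowed
            expected = list if field in LIST_FIELDS else str
            if not isinstance(record[field], expected):
                problems.append(f"{field}: expected {expected.__name__}")
    return problems

# Abbreviated version of the 1601.02822 row (nulls become None, long text elided).
sample = {
    "id": "1601.02822",
    "submitter": "Chris Brackley",
    "authors": "Chris A Brackley, Jill M Brown, ...",
    "title": "Predicting the three-dimensional folding of cis-regulatory regions ...",
    "comments": None, "journal-ref": None, "doi": None, "report-no": None,
    "categories": "q-bio.QM physics.bio-ph q-bio.GN",
    "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "orig_abstract": "The three-dimensional organisation of chromosomes ...",
    "versions": [{"created": "Tue, 12 Jan 2016 12:09:09 GMT", "version": "v1"}],
    "update_date": "2016-01-13",
    "authors_parsed": [["Brackley", "Chris A", ""], ["Brown", "Jill M", ""]],
    "abstract": "The three-dimensional organisation of chromosomes ...",
}

print(validate_record(sample))  # prints []
```

The same helper flags partial rows, such as the truncated final record in this dump, by listing their missing fields.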
2206.01338
Yuhan Helena Liu
Yuhan Helena Liu, Stephen Smith, Stefan Mihalas, Eric Shea-Brown, and Uygar S\"umb\"ul
Biologically-plausible backpropagation through arbitrary timespans via local neuromodulators
NeurIPS 2022 Camera Ready
null
null
null
q-bio.NC cs.NE
http://creativecommons.org/licenses/by-nc-sa/4.0/
The spectacular successes of recurrent neural network models where key parameters are adjusted via backpropagation-based gradient descent have inspired much thought as to how biological neuronal networks might solve the corresponding synaptic credit assignment problem. There is so far little agreement, however, as to how biological networks could implement the necessary backpropagation through time, given widely recognized constraints of biological synaptic network signaling architectures. Here, we propose that extra-synaptic diffusion of local neuromodulators such as neuropeptides may afford an effective mode of backpropagation lying within the bounds of biological plausibility. Going beyond existing temporal truncation-based gradient approximations, our approximate gradient-based update rule, ModProp, propagates credit information through arbitrary time steps. ModProp suggests that modulatory signals can act on receiving cells by convolving their eligibility traces via causal, time-invariant and synapse-type-specific filter taps. Our mathematical analysis of ModProp learning, together with simulation results on benchmark temporal tasks, demonstrate the advantage of ModProp over existing biologically-plausible temporal credit assignment rules. These results suggest a potential neuronal mechanism for signaling credit information related to recurrent interactions over a longer time horizon. Finally, we derive an in-silico implementation of ModProp that could serve as a low-complexity and causal alternative to backpropagation through time.
[ { "created": "Thu, 2 Jun 2022 23:38:10 GMT", "version": "v1" }, { "created": "Wed, 12 Oct 2022 14:57:20 GMT", "version": "v2" }, { "created": "Mon, 7 Nov 2022 16:52:24 GMT", "version": "v3" }, { "created": "Sat, 14 Jan 2023 00:58:51 GMT", "version": "v4" } ]
2023-01-18
[ [ "Liu", "Yuhan Helena", "" ], [ "Smith", "Stephen", "" ], [ "Mihalas", "Stefan", "" ], [ "Shea-Brown", "Eric", "" ], [ "Sümbül", "Uygar", "" ] ]
The spectacular successes of recurrent neural network models where key parameters are adjusted via backpropagation-based gradient descent have inspired much thought as to how biological neuronal networks might solve the corresponding synaptic credit assignment problem. There is so far little agreement, however, as to how biological networks could implement the necessary backpropagation through time, given widely recognized constraints of biological synaptic network signaling architectures. Here, we propose that extra-synaptic diffusion of local neuromodulators such as neuropeptides may afford an effective mode of backpropagation lying within the bounds of biological plausibility. Going beyond existing temporal truncation-based gradient approximations, our approximate gradient-based update rule, ModProp, propagates credit information through arbitrary time steps. ModProp suggests that modulatory signals can act on receiving cells by convolving their eligibility traces via causal, time-invariant and synapse-type-specific filter taps. Our mathematical analysis of ModProp learning, together with simulation results on benchmark temporal tasks, demonstrate the advantage of ModProp over existing biologically-plausible temporal credit assignment rules. These results suggest a potential neuronal mechanism for signaling credit information related to recurrent interactions over a longer time horizon. Finally, we derive an in-silico implementation of ModProp that could serve as a low-complexity and causal alternative to backpropagation through time.
2405.06659
ZhengYang Qi
Qi Zhengyang, Liu Zijing, Zhang Jiying, Cao He, Li Yu
ControlMol: Adding Substructure Control To Molecule Diffusion Models
9 pages, 7 figures
null
null
null
q-bio.BM cs.AI cs.LG physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Designing new molecules is an important task in the field of pharmaceuticals. Due to the vast design space of molecules, generating molecules conditioned on a specific sub-structure relevant to a particular function or therapeutic target is a crucial task in computer-aided drug design. In this paper, we present ControlMol, which adds sub-structure control to molecule generation with diffusion models. Unlike previous methods, which view this task as inpainting or conditional generation, we adapt the idea of ControlNet to conditional molecule generation and make adaptive adjustments to a pre-trained diffusion model. We apply our method to both 2D and 3D molecule generation tasks. Conditioned on randomly partitioned sub-structure data, our method outperforms previous methods by generating more valid and diverse molecules. The method is easy to implement and can be quickly applied to a variety of pre-trained molecule generation models.
[ { "created": "Mon, 22 Apr 2024 14:36:19 GMT", "version": "v1" } ]
2024-05-14
[ [ "Zhengyang", "Qi", "" ], [ "Zijing", "Liu", "" ], [ "Jiying", "Zhang", "" ], [ "He", "Cao", "" ], [ "Yu", "Li", "" ] ]
Designing new molecules is an important task in the field of pharmaceuticals. Due to the vast design space of molecules, generating molecules conditioned on a specific sub-structure relevant to a particular function or therapeutic target is a crucial task in computer-aided drug design. In this paper, we present ControlMol, which adds sub-structure control to molecule generation with diffusion models. Unlike previous methods, which view this task as inpainting or conditional generation, we adapt the idea of ControlNet to conditional molecule generation and make adaptive adjustments to a pre-trained diffusion model. We apply our method to both 2D and 3D molecule generation tasks. Conditioned on randomly partitioned sub-structure data, our method outperforms previous methods by generating more valid and diverse molecules. The method is easy to implement and can be quickly applied to a variety of pre-trained molecule generation models.
1601.02822
Chris Brackley
Chris A Brackley, Jill M Brown, Dominic Waithe, Christian Babbs, James Davies, Jim R Hughes, Veronica J Buckle and Davide Marenduzzo
Predicting the three-dimensional folding of cis-regulatory regions in mammalian genomes using bioinformatic data and polymer models
null
null
null
null
q-bio.QM physics.bio-ph q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The three-dimensional organisation of chromosomes can be probed using methods such as Capture-C. However, it is unclear how such population-level data relate to the organisation within a single cell, and the mechanisms leading to the observed interactions are still largely obscure. We present a polymer modelling scheme based on the assumption that chromosome architecture is maintained by protein bridges which form chromatin loops. To test the model we perform FISH experiments and also compare with Capture-C data. Starting merely from the locations of protein binding sites, our model accurately predicts the experimentally observed chromatin interactions, revealing a population of 3D conformations.
[ { "created": "Tue, 12 Jan 2016 12:09:09 GMT", "version": "v1" } ]
2016-01-13
[ [ "Brackley", "Chris A", "" ], [ "Brown", "Jill M", "" ], [ "Waithe", "Dominic", "" ], [ "Babbs", "Christian", "" ], [ "Davies", "James", "" ], [ "Hughes", "Jim R", "" ], [ "Buckle", "Veronica J", "" ], [...
The three-dimensional organisation of chromosomes can be probed using methods such as Capture-C. However, it is unclear how such population-level data relate to the organisation within a single cell, and the mechanisms leading to the observed interactions are still largely obscure. We present a polymer modelling scheme based on the assumption that chromosome architecture is maintained by protein bridges which form chromatin loops. To test the model we perform FISH experiments and also compare with Capture-C data. Starting merely from the locations of protein binding sites, our model accurately predicts the experimentally observed chromatin interactions, revealing a population of 3D conformations.
1305.0062
Zeinab Taghavi
Zeinab Taghavi, Narjes S. Movahedi, Sorin Draghici, Hamidreza Chitsaz
Distilled Single Cell Genome Sequencing and De Novo Assembly for Sparse Microbial Communities
null
null
10.1093/bioinformatics/btt420
null
q-bio.GN cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identification of every single genome present in a microbial sample is an important and challenging task with crucial applications. It is challenging because there are typically millions of cells in a microbial sample, the vast majority of which elude cultivation. The most accurate method to date is exhaustive single cell sequencing using multiple displacement amplification, which is simply intractable for a large number of cells. However, there is hope for breaking this barrier as the number of different cell types with distinct genome sequences is usually much smaller than the number of cells. Here, we present a novel divide and conquer method to sequence and de novo assemble all distinct genomes present in a microbial sample with a sequencing cost and computational complexity proportional to the number of genome types, rather than the number of cells. The method is implemented in a tool called Squeezambler. We evaluated Squeezambler on simulated data. The proposed divide and conquer method successfully reduces the cost of sequencing in comparison with the naive exhaustive approach. Availability: Squeezambler and datasets are available under http://compbio.cs.wayne.edu/software/squeezambler/.
[ { "created": "Wed, 1 May 2013 00:49:29 GMT", "version": "v1" }, { "created": "Wed, 22 May 2013 21:39:04 GMT", "version": "v2" } ]
2014-04-29
[ [ "Taghavi", "Zeinab", "" ], [ "Movahedi", "Narjes S.", "" ], [ "Draghici", "Sorin", "" ], [ "Chitsaz", "Hamidreza", "" ] ]
Identification of every single genome present in a microbial sample is an important and challenging task with crucial applications. It is challenging because there are typically millions of cells in a microbial sample, the vast majority of which elude cultivation. The most accurate method to date is exhaustive single cell sequencing using multiple displacement amplification, which is simply intractable for a large number of cells. However, there is hope for breaking this barrier as the number of different cell types with distinct genome sequences is usually much smaller than the number of cells. Here, we present a novel divide and conquer method to sequence and de novo assemble all distinct genomes present in a microbial sample with a sequencing cost and computational complexity proportional to the number of genome types, rather than the number of cells. The method is implemented in a tool called Squeezambler. We evaluated Squeezambler on simulated data. The proposed divide and conquer method successfully reduces the cost of sequencing in comparison with the naive exhaustive approach. Availability: Squeezambler and datasets are available under http://compbio.cs.wayne.edu/software/squeezambler/.
2101.05748
Francois Treussart
Sandra Claveau (LuMIn, METSY, IGR), Marek Kindermann (IOCB / CAS, UCT Prague), Alexandre Papine, Zamira D\'iaz-Riascos (VHIR, CIBER-BBN), Xavier Delen, Patrick Georges, Roser L\'opez-Alemany (IDIBELL), \`Oscar Tirado Mart\'inez (IDIBELL), Jean-R\'emi Bertrand (METSY, IGR), Ibane Abasolo Olaortua (VHIR, CIBER-BBN), Petr Cigler (IOCB / CAS), Fran\c{c}ois Treussart (LuMIn)
Harnessing subcellular-resolved organ distribution of cationic copolymer-functionalized fluorescent nanodiamonds for optimal delivery of therapeutic siRNA to a xenografted tumor in mice
Nanoscale, Royal Society of Chemistry, 2021
null
10.1039/d1nr00146a
null
q-bio.TO q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diamond nanoparticles (nanodiamonds) can transport active drugs in cultured cells as well as in vivo. However, in the latter case, methods for determining their bioavailability accurately are still lacking. Nanodiamonds can be made fluorescent with a perfectly stable emission and a lifetime ten times longer than that of tissue autofluorescence. Taking advantage of these properties, we present an automated quantification method for fluorescent nanodiamonds (FNDs) in histological sections of mouse organs and tumors after systemic injection. We use a home-made time-delayed fluorescence microscope comprising a custom pulsed laser source synchronized to the master clock of a gated intensified array detector. This setup allows us to obtain ultra-high-resolution images (120 Mpixels) of whole mouse organ sections, with subcellular resolution and single-particle sensitivity. As a proof-of-principle experiment, we quantified the biodistribution and aggregation state of new cationic FNDs able to transport small interfering RNA inhibiting the oncogene responsible for Ewing sarcoma. Image analysis showed a low yield of nanodiamonds in the tumor after intravenous injection. Thus, for the in vivo efficacy assay we injected the nanomedicine into the tumor. We achieved a 28-fold inhibition of the oncogene. This method can readily be applied to other nanoemitters with $\approx$100 ns lifetime.
[ { "created": "Tue, 29 Dec 2020 09:15:16 GMT", "version": "v1" }, { "created": "Tue, 1 Jun 2021 09:57:55 GMT", "version": "v2" } ]
2021-06-02
[ [ "Claveau", "Sandra", "", "LuMIn, METSY, IGR" ], [ "Kindermann", "Marek", "", "IOCB / CAS, UCT\n Prague" ], [ "Papine", "Alexandre", "", "VHIR, CIBER-BBN" ], [ "Díaz-Riascos", "Zamira", "", "VHIR, CIBER-BBN" ], [ "Delen", "Xav...
Diamond nanoparticles (nanodiamonds) can transport active drugs in cultured cells as well as in vivo. However, in the latter case, methods for determining their bioavailability accurately are still lacking. Nanodiamonds can be made fluorescent with a perfectly stable emission and a lifetime ten times longer than that of tissue autofluorescence. Taking advantage of these properties, we present an automated quantification method for fluorescent nanodiamonds (FNDs) in histological sections of mouse organs and tumors after systemic injection. We use a home-made time-delayed fluorescence microscope comprising a custom pulsed laser source synchronized to the master clock of a gated intensified array detector. This setup allows us to obtain ultra-high-resolution images (120 Mpixels) of whole mouse organ sections, with subcellular resolution and single-particle sensitivity. As a proof-of-principle experiment, we quantified the biodistribution and aggregation state of new cationic FNDs able to transport small interfering RNA inhibiting the oncogene responsible for Ewing sarcoma. Image analysis showed a low yield of nanodiamonds in the tumor after intravenous injection. Thus, for the in vivo efficacy assay we injected the nanomedicine into the tumor. We achieved a 28-fold inhibition of the oncogene. This method can readily be applied to other nanoemitters with $\approx$100 ns lifetime.
q-bio/0511040
Denis Boyer
Gabriel Ramos-Fern\'andez, Denis Boyer, Vian P. G\'omez
A complex social structure with fission-fusion properties can emerge from a simple foraging model
6 figures; minor revisions
null
null
null
q-bio.PE cond-mat.dis-nn
null
Precisely how ecological factors influence animal social structure is far from clear. We explore this question using an agent-based model inspired by the fission-fusion society of spider monkeys (Ateles spp.). Our model introduces a realistic, complex foraging environment composed of many resource patches whose sizes vary according to an inverse power-law frequency distribution with exponent $\beta$. Foragers do not interact among themselves and start from random initial locations. They have either complete or partial knowledge of the environment and maximize the ratio between the size of the next visited patch and the distance traveled to it, ignoring previously visited patches. At intermediate values of $\beta$, when large patches are neither too scarce nor too abundant, foragers form groups (coincide at the same patch) with a size frequency distribution similar to that of the spider monkey's subgroups. Fission-fusion events create a network of associations that contains weak bonds among foragers that meet only rarely and strong bonds among those that repeat associations more frequently than would be expected by chance. The latter form sub-networks with the highest number of bonds and a high clustering coefficient at intermediate values of $\beta$. The weak bonds enable the whole social network to percolate. Some of our results are similar to those found in long-term field studies of spider monkeys and other fission-fusion species. We conclude that hypotheses about the ecological causes of fission-fusion and the origin of complex social structures should consider the heterogeneity and complexity of the environment in which social animals live.
[ { "created": "Thu, 24 Nov 2005 22:15:28 GMT", "version": "v1" }, { "created": "Tue, 28 Feb 2006 16:46:33 GMT", "version": "v2" }, { "created": "Thu, 4 May 2006 15:03:58 GMT", "version": "v3" } ]
2007-05-23
[ [ "Ramos-Fernández", "Gabriel", "" ], [ "Boyer", "Denis", "" ], [ "Gómez", "Vian P.", "" ] ]
Precisely how ecological factors influence animal social structure is far from clear. We explore this question using an agent-based model inspired by the fission-fusion society of spider monkeys (Ateles spp.). Our model introduces a realistic, complex foraging environment composed of many resource patches whose sizes vary according to an inverse power-law frequency distribution with exponent $\beta$. Foragers do not interact among themselves and start from random initial locations. They have either complete or partial knowledge of the environment and maximize the ratio between the size of the next visited patch and the distance traveled to it, ignoring previously visited patches. At intermediate values of $\beta$, when large patches are neither too scarce nor too abundant, foragers form groups (coincide at the same patch) with a size frequency distribution similar to that of the spider monkey's subgroups. Fission-fusion events create a network of associations that contains weak bonds among foragers that meet only rarely and strong bonds among those that repeat associations more frequently than would be expected by chance. The latter form sub-networks with the highest number of bonds and a high clustering coefficient at intermediate values of $\beta$. The weak bonds enable the whole social network to percolate. Some of our results are similar to those found in long-term field studies of spider monkeys and other fission-fusion species. We conclude that hypotheses about the ecological causes of fission-fusion and the origin of complex social structures should consider the heterogeneity and complexity of the environment in which social animals live.
1710.10326
Joseph Rusinko
Jacqueline Kane, Joseph Rusinko, Katherine Thompson
Phylogenetic Derivative: A Tool for Assessing Local Tree Reconstruction in the Presence of Recombination
null
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, much attention has been given to understanding recombination events along a chromosome in a variety of fields. For instance, many population genetics problems are limited by the inaccuracy of inferred evolutionary histories of chromosomes sampled randomly from a population. This evolutionary history differs among genomic locations as an artifact of recombination events along a chromosome. Thus, much recent attention has been focused on identifying these recombination points. However, many proposed methods either make simplifying, but unrealistic, assumptions about recombination along a chromosome, or are unable to scale to the large genome-wide data that have become commonplace in statistical genetics. Here, we introduce a \emph{phylogenetic derivative} to describe the relatedness of neighboring trees along a chromosome. This phylogenetic derivative is a computationally efficient, flexible metric that can also be used to assess the prevalence of recombination across a chromosome. The proposed methods are tested and perform well in analyzing both simulated data and a real mouse data set.
[ { "created": "Fri, 27 Oct 2017 20:45:18 GMT", "version": "v1" } ]
2017-10-31
[ [ "Kane", "Jacqueline", "" ], [ "Rusinko", "Joseph", "" ], [ "Thompson", "Katherine", "" ] ]
Recently, much attention has been given to understanding recombination events along a chromosome in a variety of fields. For instance, many population genetics problems are limited by the inaccuracy of inferred evolutionary histories of chromosomes sampled randomly from a population. This evolutionary history differs among genomic locations as an artifact of recombination events along a chromosome. Thus, much recent attention has been focused on identifying these recombination points. However, many proposed methods either make simplifying, but unrealistic, assumptions about recombination along a chromosome, or are unable to scale to the large genome-wide data that have become commonplace in statistical genetics. Here, we introduce a \emph{phylogenetic derivative} to describe the relatedness of neighboring trees along a chromosome. This phylogenetic derivative is a computationally efficient, flexible metric that can also be used to assess the prevalence of recombination across a chromosome. The proposed methods are tested and perform well in analyzing both simulated data and a real mouse data set.
2010.10402
Kerui Peng
Kerui Peng, Yana Safonova, Mikhail Shugay, Alice Popejoy, Oscar Rodriguez, Felix Breden, Petter Brodin, Amanda M. Burkhardt, Carlos Bustamante, Van-Mai Cao-Lormeau, Martin M. Corcoran, Darragh Duffy, Macarena Fuentes Guajardo, Ricardo Fujita, Victor Greiff, Vanessa D. Jonsson, Xiao Liu, Lluis Quintana-Murci, Maura Rossetti, Jianming Xie, Gur Yaari, Wei Zhang, Malak S. Abedalthagafi, Khalid O. Adekoya, Rahaman A. Ahmed, Wei-Chiao Chang, Clive Gray, Yusuke Nakamura, William D. Lees, Purvesh Khatri, Houda Alachkar, Cathrine Scheepers, Corey T. Watson, Gunilla B. Karlsson Hedestam, Serghei Mangul
Diversity in immunogenomics: the value and the challenge
22 pages, 1 table
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advent of high-throughput sequencing technologies, the fields of immunogenomics and adaptive immune receptor repertoire research are facing both opportunities and challenges. Adaptive immune receptor repertoire sequencing (AIRR-seq) has become an increasingly important tool to characterize T and B cell responses in settings of interest. However, the majority of AIRR-seq studies conducted so far were performed in individuals of European ancestry, restricting the ability to identify variation in human adaptive immune responses across populations and limiting their applications. As AIRR-seq studies depend on the ability to assign VDJ sequence reads to the correct germline gene segments, efforts to characterize the genomic loci that encode adaptive immune receptor genes in different populations are urgently needed. The availability of comprehensive germline gene databases and further applications of AIRR-seq studies to individuals of non-European ancestry will substantially enhance our understanding of human adaptive immune responses, promote the development of effective diagnostics and treatments, and eventually advance precision medicine.
[ { "created": "Tue, 20 Oct 2020 16:09:20 GMT", "version": "v1" }, { "created": "Wed, 21 Oct 2020 02:48:08 GMT", "version": "v2" }, { "created": "Thu, 22 Oct 2020 02:32:40 GMT", "version": "v3" }, { "created": "Mon, 1 Mar 2021 22:34:17 GMT", "version": "v4" } ]
2021-03-03
[ [ "Peng", "Kerui", "" ], [ "Safonova", "Yana", "" ], [ "Shugay", "Mikhail", "" ], [ "Popejoy", "Alice", "" ], [ "Rodriguez", "Oscar", "" ], [ "Breden", "Felix", "" ], [ "Brodin", "Petter", "" ], [ "Bu...
With the advent of high-throughput sequencing technologies, the fields of immunogenomics and adaptive immune receptor repertoire research are facing both opportunities and challenges. Adaptive immune receptor repertoire sequencing (AIRR-seq) has become an increasingly important tool to characterize T and B cell responses in settings of interest. However, the majority of AIRR-seq studies conducted so far were performed in individuals of European ancestry, restricting the ability to identify variation in human adaptive immune responses across populations and limiting their applications. As AIRR-seq studies depend on the ability to assign VDJ sequence reads to the correct germline gene segments, efforts to characterize the genomic loci that encode adaptive immune receptor genes in different populations are urgently needed. The availability of comprehensive germline gene databases and further applications of AIRR-seq studies to individuals of non-European ancestry will substantially enhance our understanding of human adaptive immune responses, promote the development of effective diagnostics and treatments, and eventually advance precision medicine.
2307.06234
Joshua Macdonald
Joshua C. Macdonald, Hayriye Gulbudak
Forward hysteresis and Hopf bifurcation in an NPZD model with application to harmful algal blooms
40 pages, 10 primary figures, 5 supplementary figures, 1 supplementary table
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Nutrient-Phytoplankton-Zooplankton-Detritus (NPZD) models, describing the interactions between phytoplankton, zooplankton, and their ecosystem, are used to predict their ecological and evolutionary population dynamics. These organisms form the base two trophic levels of aquatic ecosystems. Hence, understanding their population dynamics and how disturbances can affect these systems is crucial. Here, starting from a base NPZ modeling framework, we incorporate the harmful effects of phytoplankton overpopulation on zooplankton - representing a crucial next step in harmful algal bloom (HAB) modeling - and split the nutrient compartment to formulate an NPZD model. We then mathematically analyze the NPZ system upon which this new model is based, including local and global stability of equilibria, the Hopf bifurcation condition, and forward hysteresis, where bi-stability occurs with multiple attractors. Finally, we extend the threshold analysis to the NPZD model, which displays both forward hysteresis with bi-stability and Hopf bifurcation under different parameter regimes, and examine ecological implications after incorporating seasonality and ecological disturbances. Ultimately, we quantify ecosystem health in terms of the relative values of the robust persistence thresholds for phytoplankton and zooplankton and find that (i) ecosystems sufficiently favoring phytoplankton, as quantified by the relative values of the plankton persistence numbers, are vulnerable to both HABs and (local) zooplankton extinction, and (ii) even healthy ecosystems are extremely sensitive to nutrient depletion over relatively short time scales.
[ { "created": "Wed, 12 Jul 2023 15:26:28 GMT", "version": "v1" } ]
2023-07-13
[ [ "Macdonald", "Joshua C.", "" ], [ "Gulbudak", "Hayriye", "" ] ]
Nutrient-Phytoplankton-Zooplankton-Detritus (NPZD) models, describing the interactions between phytoplankton, zooplankton, and their ecosystem, are used to predict their ecological and evolutionary population dynamics. These organisms form the base two trophic levels of aquatic ecosystems. Hence, understanding their population dynamics and how disturbances can affect these systems is crucial. Here, starting from a base NPZ modeling framework, we incorporate the harmful effects of phytoplankton overpopulation on zooplankton - representing a crucial next step in harmful algal bloom (HAB) modeling - and split the nutrient compartment to formulate an NPZD model. We then mathematically analyze the NPZ system upon which this new model is based, including local and global stability of equilibria, the Hopf bifurcation condition, and forward hysteresis, where bi-stability occurs with multiple attractors. Finally, we extend the threshold analysis to the NPZD model, which displays both forward hysteresis with bi-stability and Hopf bifurcation under different parameter regimes, and examine ecological implications after incorporating seasonality and ecological disturbances. Ultimately, we quantify ecosystem health in terms of the relative values of the robust persistence thresholds for phytoplankton and zooplankton and find that (i) ecosystems sufficiently favoring phytoplankton, as quantified by the relative values of the plankton persistence numbers, are vulnerable to both HABs and (local) zooplankton extinction, and (ii) even healthy ecosystems are extremely sensitive to nutrient depletion over relatively short time scales.
1308.6004
John Maloney
John M. Maloney, Eric Lehnhardt, Alexandra F. Long, and Krystyn J. Van Vliet
Mechanical fluidity of fully suspended biological cells
null
null
10.1016/j.bpj.2013.08.040
null
q-bio.CB cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mechanical characteristics of single biological cells are used to identify and possibly leverage interesting differences among cells or cell populations. Fluidity---hysteresivity normalized to the extremes of an elastic solid or a viscous liquid---can be extracted from, and compared among, multiple rheological measurements of cells: creep compliance vs. time, complex modulus vs. frequency, and phase lag vs. frequency. With multiple strategies available for acquisition of this nondimensional property, fluidity may serve as a useful and robust parameter for distinguishing cell populations, and for understanding the physical origins of deformability in soft matter. Here, for three disparate eukaryotic cell types deformed in the suspended state via optical stretching, we examine the dependence of fluidity on chemical and environmental influences around a time scale of 1 s. We find that fluidity estimates are consistent in the time and the frequency domains under a structural damping (power-law or fractional derivative) model, but not under an equivalent-complexity lumped-component (spring-dashpot) model; the latter predicts spurious time constants. Although fluidity is suppressed by chemical crosslinking, we find that adenosine triphosphate (ATP) depletion in the cell does not measurably alter the parameter, and thus conclude that active ATP-driven events are not a crucial enabler of fluidity during linear viscoelastic deformation of a suspended cell. Finally, by using the capacity of optical stretching to produce near-instantaneous increases in cell temperature, we establish that fluidity increases with temperature---now measured in a fully suspended, sortable cell without the complicating factor of cell-substratum adhesion.
[ { "created": "Tue, 27 Aug 2013 22:15:04 GMT", "version": "v1" } ]
2013-10-22
[ [ "Maloney", "John M.", "" ], [ "Lehnhardt", "Eric", "" ], [ "Long", "Alexandra F.", "" ], [ "Van Vliet", "Krystyn J.", "" ] ]
Mechanical characteristics of single biological cells are used to identify and possibly leverage interesting differences among cells or cell populations. Fluidity---hysteresivity normalized to the extremes of an elastic solid or a viscous liquid---can be extracted from, and compared among, multiple rheological measurements of cells: creep compliance vs. time, complex modulus vs. frequency, and phase lag vs. frequency. With multiple strategies available for acquisition of this nondimensional property, fluidity may serve as a useful and robust parameter for distinguishing cell populations, and for understanding the physical origins of deformability in soft matter. Here, for three disparate eukaryotic cell types deformed in the suspended state via optical stretching, we examine the dependence of fluidity on chemical and environmental influences around a time scale of 1 s. We find that fluidity estimates are consistent in the time and the frequency domains under a structural damping (power-law or fractional derivative) model, but not under an equivalent-complexity lumped-component (spring-dashpot) model; the latter predicts spurious time constants. Although fluidity is suppressed by chemical crosslinking, we find that adenosine triphosphate (ATP) depletion in the cell does not measurably alter the parameter, and thus conclude that active ATP-driven events are not a crucial enabler of fluidity during linear viscoelastic deformation of a suspended cell. Finally, by using the capacity of optical stretching to produce near-instantaneous increases in cell temperature, we establish that fluidity increases with temperature---now measured in a fully suspended, sortable cell without the complicating factor of cell-substratum adhesion.
1911.02345
Benoit Viollet
Benoit Viollet (EMD)
The Energy Sensor AMPK: Adaptations to Exercise, Nutritional and Hormonal Signals
null
Spiegelman B. Hormones, Metabolism and the Benefits of Exercise, Hormones, Metabolism and the Benefits of Exercise, Springer, pp.13-24, 2018, Research and Perspectives in Endocrine Interactions, 978-3-319-72789-9
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To sustain metabolism, intracellular ATP concentration must be regulated within an appropriate range. This coordination is achieved through the function of the AMP-activated protein kinase (AMPK), a cellular "fuel gauge" that is expressed in essentially all eukaryotic cells as heterotrimeric complexes containing catalytic $\alpha$ subunits and regulatory $\beta$ and $\gamma$ subunits. When cellular energy status has been compromised, AMPK is activated by increases in AMP:ATP or ADP:ATP ratios and acts to restore energy homeostasis by stimulating energy production via catabolic pathways while decreasing non-essential energy-consuming pathways. Although the primary function of AMPK is to regulate energy homeostasis at a cell-autonomous level, in multicellular organisms, the AMPK system has evolved to interact with hormones to regulate energy intake and expenditure at the whole body level. Thus, AMPK functions as a signaling hub, coordinating anabolic and catabolic pathways to balance nutrient supply with energy demand at both the cellular and whole-body levels. AMPK is activated by various metabolic stresses such as ischemia or hypoxia or glucose deprivation and has both acute and long-term effects on metabolic pathways and key cellular functions. In addition, AMPK appears to be a major sensor of energy demand in exercising muscle and acts both as a multitask gatekeeper and an energy regulator in skeletal muscle. Acute activation of AMPK has been shown to promote glucose transport and fatty acid oxidation while suppressing glycogen synthase activity and protein synthesis. Chronic activation of AMPK induces a shift in muscle fiber type composition, reduces markers of muscle degeneration and enhances muscle oxidative capacity potentially by stimulating mitochondrial biogenesis. Furthermore, recent evidence demonstrates that AMPK may not only regulate metabolism during exercise but also in the recovery phase. 
AMPK acts as a molecular transducer between exercise and insulin signaling and is necessary for the ability of prior contraction/exercise to increase muscle insulin sensitivity. Based on these observations, drugs that activate AMPK might be expected to be useful in the treatment of metabolic disorders and insulin resistance in various conditions.
[ { "created": "Wed, 6 Nov 2019 12:44:25 GMT", "version": "v1" } ]
2019-11-07
[ [ "Viollet", "Benoit", "", "EMD" ] ]
q-bio/0507020
Georgy Karev
Georgy P. Karev, Yuri I. Wolf, Eugene V. Koonin
Simple stochastic birth and death models of genome evolution: Was there enough time for us to evolve?
25 pages, 9 figures, 4 Tables
Bioinformatics 2003, 19(15):1889-1900
null
null
q-bio.GN q-bio.PE
null
We show that simple stochastic models of genome evolution lead to power law asymptotics of protein domain family size distribution. These models, called Birth, Death and Innovation Models (BDIM), represent a special class of balanced birth-and-death processes, in which domain duplication and deletion rates are asymptotically equal up to the second order. The simplest, linear BDIM shows an excellent fit to the observed distributions of domain family size in diverse prokaryotic and eukaryotic genomes. However, the stochastic version of the linear BDIM explored here predicts that the actual size of large paralogous families is reached on an unrealistically long timescale. We show that introduction of non-linearity, which might be interpreted as interaction of a particular order between individual family members, allows the model to achieve genome evolution rates that are much better compatible with the current estimates of the rates of individual duplication/loss events.
[ { "created": "Wed, 13 Jul 2005 18:51:32 GMT", "version": "v1" } ]
2007-05-23
[ [ "Karev", "Georgy P.", "" ], [ "Wolf", "Yuri I.", "" ], [ "Koonin", "Eugene V.", "" ] ]
0705.0374
Razvan Radulescu M.D.
Razvan T. Radulescu, Angelika Jahn, Daniela Hellmann and Gregor Weirich
Immunohistochemical pitfalls in the demonstration of insulin-degrading enzyme in normal and neoplastic human tissues
17 pages, 6 figures
null
null
null
q-bio.TO q-bio.QM
null
Previously, we have identified the cytoplasmic zinc metalloprotease insulin-degrading enzyme (IDE) in human tissues by an immunohistochemical method involving no antigen retrieval (AR) by pressure cooking, to avoid artifacts from endogenous biotin exposure, and a detection kit based on the labeled streptavidin biotin (LSAB) method. In that protocol, we also employed 3% hydrogen peroxide (H2O2) for the inhibition of endogenous peroxidase activity and incubated the tissue sections with the biotinylated secondary antibody at room temperature (RT). We now add the immunohistochemical details that had led us to this optimized procedure, as they also bear a more general relevance when demonstrating intracellular tissue antigens. Our most important result is that endogenous peroxidase inhibition by 0.3% H2O2 coincided with an apparently positive IDE staining in an investigated breast cancer specimen, whereas combining a block by 3% H2O2 with an incubation of the biotinylated secondary antibody at RT, yet not at 37 degrees Celsius, revealed this specimen as almost entirely IDE-negative. Our present data caution against three different immunohistochemical pitfalls that might cause falsely positive results and artifacts when using an LSAB- and peroxidase-based detection method: pressure cooking for AR, insufficient quenching of endogenous peroxidases, and heating of tissue sections while incubating with biotinylated secondary antibodies.
[ { "created": "Wed, 2 May 2007 20:56:59 GMT", "version": "v1" } ]
2007-05-23
[ [ "Radulescu", "Razvan T.", "" ], [ "Jahn", "Angelika", "" ], [ "Hellmann", "Daniela", "" ], [ "Weirich", "Gregor", "" ] ]
1203.2313
Webb Sprague
W. Webb Sprague
Automatic parametrization of age/ sex Leslie matrices for human populations
null
null
null
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a technique for parameterizing Leslie transition matrices from simple age and sex population counts, using an implementation of "Wood's Method" [wood]; these matrices can forecast population by age and sex (the "cohort component" method) using simple matrix multiplication and a starting population. Our approach improves on previous methods for creating Leslie matrices in two respects: it eliminates the need to calculate input demographic rates from "raw" data, and our new format for the Leslie matrix more elegantly reveals the population's demographic components of change (fertility, mortality, and migration). The paper is organized around three main themes. First, we describe the underlying algorithm, "Wood's Method," which uses quadratic optimization to fit a transition matrix to age and sex population counts. Second, we use demographic theory to create constraint sets that make the algorithm useable for human populations. Finally, we use the method to forecast 3,120 US counties and show that it holds promise for automating cohort-component forecasts. This paper describes the first published successful application of Wood's method to human populations; it also points to more general promise of constrained optimization techniques in demographic modeling.
[ { "created": "Sun, 11 Mar 2012 04:59:36 GMT", "version": "v1" }, { "created": "Thu, 15 Mar 2012 04:20:28 GMT", "version": "v2" }, { "created": "Thu, 29 Mar 2012 03:15:00 GMT", "version": "v3" }, { "created": "Sun, 22 Apr 2012 05:39:17 GMT", "version": "v4" } ]
2012-04-24
[ [ "Sprague", "W. Webb", "" ] ]
0707.3671
Natalia Kudryavtseva
N. N. Kudryavtseva, D. F. Avgustinovich, N. P. Bondar, M. V. Tenditnik, I. L. Kovalenko, L. A. Koryakina
New method for the study of psychotropic drug effects under simulated clinical conditions
15 pages, 9 figures, 5 tables
null
null
null
q-bio.OT q-bio.QM
null
The sensory contact model allows forming different psychopathological states (anxious depression, catalepsy, social withdrawal, pathological aggression, hypersensitivity, cognition disturbances, anhedonia, alcoholism etc.) produced by repeated agonistic interactions in male mice and investigating the therapeutic and preventive properties of any drug as well as its efficiency under simulated clinical conditions. This approach can be useful for a better understanding of the drugs' action in different stages of disease development in individuals. It is suggested that this pharmacological approach may be applied for the screening of different novel psychotropic drugs.
[ { "created": "Wed, 25 Jul 2007 05:56:18 GMT", "version": "v1" }, { "created": "Wed, 21 May 2008 10:03:51 GMT", "version": "v2" } ]
2008-05-21
[ [ "Kudryavtseva", "N. N.", "" ], [ "Avgustinovich", "D. F.", "" ], [ "Bondar", "N. P.", "" ], [ "Tenditnik", "M. V.", "" ], [ "Kovalenko", "I. L.", "" ], [ "Koryakina", "L. A.", "" ] ]
2111.13073
Bryan M. Li
Bryan M. Li, Theoklitos Amvrosiadis, Nathalie Rochefort, Arno Onken
Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks
null
null
null
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding how activity in neural circuits is reshaped following task learning could reveal fundamental mechanisms of learning. Thanks to the recent advances in neural imaging technologies, high-quality recordings can be obtained from hundreds of neurons over multiple days or even weeks. However, the complexity and dimensionality of population responses pose significant challenges for analysis. Existing methods of studying neuronal adaptation and learning often impose strong assumptions on the data or model, resulting in biased descriptions that do not generalize. In this work, we use a variant of deep generative models called CycleGAN to learn the unknown mapping between pre- and post-learning neural activities recorded $\textit{in vivo}$. We develop an end-to-end pipeline to preprocess, train and evaluate calcium fluorescence signals, and a procedure to interpret the resulting deep learning models. To assess the validity of our method, we first test our framework on a synthetic dataset with known ground-truth transformation. Subsequently, we apply our method to neural activities recorded from the primary visual cortex of behaving mice, where the mice transition from novice to expert-level performance in a visual-based virtual reality experiment. We evaluate model performance on generated calcium signals and their inferred spike trains. To maximize performance, we derive a novel approach to pre-sort neurons such that convolutional-based networks can take advantage of the spatial information that exists in neural activities. In addition, we incorporate visual explanation methods to improve the interpretability of our work and gain insights into the learning process as manifested in the cellular activities. Together, our results demonstrate that analyzing neuronal learning processes with data-driven deep unsupervised methods holds the potential to unravel changes in an unbiased way.
[ { "created": "Thu, 25 Nov 2021 13:24:19 GMT", "version": "v1" } ]
2021-11-29
[ [ "Li", "Bryan M.", "" ], [ "Amvrosiadis", "Theoklitos", "" ], [ "Rochefort", "Nathalie", "" ], [ "Onken", "Arno", "" ] ]
0710.3988
Dietrich Stauffer
D. Stauffer and S. Cebrat
Sexual reproduction from the male (men) point of view
20 pages with many figures, draft for Windwer Summer School proceedings
null
null
null
q-bio.PE
null
To counterbalance the views presented here by Suzana Moss de Oliveira, we explain here the truth: How men are oppressed by Mother Nature, who may have made an error inventing us, and by living women, who could get rid of most of us. Why do women live longer than us? Why is the Y chromosome for men so small? What are the dangers of marital fidelity? In an appendix we mention the demographic challenges of the future with many old and few young people.
[ { "created": "Mon, 22 Oct 2007 07:56:28 GMT", "version": "v1" } ]
2007-10-23
[ [ "Stauffer", "D.", "" ], [ "Cebrat", "S.", "" ] ]
2405.08735
Tyler Meadows
Stacey R. Smith?, Tyler Meadows and Gail S.K. Wolkowicz
Competition in the nutrient-driven self-cycling fermentation process
17 pages, 2 figures
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
Self-cycling fermentation is an automated process used for culturing microorganisms. We consider a model of $n$ distinct species competing for a single non-reproducing nutrient in a self-cycling fermentor in which the nutrient level is used as the decanting condition. The model is formulated in terms of impulsive ordinary differential equations. We prove that two species are able to coexist in the fermentor under certain conditions. We also provide numerical simulations that suggest coexistence of three species is possible and that competitor-mediated coexistence can occur in this case. These results are in contrast to the chemostat, the continuous analogue, where multiple species cannot coexist on a single non-reproducing nutrient.
[ { "created": "Tue, 14 May 2024 16:21:58 GMT", "version": "v1" } ]
2024-05-15
[ [ "Smith?", "Stacey R.", "" ], [ "Meadows", "Tyler", "" ], [ "Wolkowicz", "Gail S. K.", "" ] ]
1801.07244
Marius Nann
M. Nann, L. G. Cohen, L. Deecke, S. R. Soekadar
To jump or not to jump: The Bereitschaftspotential required to jump into 192-meter abyss
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Self-initiated voluntary acts, such as pressing a button, are preceded by a negative electrical brain potential, the Bereitschaftspotential (BP), that can be recorded over the human scalp using electroencephalography (EEG). Up to now, the BP required to initiate voluntary acts has only been recorded under well-controlled laboratory conditions. It is thus not known if this form of brain activity also underlies motor initiation in possible life-threatening decision making, such as jumping into a 192-meter abyss, an act requiring extraordinary willpower. Here, we report BP before self-initiated 192-meter extreme bungee jumping across two semi-professional cliff divers (both male, mean age 19.3 years). We found that the spatiotemporal dynamics of the BP is comparable to that recorded under laboratory conditions. These results, possible through recent advancements in wireless and portable EEG technology, document for the first time pre-movement brain activity preceding possible life-threatening decision making.
[ { "created": "Mon, 22 Jan 2018 18:59:37 GMT", "version": "v1" } ]
2018-01-23
[ [ "Nann", "M.", "" ], [ "Cohen", "L. G.", "" ], [ "Deecke", "L.", "" ], [ "Soekadar", "S. R.", "" ] ]
2203.11266
Tasneem Jivanjee
Tasneem Jivanjee, Samira Ibrahim, Sarah K. Nyquist, G. James Gatter, Joshua D. Bromley, Swati Jaiswal, Bonnie Berger, Samuel M. Behar, J. Christopher Love, Alex K. Shalek
Enriching and Characterizing T-Cell Repertoires from 3' Barcoded Single-Cell Whole Transcriptome Amplification Products
57 pages excluding supplementary figures, 7 figures, 8 tables, 5 supplementary figures
null
null
null
q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Antigen-specific T cells play an essential role in immunoregulation and diseases such as cancer. Characterizing the T cell receptor (TCR) sequences that encode T cell specificity is critical for elucidating the antigenic determinants of immunological diseases and designing therapeutic remedies. However, methods of obtaining single-cell TCR sequencing data are labor and cost intensive, requiring cell sorting and full-length single-cell RNA-sequencing (scRNA-seq). New high-throughput 3' cell-barcoding scRNA-seq methods can simplify and scale this process; however, they do not routinely capture TCR sequences during library preparation and sequencing. While 5' cell-barcoding scRNA-seq methods can be used to examine the TCR repertoire at single-cell resolution, they require specialized reagents that cannot be applied to samples previously processed using 3' cell-barcoding methods. Here, we outline a method for sequencing TCR$\alpha$ and TCR$\beta$ transcripts from samples already processed using 3' cell-barcoding scRNA-seq platforms, ensuring TCR recovery at a single-cell resolution. In short, a fraction of the 3' barcoded whole transcriptome amplification (WTA) product typically used to generate a massively parallel 3' scRNA-seq library is enriched for TCR transcripts using biotinylated probes, and further amplified using the same universal primer sequence from WTA. Primer extension using TCR V-region primers and targeted PCR amplification results in a 3' barcoded single-cell CDR3-enriched library that can be sequenced with custom sequencing primers. Coupled with 3' scRNA-seq of the same WTA, this method enables simultaneous analysis of single-cell transcriptomes and TCR sequences, which can help interpret inherent heterogeneity among antigen-specific T cells and salient disease biology. This method can be adapted to enrich other transcripts of interest from 3' and 5' barcoded WTA libraries.
[ { "created": "Mon, 21 Mar 2022 18:49:30 GMT", "version": "v1" } ]
2022-03-23
[ [ "Jivanjee", "Tasneem", "" ], [ "Ibrahim", "Samira", "" ], [ "Nyquist", "Sarah K.", "" ], [ "Gatter", "G. James", "" ], [ "Bromley", "Joshua D.", "" ], [ "Jaiswal", "Swati", "" ], [ "Berger", "Bonnie", "" ...
2206.09693
Jiayu Shang
Jiayu Shang and Xubo Tang and Yanni Sun
PhaTYP: Predicting the lifestyle for bacteriophages using BERT
16 pages, 11 figures
Briefings in Bioinformatics, November 2022
10.1093/bib/bbac487
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-nd/4.0/
Bacteriophages (or phages), which infect bacteria, have two distinct lifestyles: virulent and temperate. Predicting the lifestyle of phages helps decipher their interactions with their bacterial hosts, aiding phages' applications in fields such as phage therapy. Because experimental methods for annotating the lifestyle of phages cannot keep pace with the fast accumulation of sequenced phages, computational methods for predicting phages' lifestyles have become an attractive alternative. Despite some promising results, computational lifestyle prediction remains difficult because of the limited known annotations and the sheer amount of sequenced phage contigs assembled from metagenomic data. In particular, most of the existing tools cannot precisely predict phages' lifestyles for short contigs. In this work, we develop PhaTYP (Phage TYPe prediction tool) to improve the accuracy of lifestyle prediction on short contigs. We design two different training tasks, self-supervised and fine-tuning tasks, to overcome lifestyle prediction difficulties. We rigorously tested and compared PhaTYP with four state-of-the-art methods: DeePhage, PHACTS, PhagePred, and BACPHLIP. The experimental results show that PhaTYP outperforms all these methods and achieves more stable performance on short contigs. In addition, we demonstrated the utility of PhaTYP for analyzing the phage lifestyle on human neonates' gut data. This application shows that PhaTYP is a useful means for studying phages in metagenomic data and helps extend our understanding of microbial communities.
[ { "created": "Mon, 20 Jun 2022 10:24:45 GMT", "version": "v1" } ]
2023-01-02
[ [ "Shang", "Jiayu", "" ], [ "Tang", "Xubo", "" ], [ "Sun", "Yanni", "" ] ]
Bacteriophages (or phages), which infect bacteria, have two distinct lifestyles: virulent and temperate. Predicting the lifestyle of phages helps decipher their interactions with their bacterial hosts, aiding phages' applications in fields such as phage therapy. Because experimental methods for annotating the lifestyle of phages cannot keep pace with the fast accumulation of sequenced phages, computational methods for predicting phages' lifestyles have become an attractive alternative. Despite some promising results, computational lifestyle prediction remains difficult because of the limited known annotations and the sheer amount of sequenced phage contigs assembled from metagenomic data. In particular, most of the existing tools cannot precisely predict phages' lifestyles for short contigs. In this work, we develop PhaTYP (Phage TYPe prediction tool) to improve the accuracy of lifestyle prediction on short contigs. We design two different training tasks, a self-supervised task and a fine-tuning task, to overcome the difficulties of lifestyle prediction. We rigorously tested and compared PhaTYP with four state-of-the-art methods: DeePhage, PHACTS, PhagePred, and BACPHLIP. The experimental results show that PhaTYP outperforms all these methods and achieves more stable performance on short contigs. In addition, we demonstrate the utility of PhaTYP for analyzing phage lifestyles in human neonates' gut data. This application shows that PhaTYP is a useful means of studying phages in metagenomic data and helps extend our understanding of microbial communities.
1106.1620
Michal Komorowski
Michal Komorowski, Jacek Miekisz, Michael P.H. Stumpf
Decomposing Noise in Biochemical Signalling Systems Highlights the Role of Protein Degradation
null
null
null
null
q-bio.QM q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The phenomena of stochasticity in biochemical processes have intrigued life scientists for the past few decades. We now know that living cells take advantage of stochasticity in some cases and counteract stochastic effects in others. The source of intrinsic stochasticity in biomolecular systems is the random timing of individual reactions, which cumulatively drives the variability in the outputs of such systems. Despite the acknowledged relevance of stochasticity in the functioning of living cells, no rigorous method has been proposed to precisely identify the sources of variability. In this paper we propose a novel methodology that allows us to calculate the contributions of individual reactions to the variability of a system's output. We demonstrate that some reactions have dramatically different effects on noise than others. Surprisingly, in the class of open conversion systems that serve as an approximate model of signal transduction, the degradation of an output contributes half of the total noise. We also demonstrate the importance of degradation in other relevant systems and propose a degradation feedback control mechanism capable of effective noise suppression. Application of our method to some well-studied biochemical systems, such as gene expression, Michaelis-Menten enzyme kinetics, and the p53 system, indicates that our methodology reveals unprecedented insight into the origins of variability in biochemical systems. For many systems an analytical decomposition is not available; the method has therefore been implemented as a Matlab package and is available from the authors upon request.
[ { "created": "Wed, 8 Jun 2011 18:53:38 GMT", "version": "v1" } ]
2011-06-09
[ [ "Komorowski", "Michal", "" ], [ "Miekisz", "Jacek", "" ], [ "Stumpf", "Michael P. H.", "" ] ]
The phenomena of stochasticity in biochemical processes have intrigued life scientists for the past few decades. We now know that living cells take advantage of stochasticity in some cases and counteract stochastic effects in others. The source of intrinsic stochasticity in biomolecular systems is the random timing of individual reactions, which cumulatively drives the variability in the outputs of such systems. Despite the acknowledged relevance of stochasticity in the functioning of living cells, no rigorous method has been proposed to precisely identify the sources of variability. In this paper we propose a novel methodology that allows us to calculate the contributions of individual reactions to the variability of a system's output. We demonstrate that some reactions have dramatically different effects on noise than others. Surprisingly, in the class of open conversion systems that serve as an approximate model of signal transduction, the degradation of an output contributes half of the total noise. We also demonstrate the importance of degradation in other relevant systems and propose a degradation feedback control mechanism capable of effective noise suppression. Application of our method to some well-studied biochemical systems, such as gene expression, Michaelis-Menten enzyme kinetics, and the p53 system, indicates that our methodology reveals unprecedented insight into the origins of variability in biochemical systems. For many systems an analytical decomposition is not available; the method has therefore been implemented as a Matlab package and is available from the authors upon request.
1207.1615
Tim Rogers
Tim Rogers, Alan J. McKane and Axel G. Rossberg
Spontaneous genetic clustering in populations of competing organisms
9 pages, 3 figures, 2 appendices
Physical Biology, 9, 066002 (2012)
10.1088/1478-3975/9/6/066002
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce and analyse an individual-based evolutionary model, in which a population of genetically diverse organisms compete with each other for limited resources. Through theoretical analysis and stochastic simulations, we show that the model exhibits a pattern-forming instability which is highly amplified by the effects of demographic noise, leading to the spontaneous formation of genotypic clusters. This mechanism supports the thesis that stochasticity has a central role in the formation and coherence of species.
[ { "created": "Fri, 6 Jul 2012 13:03:53 GMT", "version": "v1" }, { "created": "Thu, 1 Nov 2012 10:29:10 GMT", "version": "v2" } ]
2012-11-02
[ [ "Rogers", "Tim", "" ], [ "McKane", "Alan J.", "" ], [ "Rossberg", "Axel G.", "" ] ]
We introduce and analyse an individual-based evolutionary model, in which a population of genetically diverse organisms compete with each other for limited resources. Through theoretical analysis and stochastic simulations, we show that the model exhibits a pattern-forming instability which is highly amplified by the effects of demographic noise, leading to the spontaneous formation of genotypic clusters. This mechanism supports the thesis that stochasticity has a central role in the formation and coherence of species.
2012.06249
Gabriel Maciel
Gabriel Andreguetto Maciel, Ricardo Martinez-Garcia
Enhanced species coexistence in Lotka-Volterra competition models due to nonlocal interactions
null
null
null
null
q-bio.PE nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce and analyze a spatial Lotka-Volterra competition model with local and nonlocal interactions. We study two alternative classes of nonlocal competition that differ in how each species' characteristics determine the range of the nonlocal interactions. In both cases, nonlocal interactions can create spatial patterns of population densities in which highly populated clumps alternate with unpopulated regions. These unpopulated regions provide spatial niches in which a weaker competitor can establish itself in the community and persist under conditions in which local models predict competitive exclusion. Moreover, depending on the balance between local and nonlocal competition intensity, the clumps of the weaker competitor vary from M-like structures, with higher densities of individuals accumulating at the edges of each clump, to triangular structures with most individuals occupying their centers. These results suggest that long-range competition, through the creation of spatial patterns in population densities, might be an important driving force behind the rich diversity of species observed in real ecological communities.
[ { "created": "Fri, 11 Dec 2020 11:24:19 GMT", "version": "v1" }, { "created": "Tue, 13 Jul 2021 16:13:46 GMT", "version": "v2" } ]
2021-07-14
[ [ "Maciel", "Gabriel Andreguetto", "" ], [ "Martinez-Garcia", "Ricardo", "" ] ]
We introduce and analyze a spatial Lotka-Volterra competition model with local and nonlocal interactions. We study two alternative classes of nonlocal competition that differ in how each species' characteristics determine the range of the nonlocal interactions. In both cases, nonlocal interactions can create spatial patterns of population densities in which highly populated clumps alternate with unpopulated regions. These unpopulated regions provide spatial niches in which a weaker competitor can establish itself in the community and persist under conditions in which local models predict competitive exclusion. Moreover, depending on the balance between local and nonlocal competition intensity, the clumps of the weaker competitor vary from M-like structures, with higher densities of individuals accumulating at the edges of each clump, to triangular structures with most individuals occupying their centers. These results suggest that long-range competition, through the creation of spatial patterns in population densities, might be an important driving force behind the rich diversity of species observed in real ecological communities.
q-bio/0407007
Tomomi Tao
Tomomi Tao, Hiroyuki Nakagawa, Masato Yamasaki and Hiraku Nishimori
Flexible Foraging of Ants under Unsteadily Varying Environment
16 pages, 11 figures, uses JPSJ macro
Final version in Journal of the Physical Society of Japan, Vol.73 No.8(2004)
10.1143/JPSJ.73.2333
null
q-bio.PE
null
Using a simple model of ant trail formation, we study the relation between i) the feeding schedule, which represents the unsteady natural environment, ii) the emerging patterns of trails connecting a nest with food resources, and iii) the foraging efficiency. Simulations and a simple analysis show that the emergent trail pattern varies flexibly depending on the feeding schedule, allowing ants to forage efficiently in the underlying unsteady environment.
[ { "created": "Mon, 5 Jul 2004 13:58:48 GMT", "version": "v1" } ]
2009-11-10
[ [ "Tao", "Tomomi", "" ], [ "Nakagawa", "Hiroyuki", "" ], [ "Yamasaki", "Masato", "" ], [ "Nishimori", "Hiraku", "" ] ]
Using a simple model of ant trail formation, we study the relation between i) the feeding schedule, which represents the unsteady natural environment, ii) the emerging patterns of trails connecting a nest with food resources, and iii) the foraging efficiency. Simulations and a simple analysis show that the emergent trail pattern varies flexibly depending on the feeding schedule, allowing ants to forage efficiently in the underlying unsteady environment.
1203.0868
Yu Wu
Xuejuan Zhang, Jianfeng Feng
Computational modeling of neuronal networks
16 pages
null
null
null
q-bio.NC q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human brain contains about 10 billion neurons, each of which has about 10 to 10,000 nerve endings from which neurotransmitters are released in response to incoming spikes; the released neurotransmitters then bind to receptors located on the postsynaptic neurons. Individually, however, neurons are noisy and synaptic release is in general unreliable. Yet groups of neurons arranged in specialized modules can collectively perform complex information-processing tasks robustly and reliably. How functional groups of neurons perform behaviour-related tasks relies crucially on a coherent organization of dynamics, from membrane ionic kinetics to the synaptic coupling of the network and the dynamics of rhythmic oscillations that are tightly linked to behavioural state. To capture the essential features of the biological system at multiple spatio-temporal scales, it is important to construct a suitable computational model that is closely or solely based on experimental data. Depending on what one wants to understand, these models can be either biologically realistic descriptions with thousands of coupled differential equations (Hodgkin-Huxley type) or greatly simplified caricatures (integrate-and-fire type), which are useful for studying large interconnected networks.
[ { "created": "Mon, 5 Mar 2012 11:29:15 GMT", "version": "v1" } ]
2012-03-06
[ [ "Zhang", "Xuejuan", "" ], [ "Feng", "Jianfeng", "" ] ]
The human brain contains about 10 billion neurons, each of which has about 10 to 10,000 nerve endings from which neurotransmitters are released in response to incoming spikes; the released neurotransmitters then bind to receptors located on the postsynaptic neurons. Individually, however, neurons are noisy and synaptic release is in general unreliable. Yet groups of neurons arranged in specialized modules can collectively perform complex information-processing tasks robustly and reliably. How functional groups of neurons perform behaviour-related tasks relies crucially on a coherent organization of dynamics, from membrane ionic kinetics to the synaptic coupling of the network and the dynamics of rhythmic oscillations that are tightly linked to behavioural state. To capture the essential features of the biological system at multiple spatio-temporal scales, it is important to construct a suitable computational model that is closely or solely based on experimental data. Depending on what one wants to understand, these models can be either biologically realistic descriptions with thousands of coupled differential equations (Hodgkin-Huxley type) or greatly simplified caricatures (integrate-and-fire type), which are useful for studying large interconnected networks.
1309.1742
Laurent Pujo
Nikolai Bessonov (IPME), Ivan Demin, Polina Kurbatova (INRIA Grenoble Rh\^one-Alpes / Institut Camille Jordan), Laurent Pujo (INRIA Grenoble Rh\^one-Alpes / Institut Camille Jordan), Vitaly Volpert (INRIA Grenoble Rh\^one-Alpes / Institut Camille Jordan)
Multi-Agent Systems and Blood Cell Formation
null
Multi-Agent Systems - Modeling, Interactions, Simulations and Case Studies, InTech (Ed.) (2011) 502
10.5772/1936
null
q-bio.TO math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The objective of this chapter is to give an insight into the mathematical modelling of hematopoiesis using multi-agent systems. Several questions then arise: what is hematopoiesis, and why is it interesting to study this problem from a mathematical point of view? Has the multi-agent system approach been the only attempt made so far? What does it bring beyond other techniques? What results have been obtained? What is left to do?
[ { "created": "Thu, 5 Sep 2013 18:42:07 GMT", "version": "v1" } ]
2013-09-09
[ [ "Bessonov", "Nikolai", "", "IPME" ], [ "Demin", "Ivan", "", "INRIA Grenoble\n Rhône-Alpes / Institut Camille Jordan" ], [ "Kurbatova", "Polina", "", "INRIA Grenoble\n Rhône-Alpes / Institut Camille Jordan" ], [ "Pujo", "Laurent", "", "I...
The objective of this chapter is to give an insight into the mathematical modelling of hematopoiesis using multi-agent systems. Several questions then arise: what is hematopoiesis, and why is it interesting to study this problem from a mathematical point of view? Has the multi-agent system approach been the only attempt made so far? What does it bring beyond other techniques? What results have been obtained? What is left to do?
1311.2568
Bruce Ayati
Xiayi Wang and Bruce P. Ayati and Marc J. Brouillete and Jason M. Graham and Prem S. Ramakrishnan and James A. Martin
Modeling and Simulation of the Effects of Cyclic Loading on Articular Cartilage Lesion Formation
null
null
10.1002/cnm.2636
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a model of articular cartilage lesion formation to simulate the effects of cyclic loading. This model extends and modifies the reaction-diffusion-delay model of Graham et al. 2012 for the spread of a lesion formed through a single traumatic event. Our model represents the effects of loading implicitly, through a cyclic sink term in the equations for live cells. Our model forms the basis for in silico studies of cartilage damage relevant to questions in osteoarthritis, for example, that may not be easily answered through in vivo or in vitro studies. Computational results are presented that indicate the impact of differing levels of EPO on articular cartilage lesion abatement.
[ { "created": "Mon, 11 Nov 2013 20:32:13 GMT", "version": "v1" } ]
2023-02-14
[ [ "Wang", "Xiayi", "" ], [ "Ayati", "Bruce P.", "" ], [ "Brouillete", "Marc J.", "" ], [ "Graham", "Jason M.", "" ], [ "Ramakrishnan", "Prem S.", "" ], [ "Martin", "James A.", "" ] ]
We present a model of articular cartilage lesion formation to simulate the effects of cyclic loading. This model extends and modifies the reaction-diffusion-delay model of Graham et al. 2012 for the spread of a lesion formed through a single traumatic event. Our model represents the effects of loading implicitly, through a cyclic sink term in the equations for live cells. Our model forms the basis for in silico studies of cartilage damage relevant to questions in osteoarthritis, for example, that may not be easily answered through in vivo or in vitro studies. Computational results are presented that indicate the impact of differing levels of EPO on articular cartilage lesion abatement.
1702.08506
Anne Kandler
James P. O'Dwyer and Anne Kandler
Inferring processes of cultural transmission: the critical role of rare variants in distinguishing neutrality from novelty biases
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neutral evolution assumes that there are no selective forces distinguishing different variants in a population. Despite this striking assumption, many recent studies have sought to assess whether neutrality can provide a good description of different episodes of cultural change. One approach has been to test whether neutral predictions are consistent with observed progeny distributions, recording the number of variants that have produced a given number of new instances within a specified time interval: a classic example is the distribution of baby names. Using an overlapping-generations model we show that these distributions consist of two phases: a power-law phase with a constant exponent of -3/2, followed by an exponential cut-off for variants with very large numbers of progeny. Maximum likelihood estimates of the model parameters provide a direct way to establish whether observed empirical patterns are consistent with neutral evolution. We apply our approach to a complete data set of baby names from Australia. Crucially, we show that analyses based on only the most popular variants, as is often the case in studies of cultural evolution, can provide misleading evidence for underlying transmission hypotheses. While neutrality provides a plausible description of the progeny distributions of abundant variants, rare variants deviate from neutrality. Further, we develop a simulation framework that allows for the detection of alternative cultural transmission processes. We show that an anti-novelty bias is able to replicate the complete progeny distribution of the Australian data set.
[ { "created": "Mon, 27 Feb 2017 20:08:48 GMT", "version": "v1" }, { "created": "Tue, 25 Apr 2017 14:03:20 GMT", "version": "v2" } ]
2017-04-26
[ [ "O'Dwyer", "James P.", "" ], [ "Kandler", "Anne", "" ] ]
Neutral evolution assumes that there are no selective forces distinguishing different variants in a population. Despite this striking assumption, many recent studies have sought to assess whether neutrality can provide a good description of different episodes of cultural change. One approach has been to test whether neutral predictions are consistent with observed progeny distributions, recording the number of variants that have produced a given number of new instances within a specified time interval: a classic example is the distribution of baby names. Using an overlapping-generations model we show that these distributions consist of two phases: a power-law phase with a constant exponent of -3/2, followed by an exponential cut-off for variants with very large numbers of progeny. Maximum likelihood estimates of the model parameters provide a direct way to establish whether observed empirical patterns are consistent with neutral evolution. We apply our approach to a complete data set of baby names from Australia. Crucially, we show that analyses based on only the most popular variants, as is often the case in studies of cultural evolution, can provide misleading evidence for underlying transmission hypotheses. While neutrality provides a plausible description of the progeny distributions of abundant variants, rare variants deviate from neutrality. Further, we develop a simulation framework that allows for the detection of alternative cultural transmission processes. We show that an anti-novelty bias is able to replicate the complete progeny distribution of the Australian data set.
2007.12644
Ian Fellows
Ian E. Fellows, Rachel B. Slayton and Avi J. Hakim
The COVID-19 Pandemic, Community Mobility and the Effectiveness of Non-pharmaceutical Interventions: The United States of America, February to May 2020
null
null
null
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: The impact of individual non-pharmaceutical interventions (NPIs), such as state-wide stay-at-home orders, school closures, and gathering size limitations, on the COVID-19 epidemic is unknown. Understanding the impact that the above-listed NPIs have on disease transmission is critical for policy makers, particularly as case counts increase again in some areas. Methods: Using a Bayesian framework, we reconstructed the incidence and time-varying reproductive number (Rt) curves to investigate the relationship between Rt, individual mobility as measured by Google Community Mobility Reports, and NPIs. Results: We found a strong relationship between the reproductive number and mobility, with each 10% drop in mobility associated with an expected 10.2% reduction in Rt compared to baseline. The effects of limitations on the size of gatherings, school and business closures, and stay-at-home orders were dominated by the trend over time, which was associated with a 48% decrease in the reproductive number after adjusting for the NPIs. Conclusions: We found that the decrease in mobility over time may be due to individuals changing their behavior in response to perceived risk or external factors.
[ { "created": "Thu, 9 Jul 2020 21:18:44 GMT", "version": "v1" } ]
2020-07-27
[ [ "Fellows", "Ian E.", "" ], [ "Slayton", "Rachel B.", "" ], [ "Hakim", "Avi J.", "" ] ]
Background: The impact of individual non-pharmaceutical interventions (NPIs), such as state-wide stay-at-home orders, school closures, and gathering size limitations, on the COVID-19 epidemic is unknown. Understanding the impact that the above-listed NPIs have on disease transmission is critical for policy makers, particularly as case counts increase again in some areas. Methods: Using a Bayesian framework, we reconstructed the incidence and time-varying reproductive number (Rt) curves to investigate the relationship between Rt, individual mobility as measured by Google Community Mobility Reports, and NPIs. Results: We found a strong relationship between the reproductive number and mobility, with each 10% drop in mobility associated with an expected 10.2% reduction in Rt compared to baseline. The effects of limitations on the size of gatherings, school and business closures, and stay-at-home orders were dominated by the trend over time, which was associated with a 48% decrease in the reproductive number after adjusting for the NPIs. Conclusions: We found that the decrease in mobility over time may be due to individuals changing their behavior in response to perceived risk or external factors.
2005.01375
Gopal Krishna Padhy
Gopal Krishna Padhy, Jagadeesh Panda, Ajaya Kumar Behera
Synthesis and characterization of novel benzimidazole embedded 1,3,5-trisubstituted pyrazolines as antimicrobial agents
9 Pages,2 figures, 1 table
J. Med. Chem. 51 (2008) 5243, Pharma Chem. 8 (2016) 425, RSC Adv. 6 (2016) 8303, Int. J. Mol. Sci. 13 (2012)16472, J. Chem. Res. 40 (2016) 228
10.2298/JSC160604089P
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Efficient syntheses of some new substituted pyrazoline derivatives linked to a substituted benzimidazole scaffold were performed via multistep reaction sequences. All the synthesized compounds were characterized using elemental analysis and spectral studies (IR, 1D/2D NMR techniques, and mass spectrometry). The synthesized compounds were screened for antimicrobial activity against selected Gram-positive and Gram-negative bacteria and a fungal strain. The compounds with a halo-substituted phenyl group at C5 of the 1-phenyl pyrazoline ring (15, 16 and 17) showed significant antibacterial activity. Among the screened compounds, 17 showed the most potent inhibitory activity (MIC = 64 {\mu}g mL-1) against a bacterial strain. The tested compounds were found to be almost inactive against the fungal strain C. albicans, apart from pyrazoline-1-carbothiomide 21, which was moderately active.
[ { "created": "Mon, 4 May 2020 10:40:29 GMT", "version": "v1" } ]
2020-05-05
[ [ "Padhy", "Gopal Krishna", "" ], [ "Panda", "Jagadeesh", "" ], [ "Behera", "Ajaya Kumar", "" ] ]
Efficient syntheses of some new substituted pyrazoline derivatives linked to a substituted benzimidazole scaffold were performed via multistep reaction sequences. All the synthesized compounds were characterized using elemental analysis and spectral studies (IR, 1D/2D NMR techniques, and mass spectrometry). The synthesized compounds were screened for antimicrobial activity against selected Gram-positive and Gram-negative bacteria and a fungal strain. The compounds with a halo-substituted phenyl group at C5 of the 1-phenyl pyrazoline ring (15, 16 and 17) showed significant antibacterial activity. Among the screened compounds, 17 showed the most potent inhibitory activity (MIC = 64 {\mu}g mL-1) against a bacterial strain. The tested compounds were found to be almost inactive against the fungal strain C. albicans, apart from pyrazoline-1-carbothiomide 21, which was moderately active.
1208.3619
Tin Nguyen
Tin Chi Nguyen, Nan Deng, Dongxiao Zhu
SASeq: A Selective and Adaptive Shrinkage Approach to Detect and Quantify Active Transcripts using RNA-Seq
The GUI software suite is freely available from http://sammate.sourceforge.net; Contact: tin.nguyenchi@wayne.edu, dzhu@wayne.edu
null
null
null
q-bio.QM cs.CE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identification and quantification of condition-specific transcripts using RNA-Seq is vital in transcriptomics research. While initial efforts using mathematical or statistical modeling of read counts or per-base exonic signal have been successful, they may suffer from model overfitting, since not all the reference transcripts in a database are expressed under a specific biological condition. Standard shrinkage approaches, such as Lasso, shrink all the transcript abundances to zero in a non-discriminative manner, and thus do not necessarily yield the set of condition-specific transcripts. Informed shrinkage approaches, using the observed exonic coverage signal, are thus desirable. Motivated by the ubiquitous uncovered exonic regions in RNA-Seq data, termed "naked exons", we propose a new computational approach that first filters out the reference transcripts not supported by splicing and paired-end reads, and then fits a new mathematical model of the per-base exonic coverage signal and the underlying transcript structure. We introduce a tuning parameter to penalize the specific regions of the selected transcripts that are not supported by the naked exons. Our approach compares favorably with the selected competing methods in terms of both time complexity and accuracy, using simulated and real-world data. Our method is implemented in SAMMate, a GUI software suite freely available from http://sammate.sourceforge.net
[ { "created": "Fri, 17 Aug 2012 15:37:58 GMT", "version": "v1" }, { "created": "Sat, 23 Feb 2013 00:02:29 GMT", "version": "v2" } ]
2013-02-26
[ [ "Nguyen", "Tin Chi", "" ], [ "Deng", "Nan", "" ], [ "Zhu", "Dongxiao", "" ] ]
Identification and quantification of condition-specific transcripts using RNA-Seq is vital in transcriptomics research. While initial efforts using mathematical or statistical modeling of read counts or per-base exonic signal have been successful, they may suffer from model overfitting, since not all the reference transcripts in a database are expressed under a specific biological condition. Standard shrinkage approaches, such as Lasso, shrink all the transcript abundances to zero in a non-discriminative manner, and thus do not necessarily yield the set of condition-specific transcripts. Informed shrinkage approaches, using the observed exonic coverage signal, are thus desirable. Motivated by the ubiquitous uncovered exonic regions in RNA-Seq data, termed "naked exons", we propose a new computational approach that first filters out the reference transcripts not supported by splicing and paired-end reads, and then fits a new mathematical model of the per-base exonic coverage signal and the underlying transcript structure. We introduce a tuning parameter to penalize the specific regions of the selected transcripts that are not supported by the naked exons. Our approach compares favorably with the selected competing methods in terms of both time complexity and accuracy, using simulated and real-world data. Our method is implemented in SAMMate, a GUI software suite freely available from http://sammate.sourceforge.net
1712.09553
Esben Jannik Bjerrum
Esben Jannik Bjerrum
DeepIEP: a Peptide Sequence Model of Isoelectric Point (IEP/pI) using Recurrent Neural Networks (RNNs)
null
null
null
null
q-bio.BM cs.LG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The isoelectric point (IEP or pI) is the pH where the net charge on the molecular ensemble of peptides and proteins is zero. This physicochemical property depends on protonatable/deprotonatable side chains and their pKa values. Here a pI prediction model is trained from a database of peptide sequences and pIs using a recurrent neural network (RNN) with long short-term memory (LSTM) cells. The trained model obtains an RMSE of 0.28 and an R$^2$ of 0.95 on the external test set. The model is not based on pKa values, but predictions for constructed test sequences show rankings similar to the already known pKa values. The prediction depends mostly on the presence of known acidic and basic amino acids, with fine adjustments based on the neighboring sequence and the position of the charged amino acids in the peptide chain.
[ { "created": "Wed, 27 Dec 2017 11:30:02 GMT", "version": "v1" } ]
2017-12-29
[ [ "Bjerrum", "Esben Jannik", "" ] ]
The isoelectric point (IEP or pI) is the pH where the net charge on the molecular ensemble of peptides and proteins is zero. This physicochemical property depends on protonatable/deprotonatable side chains and their pKa values. Here a pI prediction model is trained from a database of peptide sequences and pIs using a recurrent neural network (RNN) with long short-term memory (LSTM) cells. The trained model obtains an RMSE of 0.28 and an R$^2$ of 0.95 on the external test set. The model is not based on pKa values, but predictions for constructed test sequences show rankings similar to the already known pKa values. The prediction depends mostly on the presence of known acidic and basic amino acids, with fine adjustments based on the neighboring sequence and the position of the charged amino acids in the peptide chain.
1801.07807
Ishanu Chattopadhyay
Jaideep Dhanoa, Balaji Manicassamy and Ishanu Chattopadhyay
Algorithmic Bio-surveillance For Precise Spatio-temporal Prediction of Zoonotic Emergence
8 pages, 5 figures
null
null
null
q-bio.PE cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Viral zoonoses have emerged as the key drivers of recent pandemics. Human infections by zoonotic viruses are either spillover events -- isolated infections that fail to cause a widespread contagion -- or species jumps, where successful adaptation to the new host leads to a pandemic. Despite expensive bio-surveillance efforts, emergence response has historically been reactive and post hoc. Here we use machine inference to demonstrate a high-accuracy predictive bio-surveillance capability, designed to pro-actively localize an impending species jump via automated interrogation of massive sequence databases of viral proteins. Our results suggest that a jump might not purely be the result of an isolated unfortunate cross-infection localized in space and time; there are subtle yet detectable patterns of genotypic changes accumulating in the global viral population leading up to emergence. Using tens of thousands of protein sequences simultaneously, we train models that track the maximum achievable accuracy for disambiguating host tropism from the primary structure of surface proteins, and show that the inverse classification accuracy is a quantitative indicator of jump risk. We validate our claim in the context of the 2009 swine flu outbreak and the 2004 emergence of the H5N1 subspecies of Influenza A from avian reservoirs, illustrating that interrogation of the global viral population can unambiguously track a near-monotonic risk elevation over the several years preceding eventual emergence.
[ { "created": "Tue, 23 Jan 2018 23:27:31 GMT", "version": "v1" } ]
2018-01-25
[ [ "Dhanoa", "Jaideep", "" ], [ "Manicassamy", "Balaji", "" ], [ "Chattopadhyay", "Ishanu", "" ] ]
Viral zoonoses have emerged as the key drivers of recent pandemics. Human infections by zoonotic viruses are either spillover events -- isolated infections that fail to cause a widespread contagion -- or species jumps, where successful adaptation to the new host leads to a pandemic. Despite expensive bio-surveillance efforts, emergence response has historically been reactive and post hoc. Here we use machine inference to demonstrate a high-accuracy predictive bio-surveillance capability, designed to pro-actively localize an impending species jump via automated interrogation of massive sequence databases of viral proteins. Our results suggest that a jump might not purely be the result of an isolated unfortunate cross-infection localized in space and time; there are subtle yet detectable patterns of genotypic changes accumulating in the global viral population leading up to emergence. Using tens of thousands of protein sequences simultaneously, we train models that track the maximum achievable accuracy for disambiguating host tropism from the primary structure of surface proteins, and show that the inverse classification accuracy is a quantitative indicator of jump risk. We validate our claim in the context of the 2009 swine flu outbreak and the 2004 emergence of the H5N1 subspecies of Influenza A from avian reservoirs, illustrating that interrogation of the global viral population can unambiguously track a near-monotonic risk elevation over the several years preceding eventual emergence.
2205.00084
Ingoo Lee
Ingoo Lee and Hojung Nam
Infusing Linguistic Knowledge of SMILES into Chemical Language Models
8 pages, 4 figures
null
null
null
q-bio.QM cs.AI cs.LG
http://creativecommons.org/publicdomain/zero/1.0/
The simplified molecular-input line-entry system (SMILES) is the most popular representation of chemical compounds, and many SMILES-based molecular property prediction models have been developed. In particular, transformer-based models show promising performance because they utilize massive chemical datasets for self-supervised learning. However, no transformer-based model yet overcomes the inherent limitations of SMILES, which result from the way SMILES strings are generated. In this study, we grammatically parsed SMILES to obtain the connectivity between substructures and their types, which we call the grammatical knowledge of SMILES. First, we pretrained transformers with substructural tokens parsed from SMILES. Then, we used the training strategy 'same compound model' to better understand SMILES grammar. In addition, we injected knowledge of connectivity and type into the transformer with knowledge adapters. As a result, our representation model outperformed previous compound representations in the prediction of molecular properties. Finally, we analyzed the attention of the transformer model and the adapters, demonstrating that the proposed model understands the grammar of SMILES.
[ { "created": "Wed, 20 Apr 2022 01:25:18 GMT", "version": "v1" } ]
2022-05-03
[ [ "Lee", "Ingoo", "" ], [ "Nam", "Hojung", "" ] ]
The simplified molecular-input line-entry system (SMILES) is the most popular representation of chemical compounds, and many SMILES-based molecular property prediction models have been developed. In particular, transformer-based models show promising performance because they utilize massive chemical datasets for self-supervised learning. However, no transformer-based model yet overcomes the inherent limitations of SMILES, which result from the way SMILES strings are generated. In this study, we grammatically parsed SMILES to obtain the connectivity between substructures and their types, which we call the grammatical knowledge of SMILES. First, we pretrained transformers with substructural tokens parsed from SMILES. Then, we used the training strategy 'same compound model' to better understand SMILES grammar. In addition, we injected knowledge of connectivity and type into the transformer with knowledge adapters. As a result, our representation model outperformed previous compound representations in the prediction of molecular properties. Finally, we analyzed the attention of the transformer model and the adapters, demonstrating that the proposed model understands the grammar of SMILES.
1708.04024
Etienne Thevenot
Yann Guitton (LABERCA, Institut de Recherche en G\'enie Civil et M\'ecanique), Marie Tremblay-Franco (ToxAlim), Gildas Le Corguill\'e, Jean-Fran\c{c}ois Martin (ToxAlim), M\'elanie P\'et\'era (UNH), Pierrick Roger-Mele (LIST/LADIS), Alexis Delabri\`ere (LIST/LADIS), Sophie Goulitquer, Misharl Monsoor, Christophe Duperier (UNH), C\'ecile Canlet (ToxAlim), Remi Servien (ToxAlim), Patrick Tardivel (ToxAlim), Christophe Caron, Franck Giacomoni (UNH), Etienne Th\'evenot (LIST/LADIS)
Create, run, share, publish, and reference your LC-MS, FIA-MS, GC-MS, and NMR data analysis workflows with the Workflow4Metabolomics 3.0 Galaxy online infrastructure for metabolomics
The International Journal of Biochemistry \& Cell Biology, Elsevier, 2017
null
10.1016/j.biocel.2017.07.002
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metabolomics is a key approach in modern functional genomics and systems biology. Due to the complexity of metabolomics data, the variety of experimental designs, and the variety of existing bioinformatics tools, providing experimenters with a simple and efficient resource to conduct comprehensive and rigorous analysis of their data is of utmost importance. In 2014, we launched the Workflow4Metabolomics (W4M, http://workflow4metabolomics.org) online infrastructure for metabolomics built on the Galaxy environment, which offers user-friendly features to build and run data analysis workflows including preprocessing, statistical analysis, and annotation steps. Here we present the new W4M 3.0 release, which contains twice as many tools as the first version, and provides two features which are, to our knowledge, unique among online resources. First, data from the four major metabolomics technologies (i.e., LC-MS, FIA-MS, GC-MS, and NMR) can be analyzed on a single platform. By using three studies in human physiology, algal evolution, and animal toxicology, we demonstrate how the 40 available tools can be easily combined to address biological issues. Second, the full analysis (including the workflow, the parameter values, the input data and output results) can be referenced with a permanent digital object identifier (DOI). Publication of data analyses is of major importance for robust and reproducible science. Furthermore, the publicly shared workflows are of high value for e-learning and training. The Workflow4Metabolomics 3.0 e-infrastructure thus not only offers a unique online environment for analysis of data from the main metabolomics technologies, but it is also the first reference repository for metabolomics workflows.
[ { "created": "Mon, 14 Aug 2017 07:36:23 GMT", "version": "v1" } ]
2017-08-15
[ [ "Guitton", "Yann", "", "LABERCA, Institut de Recherche en Génie Civil et\n Mécanique" ], [ "Tremblay-Franco", "Marie", "", "ToxAlim" ], [ "Corguillé", "Gildas Le", "", "ToxAlim" ], [ "Martin", "Jean-François", "", "ToxAlim" ], [ ...
Metabolomics is a key approach in modern functional genomics and systems biology. Due to the complexity of metabolomics data, the variety of experimental designs, and the variety of existing bioinformatics tools, providing experimenters with a simple and efficient resource to conduct comprehensive and rigorous analysis of their data is of utmost importance. In 2014, we launched the Workflow4Metabolomics (W4M, http://workflow4metabolomics.org) online infrastructure for metabolomics built on the Galaxy environment, which offers user-friendly features to build and run data analysis workflows including preprocessing, statistical analysis, and annotation steps. Here we present the new W4M 3.0 release, which contains twice as many tools as the first version, and provides two features which are, to our knowledge, unique among online resources. First, data from the four major metabolomics technologies (i.e., LC-MS, FIA-MS, GC-MS, and NMR) can be analyzed on a single platform. By using three studies in human physiology, algal evolution, and animal toxicology, we demonstrate how the 40 available tools can be easily combined to address biological issues. Second, the full analysis (including the workflow, the parameter values, the input data and output results) can be referenced with a permanent digital object identifier (DOI). Publication of data analyses is of major importance for robust and reproducible science. Furthermore, the publicly shared workflows are of high value for e-learning and training. The Workflow4Metabolomics 3.0 e-infrastructure thus not only offers a unique online environment for analysis of data from the main metabolomics technologies, but it is also the first reference repository for metabolomics workflows.
1909.09004
Kaitlyn Phillipson
Sarah Ayman Goldrup and Kaitlyn Phillipson
Classification of Open and Closed Convex Codes on Five Neurons
null
null
null
null
q-bio.NC math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural codes, represented as collections of binary strings, encode neural activity and show relationships among stimuli. Certain neurons, called place cells, have been shown experimentally to fire in convex regions in space. A natural question to ask is: Which neural codes can arise as intersection patterns of convex sets? While past research has established several criteria, complete conditions for convexity are not yet known for codes with more than four neurons. We classify all neural codes with five neurons as convex/non-convex codes. Furthermore, we investigate which of these codes can be represented by open versus closed convex sets. Interestingly, we find a code which is an open but not closed convex code and demonstrate a minimal example for this phenomenon.
[ { "created": "Thu, 19 Sep 2019 14:02:38 GMT", "version": "v1" } ]
2019-09-20
[ [ "Goldrup", "Sarah Ayman", "" ], [ "Phillipson", "Kaitlyn", "" ] ]
Neural codes, represented as collections of binary strings, encode neural activity and show relationships among stimuli. Certain neurons, called place cells, have been shown experimentally to fire in convex regions in space. A natural question to ask is: Which neural codes can arise as intersection patterns of convex sets? While past research has established several criteria, complete conditions for convexity are not yet known for codes with more than four neurons. We classify all neural codes with five neurons as convex/non-convex codes. Furthermore, we investigate which of these codes can be represented by open versus closed convex sets. Interestingly, we find a code which is an open but not closed convex code and demonstrate a minimal example for this phenomenon.
2011.02000
Pritam Sarkar
Pritam Sarkar, Silvia Lobmaier, Bibiana Fabre, Diego Gonz\'alez, Alexander Mueller, Martin G. Frasch, Marta C. Antonelli, Ali Etemad
Detection of Maternal and Fetal Stress from the Electrocardiogram with Self-Supervised Representation Learning
ClinicalTrials.gov registration number: NCT03389178. Code repo: https://code.engineering.queensu.ca/17ps21/ssl-ecg-v2
Scientific Reports, December 2021
10.1038/s41598-021-03376-8
null
q-bio.QM eess.SP
http://creativecommons.org/licenses/by-nc-sa/4.0/
In the pregnant mother and her fetus, chronic prenatal stress results in entrainment of the fetal heartbeat by the maternal heartbeat, quantified by the fetal stress index (FSI). Deep learning (DL) is capable of pattern detection in complex medical data with high accuracy in noisy real-life environments, but little is known about DL's utility in non-invasive biometric monitoring during pregnancy. A recently established self-supervised learning (SSL) approach to DL provides emotional recognition from electrocardiogram (ECG). We hypothesized that SSL will identify chronically stressed mother-fetus dyads from the raw maternal abdominal electrocardiograms (aECG), containing fetal and maternal ECG. Chronically stressed mothers and controls matched at enrolment at 32 weeks of gestation were studied. We validated the chronic stress exposure by psychological inventory, maternal hair cortisol and FSI. We tested two variants of SSL architecture, one trained on the generic ECG features for emotional recognition obtained from public datasets and another transfer-learned on a subset of our data. Our DL models accurately detect the chronic stress exposure group (AUROC=0.982+/-0.002), the individual psychological stress score (R2=0.943+/-0.009) and FSI at 34 weeks of gestation (R2=0.946+/-0.013), as well as the maternal hair cortisol at birth reflecting chronic stress exposure (0.931+/-0.006). The best performance was achieved with the DL model trained on the public dataset and using maternal ECG alone. The present DL approach provides a novel source of physiological insights into complex multi-modal relationships between different regulatory systems exposed to chronic stress. The final DL model can be deployed in low-cost regular ECG biosensors as a simple, ubiquitous early stress detection and monitoring tool during pregnancy. This discovery should enable early behavioral interventions.
[ { "created": "Tue, 3 Nov 2020 20:41:59 GMT", "version": "v1" }, { "created": "Thu, 19 Nov 2020 02:53:33 GMT", "version": "v2" }, { "created": "Mon, 7 Dec 2020 02:18:14 GMT", "version": "v3" }, { "created": "Tue, 29 Dec 2020 19:57:30 GMT", "version": "v4" }, { "cre...
2021-12-20
[ [ "Sarkar", "Pritam", "" ], [ "Lobmaier", "Silvia", "" ], [ "Fabre", "Bibiana", "" ], [ "González", "Diego", "" ], [ "Mueller", "Alexander", "" ], [ "Frasch", "Martin G.", "" ], [ "Antonelli", "Marta C.", "" ...
In the pregnant mother and her fetus, chronic prenatal stress results in entrainment of the fetal heartbeat by the maternal heartbeat, quantified by the fetal stress index (FSI). Deep learning (DL) is capable of pattern detection in complex medical data with high accuracy in noisy real-life environments, but little is known about DL's utility in non-invasive biometric monitoring during pregnancy. A recently established self-supervised learning (SSL) approach to DL provides emotional recognition from electrocardiogram (ECG). We hypothesized that SSL will identify chronically stressed mother-fetus dyads from the raw maternal abdominal electrocardiograms (aECG), containing fetal and maternal ECG. Chronically stressed mothers and controls matched at enrolment at 32 weeks of gestation were studied. We validated the chronic stress exposure by psychological inventory, maternal hair cortisol and FSI. We tested two variants of SSL architecture, one trained on the generic ECG features for emotional recognition obtained from public datasets and another transfer-learned on a subset of our data. Our DL models accurately detect the chronic stress exposure group (AUROC=0.982+/-0.002), the individual psychological stress score (R2=0.943+/-0.009) and FSI at 34 weeks of gestation (R2=0.946+/-0.013), as well as the maternal hair cortisol at birth reflecting chronic stress exposure (0.931+/-0.006). The best performance was achieved with the DL model trained on the public dataset and using maternal ECG alone. The present DL approach provides a novel source of physiological insights into complex multi-modal relationships between different regulatory systems exposed to chronic stress. The final DL model can be deployed in low-cost regular ECG biosensors as a simple, ubiquitous early stress detection and monitoring tool during pregnancy. This discovery should enable early behavioral interventions.
2107.01246
Gabriel Hassler
Gabriel W. Hassler, Brigida Gallone, Leandro Aristide, William L. Allen, Max R. Tolkoff, Andrew J. Holbrook, Guy Baele, Philippe Lemey and Marc A. Suchard
Principled, practical, flexible, fast: a new approach to phylogenetic factor analysis
27 pages, 7 figures, 1 table
null
null
null
q-bio.PE stat.AP stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological phenotypes are products of complex evolutionary processes in which selective forces influence multiple biological trait measurements in unknown ways. Phylogenetic factor analysis disentangles these relationships across the evolutionary history of a group of organisms. Scientists seeking to employ this modeling framework confront numerous modeling and implementation decisions, the details of which pose computational and replicability challenges. General and impactful community employment requires a data scientific analysis plan that balances flexibility, speed and ease of use, while minimizing model and algorithm tuning. Even in the presence of non-trivial phylogenetic model constraints, we show that one may analytically address latent factor uncertainty in a way that (a) aids model flexibility, (b) accelerates computation (by as much as 500-fold) and (c) decreases required tuning. We further present practical guidance on inference and modeling decisions as well as diagnosing and solving common problems in these analyses. We codify this analysis plan in an automated pipeline that distills the potentially overwhelming array of modeling decisions into a small handful of (typically binary) choices. We demonstrate the utility of these methods and analysis plan in four real-world problems of varying scales.
[ { "created": "Fri, 2 Jul 2021 19:40:45 GMT", "version": "v1" } ]
2021-07-06
[ [ "Hassler", "Gabriel W.", "" ], [ "Gallone", "Brigida", "" ], [ "Aristide", "Leandro", "" ], [ "Allen", "William L.", "" ], [ "Tolkoff", "Max R.", "" ], [ "Holbrook", "Andrew J.", "" ], [ "Baele", "Guy", "" ...
Biological phenotypes are products of complex evolutionary processes in which selective forces influence multiple biological trait measurements in unknown ways. Phylogenetic factor analysis disentangles these relationships across the evolutionary history of a group of organisms. Scientists seeking to employ this modeling framework confront numerous modeling and implementation decisions, the details of which pose computational and replicability challenges. General and impactful community employment requires a data scientific analysis plan that balances flexibility, speed and ease of use, while minimizing model and algorithm tuning. Even in the presence of non-trivial phylogenetic model constraints, we show that one may analytically address latent factor uncertainty in a way that (a) aids model flexibility, (b) accelerates computation (by as much as 500-fold) and (c) decreases required tuning. We further present practical guidance on inference and modeling decisions as well as diagnosing and solving common problems in these analyses. We codify this analysis plan in an automated pipeline that distills the potentially overwhelming array of modeling decisions into a small handful of (typically binary) choices. We demonstrate the utility of these methods and analysis plan in four real-world problems of varying scales.
1403.6869
Jonathan Potts
Jonathan R. Potts, Karl Mokross, Philip C Stouffer, Mark A. Lewis
Step selection techniques uncover the environmental predictors of space use patterns in flocks of Amazonian birds
null
Ecology and Evolution (2014) 4(24):4578-4588
10.1002/ece3.1306
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
1. Anthropogenic actions cause rapid ecological changes, meaning that animals have to respond before they have time to adapt. Tools to quantify emergent spatial patterns from animal-habitat interaction mechanisms are vital for predicting the population-level effects of such changes. 2. Environmental perturbations are particularly prevalent in the Amazon rainforest, and have a profound effect on fragmentation-sensitive insectivorous bird flocks. Therefore it is important to be able to predict the effects of such changes on the flocks' space-use patterns. 3. We use a step selection function (SSF) approach to uncover environmental drivers behind movement choices. This is used to construct a mechanistic model, from which we derive predicted utilization distributions (home ranges) of flocks. 4. We show that movement decisions are significantly influenced by canopy height and topography, but not resource depletion and renewal. We quantify the magnitude of these effects and demonstrate that they are helpful for understanding various heterogeneous aspects of space use. We compare our results to recent analytic derivations of space use, demonstrating that they are only accurate when assuming that there is no persistence in the animals' movement. 5. Our model can be translated into other environments or hypothetical scenarios, such as those given by proposed future anthropogenic actions, to make predictions of spatial patterns in bird flocks. Furthermore, our approach is quite general, so could be used to predict the effects of habitat changes on spatial patterns for a wide variety of animal communities.
[ { "created": "Wed, 26 Mar 2014 21:40:16 GMT", "version": "v1" }, { "created": "Tue, 15 Apr 2014 19:15:52 GMT", "version": "v2" }, { "created": "Tue, 13 May 2014 14:32:40 GMT", "version": "v3" } ]
2015-01-06
[ [ "Potts", "Jonathan R.", "" ], [ "Mokross", "Karl", "" ], [ "Stouffer", "Philip C", "" ], [ "Lewis", "Mark A.", "" ] ]
1. Anthropogenic actions cause rapid ecological changes, meaning that animals have to respond before they have time to adapt. Tools to quantify emergent spatial patterns from animal-habitat interaction mechanisms are vital for predicting the population-level effects of such changes. 2. Environmental perturbations are particularly prevalent in the Amazon rainforest, and have a profound effect on fragmentation-sensitive insectivorous bird flocks. Therefore it is important to be able to predict the effects of such changes on the flocks' space-use patterns. 3. We use a step selection function (SSF) approach to uncover environmental drivers behind movement choices. This is used to construct a mechanistic model, from which we derive predicted utilization distributions (home ranges) of flocks. 4. We show that movement decisions are significantly influenced by canopy height and topography, but not resource depletion and renewal. We quantify the magnitude of these effects and demonstrate that they are helpful for understanding various heterogeneous aspects of space use. We compare our results to recent analytic derivations of space use, demonstrating that they are only accurate when assuming that there is no persistence in the animals' movement. 5. Our model can be translated into other environments or hypothetical scenarios, such as those given by proposed future anthropogenic actions, to make predictions of spatial patterns in bird flocks. Furthermore, our approach is quite general, so could be used to predict the effects of habitat changes on spatial patterns for a wide variety of animal communities.
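The record above builds its mechanistic model from a step selection function (SSF), in which the probability of choosing a candidate step is proportional to the exponential of a linear combination of environmental covariates. A minimal sketch of that choice kernel follows; the function name, covariate layout, and coefficient values are illustrative assumptions, not the paper's implementation (which also fits the coefficients, e.g. by conditional logistic regression).

```python
import math

def ssf_probabilities(candidates, beta):
    """Step selection kernel: P(step k) ∝ exp(beta · covariates_k).

    'candidates' is a list of covariate vectors for the possible next steps
    (e.g. [canopy_height, slope]); 'beta' holds the selection coefficients.
    Scores are max-shifted before exponentiation for numerical stability.
    """
    raw = [sum(b * x for b, x in zip(beta, c)) for c in candidates]
    m = max(raw)
    scores = [math.exp(r - m) for r in raw]
    total = sum(scores)
    return [s / total for s in scores]
```

With a hypothetical positive coefficient on canopy height and a negative one on slope, steps toward taller canopy on gentler terrain receive higher probability, mirroring the paper's finding that canopy height and topography drive movement decisions.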
2212.13285
Binxu Wang
Binxu Wang, Carlos R. Ponce
On the Level Sets and Invariance of Neural Tuning Landscapes
24 pages, 13 figures. Published in NeurIPS 2022 Workshop on Symmetry and Geometry in Neural Representations, and PMLR volume 197
null
null
null
q-bio.NC cs.AI cs.CG cs.CV cs.NE
http://creativecommons.org/licenses/by/4.0/
Visual representations can be defined as the activations of neuronal populations in response to images. The activation of a neuron as a function over all image space has been described as a "tuning landscape". As a function over a high-dimensional space, what is the structure of this landscape? In this study, we characterize tuning landscapes through the lens of level sets and Morse theory. A recent study measured the in vivo two-dimensional tuning maps of neurons in different brain regions. Here, we developed a statistically reliable signature for these maps based on the change of topology in level sets. We found this topological signature changed progressively throughout the cortical hierarchy, with similar trends found for units in convolutional neural networks (CNNs). Further, we analyzed the geometry of level sets on the tuning landscapes of CNN units. We advanced the hypothesis that higher-order units can be locally regarded as isotropic radial basis functions, but not globally. This shows the power of level sets as a conceptual tool to understand neuronal activations over image space.
[ { "created": "Mon, 26 Dec 2022 19:53:29 GMT", "version": "v1" } ]
2022-12-29
[ [ "Wang", "Binxu", "" ], [ "Ponce", "Carlos R.", "" ] ]
Visual representations can be defined as the activations of neuronal populations in response to images. The activation of a neuron as a function over all image space has been described as a "tuning landscape". As a function over a high-dimensional space, what is the structure of this landscape? In this study, we characterize tuning landscapes through the lens of level sets and Morse theory. A recent study measured the in vivo two-dimensional tuning maps of neurons in different brain regions. Here, we developed a statistically reliable signature for these maps based on the change of topology in level sets. We found this topological signature changed progressively throughout the cortical hierarchy, with similar trends found for units in convolutional neural networks (CNNs). Further, we analyzed the geometry of level sets on the tuning landscapes of CNN units. We advanced the hypothesis that higher-order units can be locally regarded as isotropic radial basis functions, but not globally. This shows the power of level sets as a conceptual tool to understand neuronal activations over image space.
1304.5898
Kunihiko Goto
Atsushi Suenaga, Takehiko Ogura, Makoto Taiji, Akira Toyama, Hideo Takeuchi, Mingyu Son, Kazuyoshi Takayama, Masatoshi Iwamoto, Ikuro Sato, Jay Z. Yeh, Toshio Narahashi, Haruaki Nakaya, Akihiko Konagaya, Kunihiko Goto
The atomic-level mechanism underlying the functionality of aquaporin-0
11 pages, 7 figures, 1 video (http://www.apph.tohoku-gakuin.ac.jp/iwamoto/video_S1.gif)
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
So far, more than 82,000 protein structures have been reported in the Protein Data Bank, but the driving forces and structures that allow for protein function have not been elucidated at the atomic level for even one protein. We have been able to clarify the inter-subunit hydrophobic interaction driving the electrostatic opening of the pore in aquaporin 0 (AQP0). Aquaporins are membrane channels for water and small non-ionic solutes found in animals, plants, and microbes. The structures of aquaporins have high homology and consist of homotetramers, each monomer of which has one pore serving as a water channel. Each pore has two narrow portions: one is the narrowest constriction region, consisting of aromatic residues and an arginine (ar/R), and the other comprises two asparagine-proline-alanine (NPA) homolog portions. Here we show that an inter-subunit hydrophobic interaction in AQP0 drives a stick portion consisting of four amino acids toward the pore, and the tip of this stick portion, a nitrogen atom, opens the pore: this movement is the swing mechanism (this http URL). The energetics and conformational changes of the amino acids participating in the swing mechanism confirm this view. The swing mechanism, in which inter-subunit hydrophobic interactions in the tetramer drive the on-off switching of the pore, explains why aquaporins consist of tetramers. We report that experimental and molecular dynamics findings using various mutants support this view of the swing mechanism. The finding that mutations of the amino acids in AQP2 corresponding to the stick of the swing mechanism cause severe recessive nephrogenic diabetes insipidus (NDI) demonstrates the critical role of the swing mechanism for aquaporin function. We report for the first time, at the atomic level, that the inter-subunit hydrophobic interaction in aquaporin 0 drives the electrostatic opening of the aquaporin pore.
[ { "created": "Mon, 22 Apr 2013 09:56:15 GMT", "version": "v1" } ]
2013-04-23
[ [ "Suenaga", "Atsushi", "" ], [ "Ogura", "Takehiko", "" ], [ "Taiji1", "Makoto", "" ], [ "Toyama", "Akira", "" ], [ "Takeuchi", "Hideo", "" ], [ "Son", "Mingyu", "" ], [ "Takayama", "Kazuyoshi", "" ], [ ...
So far, more than 82,000 protein structures have been reported in the Protein Data Bank, but the driving forces and structures that allow for protein function have not been elucidated at the atomic level for even one protein. We have been able to clarify the inter-subunit hydrophobic interaction driving the electrostatic opening of the pore in aquaporin 0 (AQP0). Aquaporins are membrane channels for water and small non-ionic solutes found in animals, plants, and microbes. The structures of aquaporins have high homology and consist of homotetramers, each monomer of which has one pore serving as a water channel. Each pore has two narrow portions: one is the narrowest constriction region, consisting of aromatic residues and an arginine (ar/R), and the other comprises two asparagine-proline-alanine (NPA) homolog portions. Here we show that an inter-subunit hydrophobic interaction in AQP0 drives a stick portion consisting of four amino acids toward the pore, and the tip of this stick portion, a nitrogen atom, opens the pore: this movement is the swing mechanism (this http URL). The energetics and conformational changes of the amino acids participating in the swing mechanism confirm this view. The swing mechanism, in which inter-subunit hydrophobic interactions in the tetramer drive the on-off switching of the pore, explains why aquaporins consist of tetramers. We report that experimental and molecular dynamics findings using various mutants support this view of the swing mechanism. The finding that mutations of the amino acids in AQP2 corresponding to the stick of the swing mechanism cause severe recessive nephrogenic diabetes insipidus (NDI) demonstrates the critical role of the swing mechanism for aquaporin function. We report for the first time, at the atomic level, that the inter-subunit hydrophobic interaction in aquaporin 0 drives the electrostatic opening of the aquaporin pore.
2103.05040
Breno Ferraz de Oliveira
D. Bazeia, M. Bongestab and B.F. de Oliveira
Influence of the neighborhood on cyclic models of biodiversity
22 pages, 12 figures, to appear in Physica A
Physica A 587 (2021) 126547
10.1016/j.physa.2021.126547
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
This work deals with the influence of the neighborhood in simple rock-paper-scissors models of biodiversity. We consider the case of three distinct species which evolve under the standard rules of mobility, reproduction and competition. The rule of competition follows the guidance of the rock-paper-scissors game, with the prey being annihilated, leaving an empty site in accordance with the May-Leonard proposal for predator and prey competition. We use the von Neumann neighborhood, but we consider mobility under the presence of the first, second and third neighbors in three distinct environments, one with equal probability and the others with probabilities following power-law and exponential profiles. The results are different, but they all show that enlarging the neighborhood increases the characteristic length of the system in an important way. We have studied other possibilities, in particular the case where one modifies the manner in which a specific species competes, unveiling the interesting result in which the strongest individuals may constitute the least abundant population.
[ { "created": "Mon, 8 Mar 2021 19:52:20 GMT", "version": "v1" }, { "created": "Fri, 22 Oct 2021 12:45:00 GMT", "version": "v2" } ]
2021-11-25
[ [ "Bazeia", "D.", "" ], [ "Bongestab", "M.", "" ], [ "de Oliveira", "B. F.", "" ] ]
This work deals with the influence of the neighborhood in simple rock-paper-scissors models of biodiversity. We consider the case of three distinct species which evolve under the standard rules of mobility, reproduction and competition. The rule of competition follows the guidance of the rock-paper-scissors game, with the prey being annihilated, leaving an empty site in accordance with the May-Leonard proposal for predator and prey competition. We use the von Neumann neighborhood, but we consider mobility under the presence of the first, second and third neighbors in three distinct environments, one with equal probability and the others with probabilities following power-law and exponential profiles. The results are different, but they all show that enlarging the neighborhood increases the characteristic length of the system in an important way. We have studied other possibilities, in particular the case where one modifies the manner in which a specific species competes, unveiling the interesting result in which the strongest individuals may constitute the least abundant population.
2110.10260
Emiliano Perez Ipi\~na
Emiliano Perez Ipi\~na and Brian A. Camley
Collective gradient sensing with limited positional information
null
null
null
null
q-bio.CB physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Eukaryotic cells sense chemical gradients to decide where and when to move. Clusters of cells can sense gradients more accurately than individual cells by integrating measurements of the concentration made across the cluster. Is this gradient sensing accuracy impeded when cells have limited knowledge of their position within the cluster, i.e. limited positional information? We apply maximum likelihood estimation to study gradient sensing accuracy of a cluster of cells with finite positional information. If cells must estimate their location within the cluster, this lowers the accuracy of collective gradient sensing. We compare our results with a tug-of-war model where cells respond to the gradient by a mechanism of collective guidance without relying on their positional information. As the cell positional uncertainty increases, there is a trade-off where the tug-of-war model responds more accurately to the chemical gradient. However, for sufficiently large cell clusters or shallow chemical gradients, the tug-of-war model will always be suboptimal to one that integrates information from all cells, even if positional uncertainty is high.
[ { "created": "Tue, 19 Oct 2021 21:03:20 GMT", "version": "v1" } ]
2021-10-22
[ [ "Ipiña", "Emiliano Perez", "" ], [ "Camley", "Brian A.", "" ] ]
Eukaryotic cells sense chemical gradients to decide where and when to move. Clusters of cells can sense gradients more accurately than individual cells by integrating measurements of the concentration made across the cluster. Is this gradient sensing accuracy impeded when cells have limited knowledge of their position within the cluster, i.e. limited positional information? We apply maximum likelihood estimation to study gradient sensing accuracy of a cluster of cells with finite positional information. If cells must estimate their location within the cluster, this lowers the accuracy of collective gradient sensing. We compare our results with a tug-of-war model where cells respond to the gradient by a mechanism of collective guidance without relying on their positional information. As the cell positional uncertainty increases, there is a trade-off where the tug-of-war model responds more accurately to the chemical gradient. However, for sufficiently large cell clusters or shallow chemical gradients, the tug-of-war model will always be suboptimal to one that integrates information from all cells, even if positional uncertainty is high.
1305.6975
Giancarlo De Luca
Giancarlo De Luca, Patrizio Mariani, Brian R. MacKenzie, Matteo Marsili
Fishing out collective memory of migratory schools
null
J. R. Soc. Interface vol. 11 no. 95 20140043 2014
10.1098/rsif.2014.0043
null
q-bio.QM cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Animals form groups for many reasons, but there are costs and benefits associated with group formation. One of the benefits is collective memory. In groups on the move, social interactions play a crucial role in cohesion and the ability to make consensus decisions. When migrating from spawning to feeding areas, fish schools need to retain a collective memory of the destination site over thousands of kilometers, and changes in group formation or individual preference can produce sudden changes in migration pathways. We propose a modelling framework, based on stochastic adaptive networks, that can reproduce this collective behaviour. We assume that three factors control group formation and school migration behaviour: the intensity of social interaction, the relative number of informed individuals and the preference that each individual has for the particular migration area. We treat these factors independently and relate the individuals' preferences to the experience and memory of certain migration sites. We demonstrate that removal of knowledgeable individuals or alteration of individual preference can produce rapid changes in group formation and collective behaviour. For example, intensive fishing targeting the migratory species and also their preferred prey can reduce both terms to a point at which migration to the destination sites is suddenly stopped. The conceptual approaches represented by our modelling framework may therefore be able to explain large-scale changes in fish migration and spatial distribution.
[ { "created": "Wed, 29 May 2013 23:21:29 GMT", "version": "v1" }, { "created": "Thu, 20 Mar 2014 12:17:35 GMT", "version": "v2" } ]
2014-03-24
[ [ "De Luca", "Giancarlo", "" ], [ "Mariani", "Patrizio", "" ], [ "MacKenzie", "Brian R.", "" ], [ "Marsili", "Matteo", "" ] ]
Animals form groups for many reasons, but there are costs and benefits associated with group formation. One of the benefits is collective memory. In groups on the move, social interactions play a crucial role in cohesion and the ability to make consensus decisions. When migrating from spawning to feeding areas, fish schools need to retain a collective memory of the destination site over thousands of kilometers, and changes in group formation or individual preference can produce sudden changes in migration pathways. We propose a modelling framework, based on stochastic adaptive networks, that can reproduce this collective behaviour. We assume that three factors control group formation and school migration behaviour: the intensity of social interaction, the relative number of informed individuals and the preference that each individual has for the particular migration area. We treat these factors independently and relate the individuals' preferences to the experience and memory of certain migration sites. We demonstrate that removal of knowledgeable individuals or alteration of individual preference can produce rapid changes in group formation and collective behaviour. For example, intensive fishing targeting the migratory species and also their preferred prey can reduce both terms to a point at which migration to the destination sites is suddenly stopped. The conceptual approaches represented by our modelling framework may therefore be able to explain large-scale changes in fish migration and spatial distribution.
2004.08859
Om Damani
Jayendran Venkateswaran and Om Damani
Effectiveness of Testing, Tracing, Social Distancing and Hygiene in Tackling Covid-19 in India: A System Dynamics Model
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a System Dynamics (SD) model of the Covid-19 pandemic spread in India. The detailed age-structured compartment-based model endogenously captures various disease transmission pathways, expanding significantly on the standard SEIR model. The model is customized for India by using the appropriate population pyramid, contact rate matrices, external arrivals (as per actual data), and a few other calibrated fractions based on the reported cases of Covid-19 in India. Also, we have explicitly modeled, using independent time-variant levers, the effects of testing, contact tracing, isolating Covid-positive patients, quarantining, use of masks/better hygiene practices, and social distancing through contact rate reductions at distinct zones of home (H), work (W), school (S) and other (O) locations. Simulation results show that, even after an extended lock-down, some non-trivial number of infections (even asymptomatic) will be left and the pandemic will resurface. The only tools that work against the pandemic are a high rate of testing of those who show Covid-19-like symptoms, isolating them if they are positive, and contact tracing all contacts of positive patients and quarantining them, in combination with the use of face masks and personal hygiene. A wide range of combinations of the effectiveness of contact tracing, isolation, quarantining and personal hygiene measures helps minimize the pandemic impact, and some imperfections in the implementation of one measure can be compensated by better implementation of other measures.
[ { "created": "Sun, 19 Apr 2020 14:18:04 GMT", "version": "v1" } ]
2020-04-21
[ [ "Venkateswaran", "Jayendran", "" ], [ "Damani", "Om", "" ] ]
We present a System Dynamics (SD) model of the Covid-19 pandemic spread in India. The detailed age-structured compartment-based model endogenously captures various disease transmission pathways, expanding significantly on the standard SEIR model. The model is customized for India by using the appropriate population pyramid, contact rate matrices, external arrivals (as per actual data), and a few other calibrated fractions based on the reported cases of Covid-19 in India. Also, we have explicitly modeled, using independent time-variant levers, the effects of testing, contact tracing, isolating Covid-positive patients, quarantining, use of masks/better hygiene practices, and social distancing through contact rate reductions at distinct zones of home (H), work (W), school (S) and other (O) locations. Simulation results show that, even after an extended lock-down, some non-trivial number of infections (even asymptomatic) will be left and the pandemic will resurface. The only tools that work against the pandemic are a high rate of testing of those who show Covid-19-like symptoms, isolating them if they are positive, and contact tracing all contacts of positive patients and quarantining them, in combination with the use of face masks and personal hygiene. A wide range of combinations of the effectiveness of contact tracing, isolation, quarantining and personal hygiene measures helps minimize the pandemic impact, and some imperfections in the implementation of one measure can be compensated by better implementation of other measures.
2401.01367
Fan Xinxian
Madiha Fatima, Zhihua Cao, Aichun Huang, Shengyuan Wu, Xinxian Fan, Yi Wang, Liu Jiren, Ziyun Zhu, Qiongrou Ye, Yuan Ma, Joseph K.F Chow, Peng Jia, Yangshou Liu, Yubin Lin, Manjun Ye, Tong Wu, Zhixun Li, Cong Cai, Wenhai Zhang, Cheris H.Q. Ding, Yuanzhe Cai, Feijuan Huang
Guidelines in Wastewater-based Epidemiology of SARS-CoV-2 with Diagnosis
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the global spread and increasing transmission rate of SARS-CoV-2, more and more laboratories and researchers are turning their attention to wastewater-based epidemiology (WBE), hoping it can become an effective tool for large-scale testing and provide more accurate predictions of the number of infected individuals. Based on the cases of sewage sampling and testing in some regions such as Hong Kong, Brazil, and the United States, the feasibility of detecting the novel coronavirus in sewage is extremely high. This study reviews domestic and international achievements in detecting SARS-CoV-2 through WBE and summarizes four aspects of COVID-19, including sampling methods, virus decay rate calculation, standardized population coverage of the watershed, and algorithm prediction, and provides ideas for combining field modeling with epidemic prevention and control. Moreover, we highlight some diagnostic techniques for detection of the virus from sewage samples. Our review is a new approach to identifying the research gaps in wastewater-based epidemiology and diagnosis, and we also predict the future prospects of our analysis.
[ { "created": "Tue, 26 Dec 2023 13:52:49 GMT", "version": "v1" } ]
2024-01-04
[ [ "Fatima", "Madiha", "" ], [ "Cao", "Zhihua", "" ], [ "Huang", "Aichun", "" ], [ "Wu", "Shengyuan", "" ], [ "Fan", "Xinxian", "" ], [ "Wang", "Yi", "" ], [ "Jiren", "Liu", "" ], [ "Zhu", "Ziyun",...
With the global spread and increasing transmission rate of SARS-CoV-2, more and more laboratories and researchers are turning their attention to wastewater-based epidemiology (WBE), hoping it can become an effective tool for large-scale testing and provide more accurate predictions of the number of infected individuals. Based on the cases of sewage sampling and testing in some regions such as Hong Kong, Brazil, and the United States, the feasibility of detecting the novel coronavirus in sewage is extremely high. This study reviews domestic and international achievements in detecting SARS-CoV-2 through WBE and summarizes four aspects of COVID-19, including sampling methods, virus decay rate calculation, standardized population coverage of the watershed, and algorithm prediction, and provides ideas for combining field modeling with epidemic prevention and control. Moreover, we highlight some diagnostic techniques for detection of the virus from sewage samples. Our review is a new approach to identifying the research gaps in wastewater-based epidemiology and diagnosis, and we also predict the future prospects of our analysis.
1901.04214
Manlio De Domenico
Paolo Bosetti, Piero Poletti, Massimo Stella, Bruno Lepri, Stefano Merler, Manlio De Domenico
Reducing measles risk in Turkey through social integration of Syrian refugees
27 pages, 5 figures
null
null
null
q-bio.PE cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Turkey hosts almost 3.5M refugees and has to face a humanitarian emergency of unprecedented scale. We use mobile phone data to map the mobility patterns of both Turkish citizens and Syrian refugees, and use these patterns to build data-driven computational models for quantifying the risk of epidemic spreading for measles -- a disease having a satisfactory immunization coverage in Turkey but not in Syria, due to the recent civil war -- while accounting for hypothetical policies to integrate the refugees with the Turkish population. Our results provide quantitative evidence that policies to enhance social integration between refugees and the hosting population would reduce the transmission potential of measles by almost 50%, preventing the onset of widespread large epidemics in the country. Our results suggest that social segregation does not hamper but rather boosts potential outbreaks of measles, to a greater extent in Syrian refugees and, to a lesser extent, in Turkish citizens. This is due to the fact that the high immunization coverage of Turkish citizens can shield Syrian refugees from getting exposed to the infection, and this in turn reduces potential sources of infection and spillover of cases among Turkish citizens as well, in a virtuous cycle reminiscent of herd immunity.
[ { "created": "Mon, 14 Jan 2019 09:53:01 GMT", "version": "v1" } ]
2019-01-15
[ [ "Bosetti", "Paolo", "" ], [ "Poletti", "Piero", "" ], [ "Stella", "Massimo", "" ], [ "Lepri", "Bruno", "" ], [ "Merler", "Stefano", "" ], [ "De Domenico", "Manlio", "" ] ]
Turkey hosts almost 3.5M refugees and has to face a humanitarian emergency of unprecedented scale. We use mobile phone data to map the mobility patterns of both Turkish citizens and Syrian refugees, and use these patterns to build data-driven computational models for quantifying the risk of epidemic spreading for measles -- a disease having a satisfactory immunization coverage in Turkey but not in Syria, due to the recent civil war -- while accounting for hypothetical policies to integrate the refugees with the Turkish population. Our results provide quantitative evidence that policies to enhance social integration between refugees and the hosting population would reduce the transmission potential of measles by almost 50%, preventing the onset of widespread large epidemics in the country. Our results suggest that social segregation does not hamper but rather boosts potential outbreaks of measles, to a greater extent in Syrian refugees and, to a lesser extent, in Turkish citizens. This is due to the fact that the high immunization coverage of Turkish citizens can shield Syrian refugees from getting exposed to the infection, and this in turn reduces potential sources of infection and spillover of cases among Turkish citizens as well, in a virtuous cycle reminiscent of herd immunity.
0710.4001
Masashi Tachikawa
Masashi Tachikawa
Fluctuation induces evolutionary branching in a modeled microbial ecosystem
4 pages, 4 figures. Submitted to Physical Review Letters
null
10.1371/journal.pone.0003925
null
q-bio.PE
null
The impact of environmental fluctuation on species diversity is studied with a model of the evolutionary ecology of microorganisms. We show that environmental fluctuation induces evolutionary branching and assures the consequential coexistence of multiple species. Pairwise invasibility analysis is applied to illustrate the speciation process. We also discuss how fluctuation affects species diversity.
[ { "created": "Mon, 22 Oct 2007 09:23:52 GMT", "version": "v1" } ]
2015-05-13
[ [ "Tachikawa", "Masashi", "" ] ]
The impact of environmental fluctuation on species diversity is studied with a model of the evolutionary ecology of microorganisms. We show that environmental fluctuation induces evolutionary branching and assures the consequential coexistence of multiple species. Pairwise invasibility analysis is applied to illustrate the speciation process. We also discuss how fluctuation affects species diversity.
q-bio/0511018
Josh Mitteldorf PhD
Josh Mitteldorf
Another Way to Calculate Fitness from Life History Variables: Solution of the Age-Structured Logistic Equation
12 pages, 2 figures
null
null
null
q-bio.PE
null
r-selection refers to evolutionary competition in the rate of a population's exponential increase. This is contrasted with K-selection, in which populations in steady-state compete in efficiency of resource conversion. Evolution in nature is thought to combine these two in various proportions. But in modeling the evolution of life histories, theorists have used r-selection exclusively; up until now, there has not been a practical algorithm for computing the target function of K-selection. The Malthusian parameter, as computed from the Euler-Lotka equation, is a quantitative rendering of the r in r-selection, computed from the fundamental life history variables mortality and fertility. Herein, a quantitative formulation of K is derived in similar terms. The basis for our model is the logistic equation which, we argue, applies more generally than is commonly appreciated. Support is offered for the utility of this paradigm, and one example computation is exhibited, in which K-selection appears to support pleiotropic explanations for senescence only one fourth as well as r-selection.
[ { "created": "Mon, 14 Nov 2005 23:28:57 GMT", "version": "v1" } ]
2007-05-23
[ [ "Mitteldorf", "Josh", "" ] ]
r-selection refers to evolutionary competition in the rate of a population's exponential increase. This is contrasted with K-selection, in which populations in steady-state compete in efficiency of resource conversion. Evolution in nature is thought to combine these two in various proportions. But in modeling the evolution of life histories, theorists have used r-selection exclusively; up until now, there has not been a practical algorithm for computing the target function of K-selection. The Malthusian parameter, as computed from the Euler-Lotka equation, is a quantitative rendering of the r in r-selection, computed from the fundamental life history variables mortality and fertility. Herein, a quantitative formulation of K is derived in similar terms. The basis for our model is the logistic equation which, we argue, applies more generally than is commonly appreciated. Support is offered for the utility of this paradigm, and one example computation is exhibited, in which K-selection appears to support pleiotropic explanations for senescence only one fourth as well as r-selection.
1309.1910
Nicolas Le Nov\`ere
Claudine Chaouiya, Duncan Berenguier, Sarah M Keating, Aurelien Naldi, Martijn P. van Iersel, Nicolas Rodriguez, Andreas Dr\"ager, Finja B\"uchel, Thomas Cokelaer, Bryan Kowal, Benjamin Wicks, Emanuel Gon\c{c}alves, Julien Dorier, Michel Page, Pedro T. Monteiro, Axel von Kamp, Ioannis Xenarios, Hidde de Jong, Michael Hucka, Steffen Klamt, Denis Thieffry, Nicolas Le Nov\`ere, Julio Saez-Rodriguez, Tom\'a\v{s} Helikar
SBML Qualitative Models: a model representation format and infrastructure to foster interactions between qualitative modelling formalisms and tools
29 pages, 7 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Qualitative frameworks, especially those based on the logical discrete formalism, are increasingly used to model regulatory and signalling networks. A major advantage of these frameworks is that they do not require precise quantitative data, and that they are well-suited for studies of large networks. While numerous groups have developed specific computational tools that provide original methods to analyse qualitative models, a standard format to exchange qualitative models has been missing. Results: We present the Systems Biology Markup Language (SBML) Qualitative Models Package ("qual"), an extension of the SBML Level 3 standard designed for computer representation of qualitative models of biological networks. We demonstrate the interoperability of models via SBML qual through the analysis of a specific signalling network by three independent software tools. Furthermore, the cooperative development of the SBML qual format paved the way for the development of LogicalModel, an open-source model library, which will facilitate the adoption of the format as well as the collaborative development of algorithms to analyse qualitative models. Conclusion: SBML qual allows the exchange of qualitative models among a number of complementary software tools. SBML qual has the potential to promote collaborative work on the development of novel computational approaches, as well as on the specification and the analysis of comprehensive qualitative models of regulatory and signalling networks.
[ { "created": "Sat, 7 Sep 2013 21:34:22 GMT", "version": "v1" } ]
2013-09-10
[ [ "Chaouiya", "Claudine", "" ], [ "Berenguier", "Duncan", "" ], [ "Keating", "Sarah M", "" ], [ "Naldi", "Aurelien", "" ], [ "van Iersel", "Martijn P.", "" ], [ "Rodriguez", "Nicolas", "" ], [ "Dräger", "Andreas"...
Background: Qualitative frameworks, especially those based on the logical discrete formalism, are increasingly used to model regulatory and signalling networks. A major advantage of these frameworks is that they do not require precise quantitative data, and that they are well-suited for studies of large networks. While numerous groups have developed specific computational tools that provide original methods to analyse qualitative models, a standard format to exchange qualitative models has been missing. Results: We present the Systems Biology Markup Language (SBML) Qualitative Models Package ("qual"), an extension of the SBML Level 3 standard designed for computer representation of qualitative models of biological networks. We demonstrate the interoperability of models via SBML qual through the analysis of a specific signalling network by three independent software tools. Furthermore, the cooperative development of the SBML qual format paved the way for the development of LogicalModel, an open-source model library, which will facilitate the adoption of the format as well as the collaborative development of algorithms to analyse qualitative models. Conclusion: SBML qual allows the exchange of qualitative models among a number of complementary software tools. SBML qual has the potential to promote collaborative work on the development of novel computational approaches, as well as on the specification and the analysis of comprehensive qualitative models of regulatory and signalling networks.
1910.14313
Sara Bonetti
Xue Feng, Sara Bonetti, Amilcare Porporato
Bridging discrete and continuous formalisms for biodiversity quantification
null
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several theoretical frameworks have been proposed to explain observed biodiversity patterns, ranging from the classical niche-based theories, mainly employing a continuous formalism, to neutral theories, based on statistical mechanics of discrete communities. Differences in the descriptions of biodiversity can arise due to the discrete or continuous nature of the underlying models and the way internal or external perturbations appear in their formulations. Here, we trace the effects of stochastic population dynamics on biodiversity, from the scale of the individuals to the community and based on both discrete and continuous representations of the system, by consistently using measures of community diversity like the species abundance distribution and the rank abundance curve and applying them to both discrete and continuous populations. A novel measure, the community abundance distribution, is introduced to facilitate the comparison across different levels of description, from microscopic to macroscopic. Using a simple birth and death process and an interacting population model, we highlight discrepancies in their discrete and continuous distributions and discuss relevant implications for the analysis of rare species and extinction dynamics. Quantitative consideration of these issues is useful for better understanding of the contributions of non-neutral processes and the mathematical approximations to various measures of biodiversity.
[ { "created": "Thu, 31 Oct 2019 08:53:38 GMT", "version": "v1" } ]
2019-11-01
[ [ "Feng", "Xue", "" ], [ "Bonetti", "Sara", "" ], [ "Porporato", "Amilcare", "" ] ]
Several theoretical frameworks have been proposed to explain observed biodiversity patterns, ranging from the classical niche-based theories, mainly employing a continuous formalism, to neutral theories, based on statistical mechanics of discrete communities. Differences in the descriptions of biodiversity can arise due to the discrete or continuous nature of the underlying models and the way internal or external perturbations appear in their formulations. Here, we trace the effects of stochastic population dynamics on biodiversity, from the scale of the individuals to the community and based on both discrete and continuous representations of the system, by consistently using measures of community diversity like the species abundance distribution and the rank abundance curve and applying them to both discrete and continuous populations. A novel measure, the community abundance distribution, is introduced to facilitate the comparison across different levels of description, from microscopic to macroscopic. Using a simple birth and death process and an interacting population model, we highlight discrepancies in their discrete and continuous distributions and discuss relevant implications for the analysis of rare species and extinction dynamics. Quantitative consideration of these issues is useful for better understanding of the contributions of non-neutral processes and the mathematical approximations to various measures of biodiversity.
1607.03794
Haleh Ebadi
Haleh Ebadi, Meghdad Saeedian, Marcel Ausloos, and GholamReza Jafari
Effect of memory in non-Markovian Boolean networks
null
Europhys. Lett. 116 (2016) 30004
10.1209/0295-5075/116/30004
null
q-bio.MN nlin.CG physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One successful model of interacting biological systems is the Boolean network. The dynamics of a Boolean network, controlled with Boolean functions, is usually considered to be a Markovian (memory-less) process. However, both the self-organizing features of biological phenomena and their intelligent nature should raise some doubt about ignoring the history of their time evolution. Here, we extend the Boolean network Markovian approach: we involve the effect of memory on the dynamics. This can be explored by modifying Boolean functions into non-Markovian functions, for example, by investigating the usual non-Markovian threshold function, one of the most widely applied Boolean functions. By applying the non-Markovian threshold function to the dynamical process of a cell cycle network, we discover a power-law memory with more robust dynamics than the Markovian dynamics.
[ { "created": "Wed, 13 Jul 2016 15:35:40 GMT", "version": "v1" } ]
2017-08-30
[ [ "Ebadi", "Haleh", "" ], [ "Saeedian", "Meghdad", "" ], [ "Ausloos", "Marcel", "" ], [ "Jafari", "GholamReza", "" ] ]
One successful model of interacting biological systems is the Boolean network. The dynamics of a Boolean network, controlled with Boolean functions, is usually considered to be a Markovian (memory-less) process. However, both the self-organizing features of biological phenomena and their intelligent nature should raise some doubt about ignoring the history of their time evolution. Here, we extend the Boolean network Markovian approach: we involve the effect of memory on the dynamics. This can be explored by modifying Boolean functions into non-Markovian functions, for example, by investigating the usual non-Markovian threshold function, one of the most widely applied Boolean functions. By applying the non-Markovian threshold function to the dynamical process of a cell cycle network, we discover a power-law memory with more robust dynamics than the Markovian dynamics.
2212.00899
Yaroslav Ispolatov
Yaroslav Ispolatov, Carlos Doebeli, and Michael Doebeli
On the evolutionary emergence of predation
24 pages, 5 figures, typos corrected
null
null
null
q-bio.PE cond-mat.stat-mech
http://creativecommons.org/licenses/by/4.0/
In models for the evolution of predation from initially purely competitive species interactions, the propensity of predation is most often assumed to be a direct consequence of the relative morphological and physiological traits of interacting species. Here we explore a model in which predation ability is an independently evolving phenotypic feature, so that even when the relative morphological or physiological traits allow for predation, predation only occurs if the predation ability of individuals has independently evolved to high enough values. In addition to delineating the conditions for the evolutionary emergence of predation, the model reproduces stationary and non-stationary multilevel food webs with the top predators not necessarily having size superiority.
[ { "created": "Thu, 1 Dec 2022 22:34:36 GMT", "version": "v1" }, { "created": "Fri, 7 Jul 2023 19:35:59 GMT", "version": "v2" } ]
2023-07-11
[ [ "Ispolatov", "Yaroslav", "" ], [ "Doebeli", "Carlos", "" ], [ "Doebeli", "Michael", "" ] ]
In models for the evolution of predation from initially purely competitive species interactions, the propensity of predation is most often assumed to be a direct consequence of the relative morphological and physiological traits of interacting species. Here we explore a model in which predation ability is an independently evolving phenotypic feature, so that even when the relative morphological or physiological traits allow for predation, predation only occurs if the predation ability of individuals has independently evolved to high enough values. In addition to delineating the conditions for the evolutionary emergence of predation, the model reproduces stationary and non-stationary multilevel food webs with the top predators not necessarily having size superiority.
q-bio/0406016
Manuel Middendorf
Manuel Middendorf, Anshul Kundaje, Chris Wiggins, Yoav Freund, and Christina Leslie
Predicting Genetic Regulatory Response using Classification: Yeast Stress Response
Supplementary website: http://www.cs.columbia.edu/compbio/geneclass
Proceedings of the First Annual RECOMB Regulation Workshop 2004
null
null
q-bio.QM
null
We present a novel classification-based algorithm called GeneClass for learning to predict gene regulatory response. Our approach is motivated by the hypothesis that in simple organisms such as Saccharomyces cerevisiae, we can learn a decision rule for predicting whether a gene is up- or down-regulated in a particular experiment based on (1) the presence of binding site subsequences (``motifs'') in the gene's regulatory region and (2) the expression levels of regulators such as transcription factors in the experiment (``parents''). Thus our learning task integrates two qualitatively different data sources: genome-wide cDNA microarray data across multiple perturbation and mutant experiments along with motif profile data from regulatory sequences. Rather than focusing on the regression task of predicting real-valued gene expression measurements, GeneClass performs the classification task of predicting +1 and -1 labels, corresponding to up- and down-regulation beyond the levels of biological and measurement noise in microarray measurements. GeneClass uses the Adaboost learning algorithm with a margin-based generalization of decision trees called alternating decision trees. In computational experiments based on the Gasch S. cerevisiae dataset, we show that the GeneClass method predicts up- and down-regulation on held-out experiments with high accuracy. We explore a range of experimental setups related to environmental stress response, and we retrieve important regulators, binding site motifs, and relationships between regulators and binding sites that are known to be associated to specific stress response pathways. Our method thus provides predictive hypotheses, suggests biological experiments, and provides interpretable insight into the structure of genetic regulatory networks.
[ { "created": "Mon, 7 Jun 2004 09:05:39 GMT", "version": "v1" }, { "created": "Tue, 8 Jun 2004 14:46:56 GMT", "version": "v2" } ]
2007-05-23
[ [ "Middendorf", "Manuel", "" ], [ "Kundaje", "Anshul", "" ], [ "Wiggins", "Chris", "" ], [ "Freund", "Yoav", "" ], [ "Leslie", "Christina", "" ] ]
We present a novel classification-based algorithm called GeneClass for learning to predict gene regulatory response. Our approach is motivated by the hypothesis that in simple organisms such as Saccharomyces cerevisiae, we can learn a decision rule for predicting whether a gene is up- or down-regulated in a particular experiment based on (1) the presence of binding site subsequences (``motifs'') in the gene's regulatory region and (2) the expression levels of regulators such as transcription factors in the experiment (``parents''). Thus our learning task integrates two qualitatively different data sources: genome-wide cDNA microarray data across multiple perturbation and mutant experiments along with motif profile data from regulatory sequences. Rather than focusing on the regression task of predicting real-valued gene expression measurements, GeneClass performs the classification task of predicting +1 and -1 labels, corresponding to up- and down-regulation beyond the levels of biological and measurement noise in microarray measurements. GeneClass uses the Adaboost learning algorithm with a margin-based generalization of decision trees called alternating decision trees. In computational experiments based on the Gasch S. cerevisiae dataset, we show that the GeneClass method predicts up- and down-regulation on held-out experiments with high accuracy. We explore a range of experimental setups related to environmental stress response, and we retrieve important regulators, binding site motifs, and relationships between regulators and binding sites that are known to be associated to specific stress response pathways. Our method thus provides predictive hypotheses, suggests biological experiments, and provides interpretable insight into the structure of genetic regulatory networks.
1402.1794
Jesse Meyer
Jesse G. Meyer
In silico Proteome Cleavage Reveals Iterative Digestion Strategy for High Sequence Coverage
10 pages of text/references followed by figure/table legends, six figures, and one table
ISRN Computational Biology 2014
10.1155/2014/960902
null
q-bio.GN cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the post-genome era, biologists have sought to measure the complete complement of proteins, termed proteomics. Currently, the most effective method to measure the proteome is with shotgun, or bottom-up, proteomics, in which the proteome is digested into peptides that are identified, followed by protein inference. Despite continuous improvements to all steps of the shotgun proteomics workflow, observed proteome coverage is often low; some proteins are identified by a single peptide sequence. Complete proteome sequence coverage would allow comprehensive characterization of RNA splicing variants and all post-translational modifications, which would drastically improve the accuracy of biological models. There are many reasons for the sequence coverage deficit, but ultimately peptide length determines sequence observability. Peptides that are too short are lost because they match many protein sequences and their true origin is ambiguous. The maximum observable peptide length is determined by several analytical challenges. This paper explores computationally how peptide lengths produced from several common proteome digestion methods limit observable proteome coverage. Iterative proteome cleavage strategies are also explored. These simulations reveal that maximized proteome coverage can be achieved by use of an iterative digestion protocol involving multiple proteases and chemical cleavages that theoretically allows 91.1% proteome coverage.
[ { "created": "Fri, 7 Feb 2014 23:13:48 GMT", "version": "v1" } ]
2018-11-30
[ [ "Meyer", "Jesse G.", "" ] ]
In the post-genome era, biologists have sought to measure the complete complement of proteins, termed proteomics. Currently, the most effective method to measure the proteome is with shotgun, or bottom-up, proteomics, in which the proteome is digested into peptides that are identified, followed by protein inference. Despite continuous improvements to all steps of the shotgun proteomics workflow, observed proteome coverage is often low; some proteins are identified by a single peptide sequence. Complete proteome sequence coverage would allow comprehensive characterization of RNA splicing variants and all post-translational modifications, which would drastically improve the accuracy of biological models. There are many reasons for the sequence coverage deficit, but ultimately peptide length determines sequence observability. Peptides that are too short are lost because they match many protein sequences and their true origin is ambiguous. The maximum observable peptide length is determined by several analytical challenges. This paper explores computationally how peptide lengths produced from several common proteome digestion methods limit observable proteome coverage. Iterative proteome cleavage strategies are also explored. These simulations reveal that maximized proteome coverage can be achieved by use of an iterative digestion protocol involving multiple proteases and chemical cleavages that theoretically allows 91.1% proteome coverage.
2005.02182
KongFatt Wong-Lin
Alok Joshi, Da-Hui Wang, Steven Watterson, Paula L. McClean, Chandan K. Behera, Trevor Sharp and KongFatt Wong-Lin
Opportunities for multiscale computational modelling of serotonergic drug effects in Alzheimer's disease
Accepted manuscript in Neuropharmacology
null
10.1016/j.neuropharm.2020.108118
null
q-bio.SC q-bio.MN q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Alzheimer's disease (AD) is an age-specific neurodegenerative disease that compromises cognitive functioning and impacts the quality of life of an individual. Pathologically, AD is characterised by abnormal accumulation of beta-amyloid (A$\beta$) and hyperphosphorylated tau protein. Despite research advances over the last few decades, there is currently still no cure for AD. Although medications are available to control some behavioural symptoms and slow the disease's progression, most prescribed medications are based on cholinesterase inhibitors. Over the last decade, there has been increased attention towards novel drugs targeting alternative neurotransmitter pathways, particularly the serotonergic (5-HT) system. In this review, we focus on 5-HT receptor (5-HTR) mediated signalling and drugs that target these receptors. These pathways regulate key proteins and kinases such as GSK-3 that are associated with abnormal levels of A$\beta$ and tau in AD. We then review computational studies related to 5-HT signalling pathways with the potential for providing deeper understanding of AD pathologies. In particular, we suggest that multiscale and multilevel modelling approaches could potentially provide new insights into AD mechanisms and aid the discovery of novel 5-HTR based therapeutic targets.
[ { "created": "Tue, 5 May 2020 13:55:23 GMT", "version": "v1" }, { "created": "Wed, 27 May 2020 23:49:05 GMT", "version": "v2" } ]
2020-05-29
[ [ "Joshi", "Alok", "" ], [ "Wang", "Da-Hui", "" ], [ "Watterson", "Steven", "" ], [ "McClean", "Paula L.", "" ], [ "Behera", "Chandan K.", "" ], [ "Sharp", "Trevor", "" ], [ "Wong-Lin", "KongFatt", "" ] ]
Alzheimer's disease (AD) is an age-specific neurodegenerative disease that compromises cognitive functioning and impacts the quality of life of an individual. Pathologically, AD is characterised by abnormal accumulation of beta-amyloid (A$\beta$) and hyperphosphorylated tau protein. Despite research advances over the last few decades, there is currently still no cure for AD. Although medications are available to control some behavioural symptoms and slow the disease's progression, most prescribed medications are based on cholinesterase inhibitors. Over the last decade, there has been increased attention towards novel drugs targeting alternative neurotransmitter pathways, particularly the serotonergic (5-HT) system. In this review, we focus on 5-HT receptor (5-HTR) mediated signalling and drugs that target these receptors. These pathways regulate key proteins and kinases such as GSK-3 that are associated with abnormal levels of A$\beta$ and tau in AD. We then review computational studies related to 5-HT signalling pathways with the potential for providing deeper understanding of AD pathologies. In particular, we suggest that multiscale and multilevel modelling approaches could potentially provide new insights into AD mechanisms and aid the discovery of novel 5-HTR based therapeutic targets.
1810.02391
Selim Kalayci
Selim Kalayc{\i}, \c{C}a\u{g}atay Demiralp, Zeynep H. G\"um\"u\c{s}
Developing Design Guidelines for Precision Oncology Reports
main text (4 pages) including 2 figures, plus 4 additional supplementary documents merged in a single PDF file
null
null
null
q-bio.TO cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Precision oncology tests that profile tumors to identify clinically actionable targets have rapidly entered clinical practice. Effective visual presentation of the results of these tests is crucial in accurate clinical decision-making. In current practice, these results are typically delivered to oncologists as static prints, who then incorporate them into their clinical decision-making process. However, due to a lack of guidelines for standardization, different vendors use different report formats. Little is known about the effectiveness of these report formats or the criteria necessary to improve them. In this study, we aimed to identify the tasks and needs of oncologists with respect to precision oncology report design, and then to improve the designs based on these findings. To this end, we report results from multiple interviews and a survey study (n=32) conducted with practicing oncologists. Based on these results, we compiled a set of design criteria for precision oncology reports and developed a prototype report design using these criteria, along with feedback from oncologists.
[ { "created": "Thu, 4 Oct 2018 18:49:11 GMT", "version": "v1" } ]
2018-10-19
[ [ "Kalaycı", "Selim", "" ], [ "Demiralp", "Çağatay", "" ], [ "Gümüş", "Zeynep H.", "" ] ]
Precision oncology tests that profile tumors to identify clinically actionable targets have rapidly entered clinical practice. Effective visual presentation of the results of these tests is crucial in accurate clinical decision-making. In current practice, these results are typically delivered to oncologists as static prints, who then incorporate them into their clinical decision-making process. However, due to a lack of guidelines for standardization, different vendors use different report formats. Little is known about the effectiveness of these report formats or the criteria necessary to improve them. In this study, we aimed to identify the tasks and needs of oncologists with respect to precision oncology report design, and then to improve the designs based on these findings. To this end, we report results from multiple interviews and a survey study (n=32) conducted with practicing oncologists. Based on these results, we compiled a set of design criteria for precision oncology reports and developed a prototype report design using these criteria, along with feedback from oncologists.
2304.00687
Jeffrey Lu
Jeffrey Lu
Computational Validation of a Mathematical Model of Stable Multi-Species Communities in a Hawk Dove Game
null
null
null
null
q-bio.PE cs.GT
http://creativecommons.org/licenses/by-nc-sa/4.0/
We revisit the original hawk-dove game with slight modifications to payoff values while maintaining the fundamental principles of interaction. The practical robustness of the theoretical tools of game theory is tested on a simulated population of hawks and doves with varying initial population distributions and peak growth rates. Additionally, we aim to find conditions in which the entire community fails or becomes a single-species population. The results show that the predicted community distribution is established by the majority of communities but fails to exist in communities with extreme initial imbalances in species distribution and insufficient growth rates. We also find that greater growth rates can compensate for more imbalanced initial conditions and that more balanced initial conditions can compensate for lower growth rates. Overall, the simple theoretical model is a strong predictor of the stable behavior of simulated multi-species communities.
[ { "created": "Mon, 3 Apr 2023 02:35:29 GMT", "version": "v1" } ]
2023-04-04
[ [ "Lu", "Jeffrey", "" ] ]
We revisit the original hawk-dove game with slight modifications to payoff values while maintaining the fundamental principles of interaction. The practical robustness of the theoretical tools of game theory is tested on a simulated population of hawks and doves with varying initial population distributions and peak growth rates. Additionally, we aim to find conditions in which the entire community fails or becomes a single-species population. The results show that the predicted community distribution is established by the majority of communities but fails to exist in communities with extreme initial imbalances in species distribution and insufficient growth rates. We also find that greater growth rates can compensate for more imbalanced initial conditions and that more balanced initial conditions can compensate for lower growth rates. Overall, the simple theoretical model is a strong predictor of the stable behavior of simulated multi-species communities.
1410.2610
Nima Dehghani
Nima Dehghani, Adrien Peyrache, Bartosz Telenczuk, Michel Le Van Quyen, Eric Halgren, Sydney S. Cash, Nicholas G. Hatsopoulos, Alain Destexhe
Dynamic Balance of Excitation and Inhibition in Human and Monkey Neocortex
Sci. Rep. 6, 23176
null
10.1038/srep23176
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Balance of excitation and inhibition is a fundamental feature of in vivo network activity and is important for its computations. However, its presence in the neocortex of higher mammals is not well established. We investigated the dynamics of excitation and inhibition using dense multielectrode recordings in humans and monkeys. We found that in all states of the wake-sleep cycle, excitatory and inhibitory ensembles are well balanced, and co-fluctuate with slight instantaneous deviations from perfect balance, mostly in slow-wave sleep. Remarkably, these correlated fluctuations are seen for many different temporal scales. The similarity of these computational features with a network model of self-generated balanced states suggests that such balanced activity is essentially generated by recurrent activity in the local network and is not due to external inputs. Finally, we find that this balance breaks down during seizures, where the temporal correlation of excitatory and inhibitory populations is disrupted. These results show that balanced activity is a feature of normal brain activity, and breakdown of the balance could be an important factor to define pathological states.
[ { "created": "Thu, 9 Oct 2014 20:25:24 GMT", "version": "v1" }, { "created": "Thu, 30 Oct 2014 18:42:12 GMT", "version": "v2" }, { "created": "Tue, 28 Apr 2015 22:46:10 GMT", "version": "v3" }, { "created": "Fri, 4 Mar 2016 15:24:23 GMT", "version": "v4" }, { "cre...
2016-03-15
[ [ "Dehghani", "Nima", "" ], [ "Peyrache", "Adrien", "" ], [ "Telenczuk", "Bartosz", "" ], [ "Van Quyen", "Michel Le", "" ], [ "Halgren", "Eric", "" ], [ "Cash", "Sydney S.", "" ], [ "Hatsopoulos", "Nicholas G.", ...
Balance of excitation and inhibition is a fundamental feature of in vivo network activity and is important for its computations. However, its presence in the neocortex of higher mammals is not well established. We investigated the dynamics of excitation and inhibition using dense multielectrode recordings in humans and monkeys. We found that in all states of the wake-sleep cycle, excitatory and inhibitory ensembles are well balanced, and co-fluctuate with slight instantaneous deviations from perfect balance, mostly in slow-wave sleep. Remarkably, these correlated fluctuations are seen for many different temporal scales. The similarity of these computational features with a network model of self-generated balanced states suggests that such balanced activity is essentially generated by recurrent activity in the local network and is not due to external inputs. Finally, we find that this balance breaks down during seizures, where the temporal correlation of excitatory and inhibitory populations is disrupted. These results show that balanced activity is a feature of normal brain activity, and breakdown of the balance could be an important factor to define pathological states.
1012.3641
Chris Adami
Jifeng Qian, Arend Hintze, and Christoph Adami
Colored motifs reveal computational building blocks in the C. elegans brain
16 pages, 6 figures, two supplementary figures, 1 table. Extended Supplementary Figure S2 available upon request. To appear in PLoS ONE
PLoS ONE 6 (2011) e17013
10.1371/journal.pone.0017013
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complex networks can often be decomposed into less complex sub-networks whose structures can give hints about the functional organization of the network as a whole. However, these structural motifs can only tell one part of the functional story because in this analysis each node and edge is treated on an equal footing. In real networks, two motifs that are topologically identical but whose nodes perform very different functions will play very different roles in the network. Here, we combine structural information derived from the topology of the neuronal network of the nematode C. elegans with information about the biological function of these nodes, thus coloring nodes by function. We discover that particular colorations of motifs are significantly more abundant in the worm brain than expected by chance, and have particular computational functions that emphasize the feed-forward structure of information processing in the network, while evading feedback loops. Interneurons are strongly over-represented among the common motifs, supporting the notion that these motifs process and transduce the information from the sensor neurons towards the muscles. Some of the most common motifs identified in the search for significant colored motifs play a crucial role in the system of neurons controlling the worm's locomotion. The analysis of complex networks in terms of colored motifs combines two independent data sets to generate insight about these networks that cannot be obtained with either data set alone. The method is general and should allow a decomposition of any complex network into its functional (rather than topological) motifs as long as both wiring and functional information is available.
[ { "created": "Thu, 16 Dec 2010 15:35:56 GMT", "version": "v1" } ]
2015-05-20
[ [ "Qian", "Jifeng", "" ], [ "Hintze", "Arend", "" ], [ "Adami", "Christoph", "" ] ]
Complex networks can often be decomposed into less complex sub-networks whose structures can give hints about the functional organization of the network as a whole. However, these structural motifs can only tell one part of the functional story because in this analysis each node and edge is treated on an equal footing. In real networks, two motifs that are topologically identical but whose nodes perform very different functions will play very different roles in the network. Here, we combine structural information derived from the topology of the neuronal network of the nematode C. elegans with information about the biological function of these nodes, thus coloring nodes by function. We discover that particular colorations of motifs are significantly more abundant in the worm brain than expected by chance, and have particular computational functions that emphasize the feed-forward structure of information processing in the network, while evading feedback loops. Interneurons are strongly over-represented among the common motifs, supporting the notion that these motifs process and transduce the information from the sensor neurons towards the muscles. Some of the most common motifs identified in the search for significant colored motifs play a crucial role in the system of neurons controlling the worm's locomotion. The analysis of complex networks in terms of colored motifs combines two independent data sets to generate insight about these networks that cannot be obtained with either data set alone. The method is general and should allow a decomposition of any complex network into its functional (rather than topological) motifs as long as both wiring and functional information is available.
1911.06298
Kezia Irene
Kezia Irene, Aditya Yudha P., Harlan Haidi, Nurul Faza, Winston Chandra
Fetal Head and Abdomen Measurement Using Convolutional Neural Network, Hough Transform, and Difference of Gaussian Revolved along Elliptical Path (Dogell) Algorithm
5 pages, 9 figures
null
null
null
q-bio.QM cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
The number of fetal-neonatal deaths in Indonesia is still high compared to developed countries. This is caused by the absence of maternal monitoring during pregnancy. This paper presents an automated measurement of fetal head circumference (HC) and abdominal circumference (AC) from ultrasonography (USG) images. This automated measurement is beneficial for detecting early fetal abnormalities during the pregnancy period. We used the convolutional neural network (CNN) method to preprocess the USG data. After that, we approximated the head and abdominal circumferences using the Hough transform algorithm and the difference of Gaussian Revolved along Elliptical Path (Dogell) algorithm. We used a data set from national hospitals in Indonesia, and for the accuracy measurement, we compared our results to annotated images measured by professional obstetricians. The results show that by using a CNN, we reduced errors caused by noisy images. We found that the Dogell algorithm performs better than the Hough transform algorithm in both time and accuracy. This is the first HC and AC approximation to use the CNN method to preprocess the data.
[ { "created": "Thu, 14 Nov 2019 18:34:38 GMT", "version": "v1" } ]
2019-11-15
[ [ "Irene", "Kezia", "" ], [ "P.", "Aditya Yudha", "" ], [ "Haidi", "Harlan", "" ], [ "Faza", "Nurul", "" ], [ "Chandra", "Winston", "" ] ]
The number of fetal-neonatal deaths in Indonesia is still high compared to developed countries. This is caused by the absence of maternal monitoring during pregnancy. This paper presents an automated measurement of fetal head circumference (HC) and abdominal circumference (AC) from ultrasonography (USG) images. This automated measurement is beneficial for detecting early fetal abnormalities during the pregnancy period. We used the convolutional neural network (CNN) method to preprocess the USG data. After that, we approximated the head and abdominal circumferences using the Hough transform algorithm and the difference of Gaussian Revolved along Elliptical Path (Dogell) algorithm. We used a data set from national hospitals in Indonesia, and for the accuracy measurement, we compared our results to annotated images measured by professional obstetricians. The results show that by using a CNN, we reduced errors caused by noisy images. We found that the Dogell algorithm performs better than the Hough transform algorithm in both time and accuracy. This is the first HC and AC approximation to use the CNN method to preprocess the data.
1605.05686
Eugen Tarnow
Eugen Tarnow
Free Recall Shows Similar Reactivation Behavior as Recognition & Cued Recall
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I find that the total retrieval time in word free recall increases linearly with the total number of items recalled. Measured slopes, the time to retrieve an additional item, vary from 1.4-4.5 seconds per item depending upon presentation rate, subject age and whether there is a delay after list presentation or not. These times to retrieve an additional item obey a second linear relationship as a function of the recall probability averaged over the experiment, explicitly independent of subject age, presentation rate and whether there is a delay after the list presentation or not. This second linear relationship mimics the relationships in recognition and cued recall (Tarnow, 2008), which suggests that free recall retrieval uses the same reactivation mechanism as recognition or cued recall. Extrapolation limits the time to retrieve an additional item to a maximum of 7.2 seconds per item if the probability of recall is near 0%. Earlier upper limits for recognition and cued recall varied with the item type between 0.2 and 1.8 seconds per item (Tarnow, 2008). Implications include that there are only two types of short-term memory: working memory and reactivation.
[ { "created": "Thu, 7 Apr 2016 01:36:39 GMT", "version": "v1" } ]
2016-05-19
[ [ "Tarnow", "Eugen", "" ] ]
I find that the total retrieval time in word free recall increases linearly with the total number of items recalled. Measured slopes, the time to retrieve an additional item, vary from 1.4-4.5 seconds per item depending upon presentation rate, subject age and whether there is a delay after list presentation or not. These times to retrieve an additional item obey a second linear relationship as a function of the recall probability averaged over the experiment, explicitly independent of subject age, presentation rate and whether there is a delay after the list presentation or not. This second linear relationship mimics the relationships in recognition and cued recall (Tarnow, 2008), which suggests that free recall retrieval uses the same reactivation mechanism as recognition or cued recall. Extrapolation limits the time to retrieve an additional item to a maximum of 7.2 seconds per item if the probability of recall is near 0%. Earlier upper limits for recognition and cued recall varied with the item type between 0.2 and 1.8 seconds per item (Tarnow, 2008). Implications include that there are only two types of short-term memory: working memory and reactivation.
1407.4390
Dmitrii Rachinskii
E. A. O'Grady, S. C. Culloty, T. C. Kelly, M. J. A. O'Callaghan, D. Rachinskii
A preliminary threshold model of parasitism in the Cockle \emph{Cerastoderma edule} using delayed exchange of stability
null
null
10.1088/1742-6596/585/1/012013
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thresholds occur in the dynamics of many biological communities. Here we model a persistence type threshold which has been shown experimentally to exist in hyperparasitised flukes in the cockle, a shellfish. Our model consists of a periodically driven slow-fast host-parasite system of equations for a slow flukes population and a fast Unikaryon hyperparasite population. The model exhibits two branches of the critical curve crossing in a transcritical bifurcation scenario. We discuss two thresholds due to immediate and delayed exchange of stability effects; and we derive algebraic relationships for parameters of the periodic solution in the limit of the infinite ratio of the time scales. Flukes parasitise cockles and in turn are hyperparasitised by the microsporidian Unikaryon legeri; the life cycle of flukes includes several life stages and a number of different hosts. That is, the flukes-hyperparasite system in a cockle is part of a larger estuarine ecosystem of species involving parasites, shellfish and birds which prey on shellfish. A population dynamics model which accounts for such multi-species interactions and includes the fluke-hyperparasite model in a cockle as a subsystem is presented. We provide evidence that the threshold effect we observed in the flukes-hyperparasite subsystem remains apparent in the multi-species system. Assuming that flukes damage cockles, and taking into account that the hyperparasite is detrimental to flukes, it is natural to suggest that the hyperparasitism may support the abundance of cockles and, thereby, the persistence of the ecosystem, including shellfish and birds. We confirm the possibility of this scenario in our model by removing the hyperparasite and demonstrating that this may result in a substantial drop in cockle numbers. The result indicates a possible significant role for the microparasite in this estuarine ecosystem.
[ { "created": "Wed, 16 Jul 2014 17:36:00 GMT", "version": "v1" } ]
2015-06-22
[ [ "O'Grady", "E. A.", "" ], [ "Culloty", "S. C.", "" ], [ "Kelly", "T. C.", "" ], [ "O'Callaghan", "M. J. A.", "" ], [ "Rachinskii", "D.", "" ] ]
Thresholds occur in the dynamics of many biological communities. Here we model a persistence type threshold which has been shown experimentally to exist in hyperparasitised flukes in the cockle, a shellfish. Our model consists of a periodically driven slow-fast host-parasite system of equations for a slow flukes population and a fast Unikaryon hyperparasite population. The model exhibits two branches of the critical curve crossing in a transcritical bifurcation scenario. We discuss two thresholds due to immediate and delayed exchange of stability effects; and we derive algebraic relationships for parameters of the periodic solution in the limit of the infinite ratio of the time scales. Flukes parasitise cockles and in turn are hyperparasitised by the microsporidian Unikaryon legeri; the life cycle of flukes includes several life stages and a number of different hosts. That is, the flukes-hyperparasite system in a cockle is part of a larger estuarine ecosystem of species involving parasites, shellfish and birds which prey on shellfish. A population dynamics model which accounts for such multi-species interactions and includes the fluke-hyperparasite model in a cockle as a subsystem is presented. We provide evidence that the threshold effect we observed in the flukes-hyperparasite subsystem remains apparent in the multi-species system. Assuming that flukes damage cockles, and taking into account that the hyperparasite is detrimental to flukes, it is natural to suggest that the hyperparasitism may support the abundance of cockles and, thereby, the persistence of the ecosystem, including shellfish and birds. We confirm the possibility of this scenario in our model by removing the hyperparasite and demonstrating that this may result in a substantial drop in cockle numbers. The result indicates a possible significant role for the microparasite in this estuarine ecosystem.
1207.0485
Sebastian Schreiber
Sebastian J. Schreiber and Timothy P. Killingback
Spatial heterogeneity promotes coexistence of rock-paper-scissor metacommunities
31pages, 5 figures
Theoretical Population Biology 2013 Vol. 86 Pages 1-11
10.1016/j.tpb.2013.02.004
null
q-bio.PE math.DS nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rock-paper-scissor game -- which is characterized by three strategies R,P,S, satisfying the non-transitive relations S excludes P, P excludes R, and R excludes S -- serves as a simple prototype for studying more complex non-transitive systems. For well-mixed systems where interactions result in fitness reductions of the losers exceeding fitness gains of the winners, classical theory predicts that two strategies go extinct. The effects of spatial heterogeneity and dispersal rates on this outcome are analyzed using a general framework for evolutionary games in patchy landscapes. The analysis reveals that coexistence is determined by the rates at which dominant strategies invade a landscape occupied by the subordinate strategy (e.g. rock invades a landscape occupied by scissors) and the rates at which subordinate strategies get excluded in a landscape occupied by the dominant strategy (e.g. scissor gets excluded in a landscape occupied by rock). These invasion and exclusion rates correspond to eigenvalues of the linearized dynamics near single strategy equilibria. Coexistence occurs when the product of the invasion rates exceeds the product of the exclusion rates. Provided there is sufficient spatial variation in payoffs, the analysis identifies a critical dispersal rate $d^*$ required for regional persistence. For dispersal rates below $d^*$, the product of the invasion rates exceeds the product of the exclusion rates and the rock-paper-scissor metacommunities persist regionally despite being extinction prone locally. For dispersal rates above $d^*$, the product of the exclusion rates exceeds the product of the invasion rates and the strategies are extinction prone. These results highlight the delicate interplay between spatial heterogeneity and dispersal in mediating long-term outcomes for evolutionary games.
[ { "created": "Mon, 2 Jul 2012 19:44:38 GMT", "version": "v1" } ]
2015-12-16
[ [ "Schreiber", "Sebastian J.", "" ], [ "Killingback", "Timothy P.", "" ] ]
The rock-paper-scissor game -- which is characterized by three strategies R,P,S, satisfying the non-transitive relations S excludes P, P excludes R, and R excludes S -- serves as a simple prototype for studying more complex non-transitive systems. For well-mixed systems where interactions result in fitness reductions of the losers exceeding fitness gains of the winners, classical theory predicts that two strategies go extinct. The effects of spatial heterogeneity and dispersal rates on this outcome are analyzed using a general framework for evolutionary games in patchy landscapes. The analysis reveals that coexistence is determined by the rates at which dominant strategies invade a landscape occupied by the subordinate strategy (e.g. rock invades a landscape occupied by scissors) and the rates at which subordinate strategies get excluded in a landscape occupied by the dominant strategy (e.g. scissor gets excluded in a landscape occupied by rock). These invasion and exclusion rates correspond to eigenvalues of the linearized dynamics near single strategy equilibria. Coexistence occurs when the product of the invasion rates exceeds the product of the exclusion rates. Provided there is sufficient spatial variation in payoffs, the analysis identifies a critical dispersal rate $d^*$ required for regional persistence. For dispersal rates below $d^*$, the product of the invasion rates exceeds the product of the exclusion rates and the rock-paper-scissor metacommunities persist regionally despite being extinction prone locally. For dispersal rates above $d^*$, the product of the exclusion rates exceeds the product of the invasion rates and the strategies are extinction prone. These results highlight the delicate interplay between spatial heterogeneity and dispersal in mediating long-term outcomes for evolutionary games.
0709.0625
Kateryna Mishchenko
Kateryna Mishchenko, Sverker Holmgren and Lars Ronnegard
Efficient Implementation of the AI-REML Iteration for Variance Component QTL Analysis
14 pages, 2 figures
null
null
Research report MDH/IMA ISSN 1404-4978
q-bio.QM q-bio.OT
null
Regions in the genome that affect complex traits, quantitative trait loci (QTL), can be identified using statistical analysis of genetic and phenotypic data. When restricted maximum-likelihood (REML) models are used, the mapping procedure is normally computationally demanding. We develop a new efficient computational scheme for QTL mapping using variance component analysis and the AI-REML algorithm. The algorithm uses an exact or approximate low-rank representation of the identity-by-descent (IBD) matrix, which, combined with the Woodbury formula for matrix inversion, allows the computations in the AI-REML iteration body to be performed more efficiently. For cases where an exact low-rank representation of the IBD matrix is available a priori, the improved AI-REML algorithm normally runs almost twice as fast as the standard version. When an exact low-rank representation is not available, a truncated spectral decomposition is used to determine a low-rank approximation. We show that, in this case too, the computational efficiency of the AI-REML scheme can often be significantly improved.
[ { "created": "Wed, 5 Sep 2007 11:36:01 GMT", "version": "v1" }, { "created": "Mon, 11 Feb 2008 12:34:04 GMT", "version": "v2" } ]
2008-02-11
[ [ "Mishchenko", "Kateryna", "" ], [ "Holmgren", "Sverker", "" ], [ "Ronnegard", "Lars", "" ] ]
Regions in the genome that affect complex traits, quantitative trait loci (QTL), can be identified using statistical analysis of genetic and phenotypic data. When restricted maximum-likelihood (REML) models are used, the mapping procedure is normally computationally demanding. We develop a new efficient computational scheme for QTL mapping using variance component analysis and the AI-REML algorithm. The algorithm uses an exact or approximate low-rank representation of the identity-by-descent (IBD) matrix, which, combined with the Woodbury formula for matrix inversion, allows the computations in the AI-REML iteration body to be performed more efficiently. For cases where an exact low-rank representation of the IBD matrix is available a priori, the improved AI-REML algorithm normally runs almost twice as fast as the standard version. When an exact low-rank representation is not available, a truncated spectral decomposition is used to determine a low-rank approximation. We show that, in this case too, the computational efficiency of the AI-REML scheme can often be significantly improved.
2001.10437
Carole Aime
Nicolas Debons, Dounia Dems (LCMCP-MATBIO), Christophe H\'elary, Sylvain Le Grill, Lise Picaut, Flore Renaud (LGBC), Nicolas Delsuc (UPMC), Marie-Claire Schanne-Klein (LOB), Thibaud Coradin (MATBIO), Carole Aim\'e (LCMCP, PASTEUR)
Differentiation of neural-type cells on multi-scale ordered collagen-silica bionanocomposites
null
Biomaterials Science, Royal Society of Chemistry (RSC), 2020
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cells respond to biophysical and biochemical signals. We developed a composite filament from collagen and silica particles modified to interact with collagen and/or present a laminin epitope (IKVAV) crucial for cell-matrix adhesion and signal transduction. This combines scaffolding and signaling and shows that local tuning of collagen organization enhances cell differentiation.
[ { "created": "Tue, 28 Jan 2020 16:17:16 GMT", "version": "v1" } ]
2020-01-29
[ [ "Debons", "Nicolas", "", "LCMCP-MATBIO" ], [ "Dems", "Dounia", "", "LCMCP-MATBIO" ], [ "Hélary", "Christophe", "", "LGBC" ], [ "Grill", "Sylvain Le", "", "LGBC" ], [ "Picaut", "Lise", "", "LGBC" ], [ "Renaud", ...
Cells respond to biophysical and biochemical signals. We developed a composite filament from collagen and silica particles modified to interact with collagen and/or present a laminin epitope (IKVAV) crucial for cell-matrix adhesion and signal transduction. This combines scaffolding and signaling and shows that local tuning of collagen organization enhances cell differentiation.
2406.14062
Roben Delos Reyes
Roben Delos Reyes, Hugo Lyons Keenan, Cameron Zachreson
An agent-based model of behaviour change calibrated to reversal learning data
23 pages, 5 figures
null
null
null
q-bio.QM physics.bio-ph stat.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Behaviour change lies at the heart of many observable collective phenomena such as the transmission and control of infectious diseases, adoption of public health policies, and migration of animals to new habitats. Representing the process of individual behaviour change in computer simulations of these phenomena remains an open challenge. Often, computational models use phenomenological implementations with limited support from behavioural data. Without a strong connection to observable quantities, such models have limited utility for simulating observed and counterfactual scenarios of emergent phenomena because they cannot be validated or calibrated. Here, we present a simple stochastic individual-based model of reversal learning that captures fundamental properties of individual behaviour change, namely, the capacity to learn based on accumulated reward signals, and the transient persistence of learned behaviour after rewards are removed or altered. The model has only two parameters, and we use approximate Bayesian computation to demonstrate that they are fully identifiable from empirical reversal learning time series data. Finally, we demonstrate how the model can be extended to account for the increased complexity of behavioural dynamics over longer time scales involving fluctuating stimuli. This work is a step towards the development and evaluation of fully identifiable individual-level behaviour change models that can function as validated submodels for complex simulations of collective behaviour change.
[ { "created": "Thu, 20 Jun 2024 07:38:08 GMT", "version": "v1" } ]
2024-06-21
[ [ "Reyes", "Roben Delos", "" ], [ "Keenan", "Hugo Lyons", "" ], [ "Zachreson", "Cameron", "" ] ]
Behaviour change lies at the heart of many observable collective phenomena such as the transmission and control of infectious diseases, adoption of public health policies, and migration of animals to new habitats. Representing the process of individual behaviour change in computer simulations of these phenomena remains an open challenge. Often, computational models use phenomenological implementations with limited support from behavioural data. Without a strong connection to observable quantities, such models have limited utility for simulating observed and counterfactual scenarios of emergent phenomena because they cannot be validated or calibrated. Here, we present a simple stochastic individual-based model of reversal learning that captures fundamental properties of individual behaviour change, namely, the capacity to learn based on accumulated reward signals, and the transient persistence of learned behaviour after rewards are removed or altered. The model has only two parameters, and we use approximate Bayesian computation to demonstrate that they are fully identifiable from empirical reversal learning time series data. Finally, we demonstrate how the model can be extended to account for the increased complexity of behavioural dynamics over longer time scales involving fluctuating stimuli. This work is a step towards the development and evaluation of fully identifiable individual-level behaviour change models that can function as validated submodels for complex simulations of collective behaviour change.
1902.01478
Philip Novosad
Philip Novosad, Vladimir Fonov, D. Louis Collins (ADNI)
Accurate and robust segmentation of neuroanatomy in T1-weighted MRI by combining spatial priors with deep convolutional neural networks
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuroanatomical segmentation in magnetic resonance imaging (MRI) of the brain is a prerequisite for volume, thickness and shape measurements. This work introduces a new highly accurate and versatile method based on 3D convolutional neural networks for the automatic segmentation of neuroanatomy in T1-weighted MRI. In combination with a deep 3D fully convolutional architecture, efficient linear registration-derived spatial priors are used to incorporate additional spatial context into the network. An aggressive data augmentation scheme using random elastic deformations is also used to regularize the networks, allowing for excellent performance even in cases where only limited labelled training data are available. Applied to hippocampus segmentation in an elderly population (mean Dice coefficient = 92.1%) and sub-cortical segmentation in a healthy adult population (mean Dice coefficient = 89.5%), we demonstrate new state-of-the-art accuracies and a high robustness to outliers with the same architecture. Further validation on a multi-structure segmentation task in a scan-rescan dataset demonstrates accuracy (mean Dice coefficient = 86.6%) similar to the scan-rescan reliability of expert manual segmentations (mean Dice coefficient = 86.9%), and improved reliability compared to both expert manual segmentations and automated segmentations using FIRST. Furthermore, our method maintains a highly competitive runtime performance (e.g. requiring only 10 seconds for left/right hippocampal segmentation in 1x1x1 MNI stereotaxic space), orders of magnitude faster than conventional multi-atlas segmentation methods.
[ { "created": "Mon, 4 Feb 2019 22:14:21 GMT", "version": "v1" }, { "created": "Wed, 6 Feb 2019 04:37:02 GMT", "version": "v2" } ]
2019-02-07
[ [ "Novosad", "Philip", "", "ADNI" ], [ "Fonov", "Vladimir", "", "ADNI" ], [ "Collins", "D. Louis", "", "ADNI" ] ]
Neuroanatomical segmentation in magnetic resonance imaging (MRI) of the brain is a prerequisite for volume, thickness and shape measurements. This work introduces a new highly accurate and versatile method based on 3D convolutional neural networks for the automatic segmentation of neuroanatomy in T1-weighted MRI. In combination with a deep 3D fully convolutional architecture, efficient linear registration-derived spatial priors are used to incorporate additional spatial context into the network. An aggressive data augmentation scheme using random elastic deformations is also used to regularize the networks, allowing for excellent performance even in cases where only limited labelled training data are available. Applied to hippocampus segmentation in an elderly population (mean Dice coefficient = 92.1%) and sub-cortical segmentation in a healthy adult population (mean Dice coefficient = 89.5%), we demonstrate new state-of-the-art accuracies and a high robustness to outliers with the same architecture. Further validation on a multi-structure segmentation task in a scan-rescan dataset demonstrates accuracy (mean Dice coefficient = 86.6%) similar to the scan-rescan reliability of expert manual segmentations (mean Dice coefficient = 86.9%), and improved reliability compared to both expert manual segmentations and automated segmentations using FIRST. Furthermore, our method maintains a highly competitive runtime performance (e.g. requiring only 10 seconds for left/right hippocampal segmentation in 1x1x1 MNI stereotaxic space), orders of magnitude faster than conventional multi-atlas segmentation methods.
1409.6493
Karsten Rippe
Fabian Erdel, Michael Baum and Karsten Rippe
The viscoelastic properties of chromatin and the nucleoplasm revealed by scale-dependent protein mobility
This is an author-created, un-copyedited version of an article accepted for publication in J. Phys. Condens. Matter
null
10.1088/0953-8984/27/6/064115
null
q-bio.CB cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The eukaryotic cell nucleus harbors the DNA genome that is organized in a dynamic chromatin network and embedded in a viscous crowded fluid. This environment directly affects enzymatic reactions and target search processes that access the DNA sequence information. However, its physical properties as a reaction medium are poorly understood. Here, we exploit mobility measurements of differently sized inert green fluorescent tracer proteins to characterize the viscoelastic properties of the nuclear interior of a living human cell. We find that it resembles a viscous fluid on small and large scales, but appears viscoelastic on intermediate scales that change with protein size. Our results are consistent with simulations of diffusion through polymers and suggest that chromatin forms a random obstacle network rather than a self-similar structure with fixed fractal dimension. By calculating how long molecules remember their previous position as a function of their size, we evaluate how the nuclear environment affects search processes of chromatin targets.
[ { "created": "Tue, 23 Sep 2014 11:27:40 GMT", "version": "v1" } ]
2015-06-23
[ [ "Erdel", "Fabian", "" ], [ "Baum", "Michael", "" ], [ "Rippe", "Karsten", "" ] ]
The eukaryotic cell nucleus harbors the DNA genome that is organized in a dynamic chromatin network and embedded in a viscous crowded fluid. This environment directly affects enzymatic reactions and target search processes that access the DNA sequence information. However, its physical properties as a reaction medium are poorly understood. Here, we exploit mobility measurements of differently sized inert green fluorescent tracer proteins to characterize the viscoelastic properties of the nuclear interior of a living human cell. We find that it resembles a viscous fluid on small and large scales, but appears viscoelastic on intermediate scales that change with protein size. Our results are consistent with simulations of diffusion through polymers and suggest that chromatin forms a random obstacle network rather than a self-similar structure with fixed fractal dimension. By calculating how long molecules remember their previous position as a function of their size, we evaluate how the nuclear environment affects search processes of chromatin targets.
2307.02851
Samir Suweis Dr.
Samir Suweis, Francesco Ferraro, Christian Grilletta, Sandro Azaele, Amos Maritan
Generalized Lotka-Volterra Systems with Time Correlated Stochastic Interactions
4 Figures
null
null
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we explore the dynamics of species abundances within ecological communities using the Generalized Lotka-Volterra (GLV) model. At variance with previous approaches, we present an analysis of stochastic GLV dynamics with temporal fluctuations in interaction strengths between species. We develop a dynamical mean field theory (DMFT) tailored for scenarios with annealed colored noise and simple functional responses. We show that time-dependent interactions can be effectively modeled as environmental noise in the DMFT and we obtain analytical predictions for the species abundance distribution that match empirical observations well. Our results suggest that environmental noise favors species coexistence and makes it possible to overcome the complexity-stability paradox, especially in comparison to dynamics with quenched disorder. This study not only offers new insights into the modeling of large ecosystem dynamics, but also proposes novel methodologies for examining ecological systems.
[ { "created": "Thu, 6 Jul 2023 08:32:53 GMT", "version": "v1" }, { "created": "Tue, 23 Apr 2024 13:49:06 GMT", "version": "v2" } ]
2024-04-24
[ [ "Suweis", "Samir", "" ], [ "Ferraro", "Francesco", "" ], [ "Grilletta", "Christian", "" ], [ "Azaele", "Sandro", "" ], [ "Maritan", "Amos", "" ] ]
In this work, we explore the dynamics of species abundances within ecological communities using the Generalized Lotka-Volterra (GLV) model. At variance with previous approaches, we present an analysis of stochastic GLV dynamics with temporal fluctuations in interaction strengths between species. We develop a dynamical mean field theory (DMFT) tailored for scenarios with annealed colored noise and simple functional responses. We show that time-dependent interactions can be effectively modeled as environmental noise in the DMFT and we obtain analytical predictions for the species abundance distribution that match empirical observations well. Our results suggest that environmental noise favors species coexistence and makes it possible to overcome the complexity-stability paradox, especially in comparison to dynamics with quenched disorder. This study not only offers new insights into the modeling of large ecosystem dynamics, but also proposes novel methodologies for examining ecological systems.
1804.02660
Dario Alejandro Leon
Dario A. Leon and Augusto Gonzalez
Modeling evolution in a Long Time Evolution Experiment with E. Coli
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Building on a previously constructed evolutionary model of mutations in terms of Lévy flights, we designed an algorithm to reproduce the evolutionary dynamics of the Long-Term Evolution Experiment (LTEE) with E. coli bacteria. The algorithm enables us to simulate mutations under natural selection conditions. The results of simulations on competition of clones, mean fitness, etc., are compared with experimental data. We reproduce the behavior of the mean fitness of the bacterial cultures and provide our own interpretations and more refined descriptions of some phenomena occurring within the experiment, such as fixation and drift processes, clonal interference and epistasis.
[ { "created": "Sun, 8 Apr 2018 09:23:42 GMT", "version": "v1" } ]
2018-04-10
[ [ "Leon", "Dario A.", "" ], [ "Gonzalez", "Augusto", "" ] ]
Building on a previously constructed evolutionary model of mutations in terms of Lévy flights, we designed an algorithm to reproduce the evolutionary dynamics of the Long-Term Evolution Experiment (LTEE) with E. coli bacteria. The algorithm enables us to simulate mutations under natural selection conditions. The results of simulations on competition of clones, mean fitness, etc., are compared with experimental data. We reproduce the behavior of the mean fitness of the bacterial cultures and provide our own interpretations and more refined descriptions of some phenomena occurring within the experiment, such as fixation and drift processes, clonal interference and epistasis.
2001.00192
James Tee
James Tee and Desmond P. Taylor
A Quantized Representation of Probability in the Brain
12 pages, 23 figures, 6 tables. arXiv admin note: substantial text overlap with arXiv:1805.01631
IEEE Transactions on Molecular, Biological and Multi-Scale Communications (30 October 2019)
10.1109/TMBMC.2019.2950182
null
q-bio.NC cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional and current wisdom assumes that the brain represents probability as a continuous number to many decimal places. This assumption seems implausible given finite and scarce resources in the brain. Quantization is an information encoding process whereby a continuous quantity is systematically divided into a finite number of possible categories. Rounding is a simple example of quantization. We apply this information theoretic concept to develop a novel quantized (i.e., discrete) probability distortion function. We develop three conjunction probability gambling tasks to look for evidence of quantized probability representations in the brain. We hypothesize that certain ranges of probability will be lumped together in the same indifferent category if a quantized representation exists. For example, two distinct probabilities such as 0.57 and 0.585 may be treated indifferently. Our extensive data analysis has found strong evidence to support such a quantized representation: 59/76 participants (i.e., 78%) demonstrated a best fit to 4-bit quantized models instead of continuous models. This observation is the major development and novelty of the present work. The brain is very likely to be employing a quantized representation of probability. This discovery demonstrates a major precision limitation of the brain's representational and decision-making ability.
[ { "created": "Wed, 1 Jan 2020 11:45:26 GMT", "version": "v1" } ]
2020-01-07
[ [ "Tee", "James", "" ], [ "Taylor", "Desmond P.", "" ] ]
Conventional and current wisdom assumes that the brain represents probability as a continuous number to many decimal places. This assumption seems implausible given finite and scarce resources in the brain. Quantization is an information encoding process whereby a continuous quantity is systematically divided into a finite number of possible categories. Rounding is a simple example of quantization. We apply this information theoretic concept to develop a novel quantized (i.e., discrete) probability distortion function. We develop three conjunction probability gambling tasks to look for evidence of quantized probability representations in the brain. We hypothesize that certain ranges of probability will be lumped together in the same indifferent category if a quantized representation exists. For example, two distinct probabilities such as 0.57 and 0.585 may be treated indifferently. Our extensive data analysis has found strong evidence to support such a quantized representation: 59/76 participants (i.e., 78%) demonstrated a best fit to 4-bit quantized models instead of continuous models. This observation is the major development and novelty of the present work. The brain is very likely to be employing a quantized representation of probability. This discovery demonstrates a major precision limitation of the brain's representational and decision-making ability.
1403.3654
Ehtibar Dzhafarov
Andrei Khrennikov, Irina Basieva, Ehtibar N. Dzhafarov, Jerome R. Busemeyer
Quantum Models for Psychological Measurements: An Unsolved Problem
25 pages; submitted for publication
PLoS ONE 9(10): e110909 (2014)
10.1371/journal.pone.0110909
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been a strong recent interest in applying quantum mechanics (QM) outside physics, including in cognitive science. We analyze the applicability of QM to two basic properties in opinion polling. The first property (response replicability) is that, for a large class of questions, a response to a given question is expected to be repeated if the question is posed again, irrespective of whether another question is asked and answered in between. The second property (question order effect) is that the response probabilities frequently depend on the order in which the questions are asked. Whenever these two properties occur together, it poses a problem for QM. The conventional QM with Hermitian operators can handle response replicability, but only in a way incompatible with the question order effect. In the generalization of QM known as the theory of positive-operator-valued measures (POVMs), in order to account for response replicability, the POVMs involved must be conventional operators. Although these problems are not unique to QM and also challenge conventional cognitive theories, they stand out as important unresolved problems for the application of QM to cognition. Either some new principles are needed to determine the bounds of applicability of QM to cognition, or quantum formalisms more general than POVMs are needed.
[ { "created": "Wed, 12 Mar 2014 22:08:56 GMT", "version": "v1" } ]
2017-02-08
[ [ "Khrennikov", "Andrei", "" ], [ "Basieva", "Irina", "" ], [ "Dzhafarov", "Ehtibar N.", "" ], [ "Busemeyer", "Jerome R.", "" ] ]
There has been a strong recent interest in applying quantum mechanics (QM) outside physics, including in cognitive science. We analyze the applicability of QM to two basic properties in opinion polling. The first property (response replicability) is that, for a large class of questions, a response to a given question is expected to be repeated if the question is posed again, irrespective of whether another question is asked and answered in between. The second property (question order effect) is that the response probabilities frequently depend on the order in which the questions are asked. Whenever these two properties occur together, it poses a problem for QM. The conventional QM with Hermitian operators can handle response replicability, but only in a way incompatible with the question order effect. In the generalization of QM known as the theory of positive-operator-valued measures (POVMs), in order to account for response replicability, the POVMs involved must be conventional operators. Although these problems are not unique to QM and also challenge conventional cognitive theories, they stand out as important unresolved problems for the application of QM to cognition. Either some new principles are needed to determine the bounds of applicability of QM to cognition, or quantum formalisms more general than POVMs are needed.
1907.08549
Niru Maheswaranathan
Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, David Sussillo
Universality and individuality in neural dynamics across large populations of recurrent networks
Presented at NeurIPS 2019
null
null
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Task-based modeling with recurrent neural networks (RNNs) has emerged as a popular way to infer the computational function of different brain regions. These models are quantitatively assessed by comparing the low-dimensional neural representations of the model with the brain, for example using canonical correlation analysis (CCA). However, the nature of the detailed neurobiological inferences one can draw from such efforts remains elusive. For example, to what extent does training neural networks to solve common tasks uniquely determine the network dynamics, independent of modeling architectural choices? Or alternatively, are the learned dynamics highly sensitive to different model choices? Knowing the answer to these questions has strong implications for whether and how we should use task-based RNN modeling to understand brain dynamics. To address these foundational questions, we study populations of thousands of networks, with commonly used RNN architectures, trained to solve neuroscientifically motivated tasks and characterize their nonlinear dynamics. We find the geometry of the RNN representations can be highly sensitive to different network architectures, yielding a cautionary tale for measures of similarity that rely on representational geometry, such as CCA. Moreover, we find that while the geometry of neural dynamics can vary greatly across architectures, the underlying computational scaffold---the topological structure of fixed points, transitions between them, limit cycles, and linearized dynamics---often appears universal across all architectures.
[ { "created": "Fri, 19 Jul 2019 15:35:38 GMT", "version": "v1" }, { "created": "Wed, 4 Dec 2019 20:43:41 GMT", "version": "v2" } ]
2019-12-06
[ [ "Maheswaranathan", "Niru", "" ], [ "Williams", "Alex H.", "" ], [ "Golub", "Matthew D.", "" ], [ "Ganguli", "Surya", "" ], [ "Sussillo", "David", "" ] ]
Task-based modeling with recurrent neural networks (RNNs) has emerged as a popular way to infer the computational function of different brain regions. These models are quantitatively assessed by comparing the low-dimensional neural representations of the model with the brain, for example using canonical correlation analysis (CCA). However, the nature of the detailed neurobiological inferences one can draw from such efforts remains elusive. For example, to what extent does training neural networks to solve common tasks uniquely determine the network dynamics, independent of modeling architectural choices? Or alternatively, are the learned dynamics highly sensitive to different model choices? Knowing the answer to these questions has strong implications for whether and how we should use task-based RNN modeling to understand brain dynamics. To address these foundational questions, we study populations of thousands of networks, with commonly used RNN architectures, trained to solve neuroscientifically motivated tasks and characterize their nonlinear dynamics. We find the geometry of the RNN representations can be highly sensitive to different network architectures, yielding a cautionary tale for measures of similarity that rely on representational geometry, such as CCA. Moreover, we find that while the geometry of neural dynamics can vary greatly across architectures, the underlying computational scaffold---the topological structure of fixed points, transitions between them, limit cycles, and linearized dynamics---often appears universal across all architectures.
2403.07297
Yuzhen Zhang
Yuzhen Zhang, Zili Gao, and Lili He
Optical detection of bacterial cells on stainless-steel surface with a low-magnification light microscope
38 pages, 13 figures, 1 table
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A rapid and cost-effective method for detecting bacterial cells on surfaces is critical to protecting public health in various areas, including food safety, clinical hygiene, and pharmaceutical quality. Herein, we first established an optical detection method based on a gold chip coated with 3-mercaptophenylboronic acid (3-MPBA) to capture bacterial cells, which allows for the detection and quantification of bacterial cells with a standard light microscope under a low-magnification (10-fold) objective lens. We then integrated the developed optical detection method with swab sampling to detect bacterial cells loaded on stainless-steel surfaces. Using Salmonella enterica (SE1045) and Escherichia coli as model bacterial cells, we achieved a capture efficiency of up to 76.0 % for SE1045 cells and 81.1 % for E. coli cells at Log 3 CFU/mL under the optimized conditions. Our assay showed a good linear relationship between the concentrations of bacterial cells and the cell counts in images, with a limit of detection (LOD) of Log 3 CFU/mL for both SE1045 and E. coli cells. A further increase in sensitivity in detecting E. coli cells was achieved through a heat treatment, enabling the LOD to be pushed as low as Log 2 CFU/mL. Furthermore, the method was successfully applied to assess bacterial contamination on stainless-steel surfaces following integration with swab collection; a recovery rate of approximately 70 % suggests future prospects for evaluating the cleanliness of surfaces. The entire process was completed within around 2 hours, at a cost of merely 2 dollars per sample. Given that a standard light microscope costs around 250 dollars, our developed method has shown great potential in practical industrial applications for bacterial contamination control on surfaces in low-resource settings.
[ { "created": "Tue, 12 Mar 2024 03:57:57 GMT", "version": "v1" } ]
2024-03-13
[ [ "Zhang", "Yuzhen", "" ], [ "Gao", "Zili", "" ], [ "He", "Lili", "" ] ]
A rapid and cost-effective method for detecting bacterial cells on surfaces is critical to protecting public health in various areas, including food safety, clinical hygiene, and pharmaceutical quality. Herein, we first established an optical detection method based on a gold chip coated with 3-mercaptophenylboronic acid (3-MPBA) to capture bacterial cells, which allows for the detection and quantification of bacterial cells with a standard light microscope under a low-magnification (10-fold) objective lens. We then integrated the developed optical detection method with swab sampling to detect bacterial cells loaded on stainless-steel surfaces. Using Salmonella enterica (SE1045) and Escherichia coli as model bacterial cells, we achieved a capture efficiency of up to 76.0 % for SE1045 cells and 81.1 % for E. coli cells at Log 3 CFU/mL under the optimized conditions. Our assay showed a good linear relationship between the concentrations of bacterial cells and the cell counts in images, with a limit of detection (LOD) of Log 3 CFU/mL for both SE1045 and E. coli cells. A further increase in sensitivity in detecting E. coli cells was achieved through a heat treatment, enabling the LOD to be pushed as low as Log 2 CFU/mL. Furthermore, the method was successfully applied to assess bacterial contamination on stainless-steel surfaces following integration with swab collection; a recovery rate of approximately 70 % suggests future prospects for evaluating the cleanliness of surfaces. The entire process was completed within around 2 hours, at a cost of merely 2 dollars per sample. Given that a standard light microscope costs around 250 dollars, our developed method has shown great potential in practical industrial applications for bacterial contamination control on surfaces in low-resource settings.
q-bio/0606015
Giancarlo Rossi
S. Furlan, G. La Penna, F. Guerrieri, S. Morante and G.C. Rossi
Ab initio simulations of Cu binding sites in the N-terminal region of PrP
4 pages, conference proceeding
null
null
null
q-bio.BM
null
The prion protein (PrP) binds Cu2+ ions in the octarepeat domain of the N-terminal tail up to full occupancy at pH=7.4. Recent experiments show that the HGGG octarepeat subdomain is responsible for holding the metal bound in a square planar coordination. By using first-principles ab initio molecular dynamics simulations of the Car-Parrinello type, the Cu coordination mode to the binding sites of the PrP octarepeat region is investigated. Simulations are carried out for a number of structured binding sites. Results for the complexes Cu(HGGGW)+(wat), Cu(HGGG) and the 2[Cu(HGGG)] dimer are presented. While the presence of a Trp residue and a H2O molecule does not seem to affect the nature of the Cu coordination, high stability of the bond between Cu and the amide nitrogens of deprotonated Gly's is confirmed in the case of the Cu(HGGG) system. For the more interesting 2[Cu(HGGG)] dimer, a dynamically entangled arrangement of the two monomers, with intertwined N-Cu bonds, emerges. This observation is consistent with the highly packed structure seen in experiments at full Cu occupancy.
[ { "created": "Tue, 13 Jun 2006 16:17:55 GMT", "version": "v1" } ]
2007-05-23
[ [ "Furlan", "S.", "" ], [ "La Penna", "G.", "" ], [ "Guerrieri", "F.", "" ], [ "Morante", "S.", "" ], [ "Rossi", "G. C.", "" ] ]
The prion protein (PrP) binds Cu2+ ions in the octarepeat domain of the N-terminal tail up to full occupancy at pH=7.4. Recent experiments show that the HGGG octarepeat subdomain is responsible for holding the metal bound in a square planar coordination. By using first-principles ab initio molecular dynamics simulations of the Car-Parrinello type, the Cu coordination mode to the binding sites of the PrP octarepeat region is investigated. Simulations are carried out for a number of structured binding sites. Results for the complexes Cu(HGGGW)+(wat), Cu(HGGG) and the 2[Cu(HGGG)] dimer are presented. While the presence of a Trp residue and a H2O molecule does not seem to affect the nature of the Cu coordination, high stability of the bond between Cu and the amide nitrogens of deprotonated Gly's is confirmed in the case of the Cu(HGGG) system. For the more interesting 2[Cu(HGGG)] dimer, a dynamically entangled arrangement of the two monomers, with intertwined N-Cu bonds, emerges. This observation is consistent with the highly packed structure seen in experiments at full Cu occupancy.
2206.07999
David Bortz
Lewis R. Baker (1), Moshe T. Gordon (2), Brian P. Ziemba (3), Victoria Gershuny (4), Joseph J. Falke (3), David M. Bortz (1) ((1) Department of Applied Mathematics, University of Colorado, Boulder, CO 80309-0526, (2) Department of Physiology & Biophysics, University of Washington Physiology & Biophysics Dept., 1705 NE Pacific Street, HSB Room G424, Box 357290, Seattle, WA 98195-7290, (3) Department of Biochemistry, University of Colorado, Boulder, CO 80309-0596, (4) Division of Applied Regulatory Science, Office of Clinical Pharmacology, Office of Translational Sciences, Center for Drug Evaluation and Research, US Food and Drug Administration, Silver Spring, MD, USA)
Learning diffusion coefficients, kinetic parameters, and the number of underlying states from a multi-state diffusion process: robustness results and application to PDK1/PKC$\alpha$ dynamics
29 pages, 13 figures
null
null
null
q-bio.QM q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Systems driven by Brownian motion are ubiquitous. A prevailing challenge is inferring, from data, the diffusion and kinetic parameters that describe these stochastic processes. In this work, we investigate a multi-state diffusion process that arises in the context of single particle tracking (SPT), wherein the motion of a particle is governed by a discrete set of diffusive states, and the tendency of the particle to switch between these states is modeled as a random process. We consider two models for this behavior: a mixture model and a hidden Markov model (HMM). For both, we adopt a Bayesian approach to sample the distributions of the underlying parameters and implement a Markov Chain Monte Carlo (MCMC) scheme to compute the posterior distributions, as in Das, Cairo, Coombs (2009). The primary contribution of this work is a study of the robustness of this method to infer parameters of a three-state HMM, and a discussion of the challenges and degeneracies that arise from considering three states. Finally, we investigate the problem of determining the number of diffusive states using model selection criteria. We present results from simulated data that demonstrate proof of concept, and we apply our method to experimentally measured single molecule diffusion trajectories of monomeric phosphoinositide-dependent kinase-1 (PDK1) on a synthetic target membrane, where it can associate with its binding partner protein kinase C alpha isoform (PKC$\alpha$) to form a heterodimer detected by its significantly lower diffusivity. All MATLAB software is available here: \url{https://github.com/MathBioCU/SingleMolecule}
[ { "created": "Thu, 16 Jun 2022 08:38:53 GMT", "version": "v1" }, { "created": "Fri, 17 Jun 2022 06:13:05 GMT", "version": "v2" } ]
2022-06-20
[ [ "Baker", "Lewis R.", "" ], [ "Gordon", "Moshe T.", "" ], [ "Ziemba", "Brian P.", "" ], [ "Gershuny", "Victoria", "" ], [ "Falke", "Joseph J.", "" ], [ "Bortz", "David M.", "" ] ]
Systems driven by Brownian motion are ubiquitous. A prevailing challenge is inferring, from data, the diffusion and kinetic parameters that describe these stochastic processes. In this work, we investigate a multi-state diffusion process that arises in the context of single particle tracking (SPT), wherein the motion of a particle is governed by a discrete set of diffusive states, and the tendency of the particle to switch between these states is modeled as a random process. We consider two models for this behavior: a mixture model and a hidden Markov model (HMM). For both, we adopt a Bayesian approach to sample the distributions of the underlying parameters and implement a Markov Chain Monte Carlo (MCMC) scheme to compute the posterior distributions, as in Das, Cairo, Coombs (2009). The primary contribution of this work is a study of the robustness of this method to infer parameters of a three-state HMM, and a discussion of the challenges and degeneracies that arise from considering three states. Finally, we investigate the problem of determining the number of diffusive states using model selection criteria. We present results from simulated data that demonstrate proof of concept, and we apply our method to experimentally measured single molecule diffusion trajectories of monomeric phosphoinositide-dependent kinase-1 (PDK1) on a synthetic target membrane, where it can associate with its binding partner protein kinase C alpha isoform (PKC$\alpha$) to form a heterodimer detected by its significantly lower diffusivity. All MATLAB software is available here: \url{https://github.com/MathBioCU/SingleMolecule}
1811.07054
Tianyu Zhang
Tianyu Zhang, Liwei Zhang, Philip R.O. Payne, Fuhai Li
Synergistic Drug Combination Prediction by Integrating Multi-omics Data in Deep Learning Models
null
null
null
null
q-bio.GN cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Drug resistance is still a major challenge in cancer therapy. Drug combination is expected to overcome drug resistance. However, the number of possible drug combinations is enormous, and thus it is infeasible to experimentally screen all effective drug combinations given limited resources. Therefore, computational models to predict and prioritize effective drug combinations are important for combinatory therapy discovery in cancer. In this study, we proposed a novel deep learning model, AuDNNsynergy, to predict drug combinations by integrating multi-omics data and chemical structure data. Specifically, three autoencoders were trained using the gene expression, copy number and genetic mutation data of all tumor samples from The Cancer Genome Atlas. Then the physicochemical properties of drugs, combined with the output of the three autoencoders characterizing the individual cancer cell lines, were used as the input of a deep neural network that predicts the synergy value of given pair-wise drug combinations against the specific cancer cell lines. The comparison results showed the proposed AuDNNsynergy model outperforms four state-of-the-art approaches, namely DeepSynergy, Gradient Boosting Machines, Random Forests, and Elastic Nets. Moreover, we conducted an interpretation analysis of the deep learning model to investigate potential vital genetic predictors and the underlying mechanism of synergistic drug combinations on specific cancer cell lines.
[ { "created": "Fri, 16 Nov 2018 22:40:06 GMT", "version": "v1" } ]
2018-11-20
[ [ "Zhang", "Tianyu", "" ], [ "Zhang", "Liwei", "" ], [ "Payne", "Philip R. O.", "" ], [ "Li", "Fuhai", "" ] ]
Drug resistance is still a major challenge in cancer therapy. Drug combination is expected to overcome drug resistance. However, the number of possible drug combinations is enormous, and thus it is infeasible to experimentally screen all effective drug combinations given limited resources. Therefore, computational models to predict and prioritize effective drug combinations are important for combinatory therapy discovery in cancer. In this study, we proposed a novel deep learning model, AuDNNsynergy, to predict drug combinations by integrating multi-omics data and chemical structure data. Specifically, three autoencoders were trained using the gene expression, copy number and genetic mutation data of all tumor samples from The Cancer Genome Atlas. Then the physicochemical properties of drugs, combined with the output of the three autoencoders characterizing the individual cancer cell lines, were used as the input of a deep neural network that predicts the synergy value of given pair-wise drug combinations against the specific cancer cell lines. The comparison results showed the proposed AuDNNsynergy model outperforms four state-of-the-art approaches, namely DeepSynergy, Gradient Boosting Machines, Random Forests, and Elastic Nets. Moreover, we conducted an interpretation analysis of the deep learning model to investigate potential vital genetic predictors and the underlying mechanism of synergistic drug combinations on specific cancer cell lines.
0804.3217
Alexandre Mezentsev
A. Mezentsev
Epidermal corneocytes: dead guards of the hidden treasure
17 pages, 6 figures
null
null
null
q-bio.TO q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The gradual transformation of epidermal stem cells into corneocytes involves a chain of chronologically well-arranged events that are mostly stimulated locally by their neighbors. The cell diversity observed during differentiation through the different epidermal cell layers includes consistent changes in cell shape, intercellular contacts and proliferation. However, these changes appear most dramatically at the molecular level, through gene expression, catalysis and intraprotein interactions. The proposed review explains these changes by the switching of systemic transcription factors that, unlike their counterparts whose role is limited to a contribution to gene expression, also prepare cells for the next step of differentiation via modification of the chromatin pattern. Since primary epidermal keratinocytes are one of the most easily available types of stem cells, a better understanding of epidermal differentiation will benefit research in other areas through the discovery of basic coordinating mechanisms that stand behind such distinct molecular events as cell signaling and gene expression, and will help formulate basic principles for a smart therapeutic correction of metabolism.
[ { "created": "Sun, 20 Apr 2008 21:32:08 GMT", "version": "v1" } ]
2008-04-22
[ [ "Mezentsev", "A.", "" ] ]
The gradual transformation of epidermal stem cells into corneocytes involves a chain of chronologically well-arranged events that are mostly stimulated locally by their neighbors. The cell diversity observed during differentiation through the different epidermal cell layers includes consistent changes in cell shape, intercellular contacts and proliferation. However, these changes appear most dramatically at the molecular level, through gene expression, catalysis and intraprotein interactions. The proposed review explains these changes by the switching of systemic transcription factors that, unlike their counterparts whose role is limited to a contribution to gene expression, also prepare cells for the next step of differentiation via modification of the chromatin pattern. Since primary epidermal keratinocytes are one of the most easily available types of stem cells, a better understanding of epidermal differentiation will benefit research in other areas through the discovery of basic coordinating mechanisms that stand behind such distinct molecular events as cell signaling and gene expression, and will help formulate basic principles for a smart therapeutic correction of metabolism.
2103.04034
Jean Faber
Jo\~ao Vitor da Silva Moreira, Karina A. Rodrigues, Daniel Jos\'e L.L. Pinheiro, Tha\'is C. Santos, Jo\~ao Luiz Vieira, Esper A. Cavalheiro, Jean Faber
Electromyography biofeedback system with visual and vibratory feedback designed for lower limb rehabilitation
19 pages, 5 figures
null
null
null
q-bio.NC q-bio.QM
http://creativecommons.org/licenses/by/4.0/
One of the main causes of long-term prosthetic abandonment is the lack of ownership over the prosthesis, caused mainly by the absence of sensory information regarding the lost limb. One strategy to overcome this problem is to provide alternative feedback mechanisms that convey information about the absent limb. To address this issue, we developed a biofeedback system for the rehabilitation of transfemoral amputees, controlled via electromyographic activity from the leg muscles, that can provide real-time visual and/or vibratory feedback for the user. In this study, we tested this device with able-bodied individuals performing an adapted version of the clinical protocol. Our aim was to test the effectiveness of combining vibratory and visual feedback and how task difficulty affects overall performance. Our results show no negative interference when combining both feedback modalities, and that performance peaked at the intermediate difficulty. These results provide powerful insights into what can be expected in the amputee population and will help in the final steps of protocol development. Our goal is to use this biofeedback system to engage another sensory modality in the process of spatial representation of a virtual leg, bypassing the lack of information associated with the disruption of afferent pathways following amputation.
[ { "created": "Sat, 6 Mar 2021 05:28:35 GMT", "version": "v1" } ]
2021-03-09
[ [ "Moreira", "João Vitor da Silva", "" ], [ "Rodrigues", "Karina A.", "" ], [ "Pinheiro", "Daniel José L. L.", "" ], [ "Santos", "Thaís C.", "" ], [ "Vieira", "João Luiz", "" ], [ "Cavalheiro", "Esper A.", "" ], [ "F...
One of the main causes of long-term prosthetic abandonment is the lack of ownership over the prosthesis, caused mainly by the absence of sensory information regarding the lost limb. One strategy to overcome this problem is to provide alternative feedback mechanisms that convey information about the absent limb. To address this issue, we developed a biofeedback system for the rehabilitation of transfemoral amputees, controlled via electromyographic activity from the leg muscles, that can provide real-time visual and/or vibratory feedback for the user. In this study, we tested this device with able-bodied individuals performing an adapted version of the clinical protocol. Our aim was to test the effectiveness of combining vibratory and visual feedback and how task difficulty affects overall performance. Our results show no negative interference when combining both feedback modalities, and that performance peaked at the intermediate difficulty. These results provide powerful insights into what can be expected in the amputee population and will help in the final steps of protocol development. Our goal is to use this biofeedback system to engage another sensory modality in the process of spatial representation of a virtual leg, bypassing the lack of information associated with the disruption of afferent pathways following amputation.
2310.06191
Karan Taneja
Karan Taneja, Xiaolong He, John Hodgson, Usha Sinha, Shantanu Sinha, J. S. Chen
Investigating the Correlation between Force Output, Strains, and Pressure for Active Skeletal Muscle Contractions
null
null
null
null
q-bio.TO physics.med-ph
http://creativecommons.org/licenses/by/4.0/
Experimental observations suggest that the force output of skeletal muscle tissue can be correlated to the intra-muscular pressure generated by the muscle belly. However, pressure often proves difficult to measure through in-vivo tests. Simulations, on the other hand, offer a tool to model muscle contractions and analyze the relationship between muscle force generation and deformations as well as pressure outputs, enabling us to gain insight into correlations among experimentally measurable quantities such as principal and volumetric strains, and the force output. In this work, a correlation study is performed using Pearson's and Spearman's correlation coefficients on the force output of the skeletal muscle, the principal and volumetric strains experienced by the muscle, and the pressure developed within the muscle belly as the muscle tissue undergoes isometric contractions due to varying activation profiles. The study reveals strong correlations between force output and the strains at all locations of the belly, irrespective of the type of activation profile used. This observation enables estimation of the contribution of various muscle groups to the total force from the experimentally measurable principal and volumetric strains in the muscle belly. It is also observed that pressure does not correlate well with force output due to stress relaxation near the boundary of the muscle belly.
[ { "created": "Mon, 9 Oct 2023 22:42:58 GMT", "version": "v1" } ]
2023-10-11
[ [ "Taneja", "Karan", "" ], [ "He", "Xiaolong", "" ], [ "Hodgson", "John", "" ], [ "Sinha", "Usha", "" ], [ "Sinha", "Shantanu", "" ], [ "Chen", "J. S.", "" ] ]
Experimental observations suggest that the force output of skeletal muscle tissue can be correlated to the intra-muscular pressure generated by the muscle belly. However, pressure often proves difficult to measure through in-vivo tests. Simulations, on the other hand, offer a tool to model muscle contractions and analyze the relationship between muscle force generation and deformations as well as pressure outputs, enabling us to gain insight into correlations among experimentally measurable quantities such as principal and volumetric strains, and the force output. In this work, a correlation study is performed using Pearson's and Spearman's correlation coefficients on the force output of the skeletal muscle, the principal and volumetric strains experienced by the muscle, and the pressure developed within the muscle belly as the muscle tissue undergoes isometric contractions due to varying activation profiles. The study reveals strong correlations between force output and the strains at all locations of the belly, irrespective of the type of activation profile used. This observation enables estimation of the contribution of various muscle groups to the total force from the experimentally measurable principal and volumetric strains in the muscle belly. It is also observed that pressure does not correlate well with force output due to stress relaxation near the boundary of the muscle belly.
1306.3077
Gianluca Martelloni
Alisa Santarlasci, Gianluca Martelloni, Filippo Frizzi, Giacomo Santini and Franco Bagnoli
Modeling Warfare in Social Animals: A "Chemical" Approach
A modified version of this manuscript has been published in PLOS ONE journal
PLoS ONE 2014 9(11): e111310
10.1371/journal.pone.0111310
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The aim of our study is to describe the dynamics of ant battles, with reference to laboratory experiments, by means of a chemical stochastic model. We focus on ant behavior as an interesting topic in order to predict the ecological evolution of invasive species and their spreading. In our work we want to describe the interactions between two groups of different ant species with different war strategies. Our model considers the single ant individuals and fighting groups in a way similar to atoms and molecules, respectively, considering that ant fighting groups remain stable for a relatively long time. Starting from a system of non-linear differential equations (DE), derived from the chemical reactions, we obtain a mean field description of the system. The DE approach is valid when the number of individuals of each species is large in the considered unit, while we consider battles of at most 10 vs. 10 individuals, due to the difficulties in following the individual behavior in a large assembly. Therefore, we also adapt a Gillespie algorithm to reproduce the fluctuations around the mean field. The DE scheme is exploited to characterize the stochastic model. The set of parameters of the chemical equations, obtained using a minimization algorithm, is used by the Gillespie algorithm to generate the stochastic trajectories. We then fit the stochastic paths with the DE, in order to analyze the variability of the parameters and their variance. Finally, we estimate the goodness of the applied methodology and confirm that the stochastic approach must be considered for a correct description of the ant fighting dynamics. With respect to other war models, our chemical one considers all phases of the battle and not only casualties. Thus, we can count on more experimental data, but we also have more parameters to fit. In any case, our model allows a much more detailed description of the fights.
[ { "created": "Thu, 13 Jun 2013 11:13:30 GMT", "version": "v1" }, { "created": "Fri, 29 Nov 2013 12:49:02 GMT", "version": "v2" }, { "created": "Fri, 6 May 2016 12:44:57 GMT", "version": "v3" } ]
2016-05-09
[ [ "Santarlasci", "Alisa", "" ], [ "Martelloni", "Gianluca", "" ], [ "Frizzi", "Filippo", "" ], [ "Santini", "Giacomo", "" ], [ "Bagnoli", "Franco", "" ] ]
The aim of our study is to describe the dynamics of ant battles, with reference to laboratory experiments, by means of a chemical stochastic model. We focus on ant behavior as an interesting topic in order to predict the ecological evolution of invasive species and their spreading. In our work we want to describe the interactions between two groups of different ant species with different war strategies. Our model considers the single ant individuals and fighting groups in a way similar to atoms and molecules, respectively, considering that ant fighting groups remain stable for a relatively long time. Starting from a system of non-linear differential equations (DE), derived from the chemical reactions, we obtain a mean field description of the system. The DE approach is valid when the number of individuals of each species is large in the considered unit, while we consider battles of at most 10 vs. 10 individuals, due to the difficulties in following the individual behavior in a large assembly. Therefore, we also adapt a Gillespie algorithm to reproduce the fluctuations around the mean field. The DE scheme is exploited to characterize the stochastic model. The set of parameters of the chemical equations, obtained using a minimization algorithm, is used by the Gillespie algorithm to generate the stochastic trajectories. We then fit the stochastic paths with the DE, in order to analyze the variability of the parameters and their variance. Finally, we estimate the goodness of the applied methodology and confirm that the stochastic approach must be considered for a correct description of the ant fighting dynamics. With respect to other war models, our chemical one considers all phases of the battle and not only casualties. Thus, we can count on more experimental data, but we also have more parameters to fit. In any case, our model allows a much more detailed description of the fights.
1911.07654
Alena Harley
Alena Harley
Deep Discriminative Fine-Tuning for Cancer Type Classification
4 pages, 1 figure, ML4H NeurIPS Workshop
null
null
null
q-bio.GN cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining the primary site of origin for metastatic tumors is one of the open problems in cancer care, because the efficacy of treatment often depends on the cancer tissue of origin. Classification methods that can leverage tumor genomic data and predict the site of origin are therefore of great value. Because tumor DNA point mutation data is very sparse, only limited accuracy (64.5% for 12 tumor classes) was previously demonstrated by methods that rely on point mutations as features (1). Tumor classification accuracy can be greatly improved (to over 90% for 33 classes) by relying on gene expression data (2). However, this additional data is often not readily available in a clinical setting, because point mutations are better profiled and targeted by clinical mutational profiling. Here we sought to develop an accurate deep transfer learning and fine-tuning method for tumor sub-type classification, where the predicted class is indicative of the primary site of origin. Our method significantly outperforms the state-of-the-art for tumor classification using DNA point mutations, reducing the error by more than 30% while at the same time discriminating over many more classes on The Cancer Genome Atlas (TCGA) dataset. Using our method, we achieve state-of-the-art tumor type classification accuracy of 78.3% for 29 tumor classes relying on DNA point mutations in the tumor only.
[ { "created": "Fri, 15 Nov 2019 07:30:17 GMT", "version": "v1" } ]
2019-11-19
[ [ "Harley", "Alena", "" ] ]
Determining the primary site of origin for metastatic tumors is one of the open problems in cancer care because the efficacy of treatment often depends on the cancer tissue of origin. Classification methods that can leverage tumor genomic data and predict the site of origin are therefore of great value. Because tumor DNA point mutation data is very sparse, only limited accuracy (64.5% for 12 tumor classes) was previously demonstrated by methods that rely on point mutations as features (1). Tumor classification accuracy can be greatly improved (to over 90% for 33 classes) by relying on gene expression data (2). However, this additional data is often not readily available in clinical settings, because point mutations are better profiled and targeted by clinical mutational profiling. Here we sought to develop an accurate deep transfer learning and fine-tuning method for tumor sub-type classification, where the predicted class is indicative of the primary site of origin. Our method significantly outperforms the state-of-the-art for tumor classification using DNA point mutations, reducing the error by more than 30% while at the same time discriminating among many more classes on The Cancer Genome Atlas (TCGA) dataset. Using our method, we achieve state-of-the-art tumor type classification accuracy of 78.3% for 29 tumor classes relying on DNA point mutations in the tumor only.
q-bio/0606012
Michael Prentiss
Michael C. Prentiss, Corey Hardin, Michael P. Eastwood, Chenghong Zong, Peter G. Wolynes
Protein Structure Prediction: The Next Generation
null
Journal of Chemical Theory and Computation, Volume 2 Issue 3 (May 09, 2006)
10.1021/ct0600058
null
q-bio.BM
null
Over the last 10-15 years a general understanding of the chemical reaction of protein folding has emerged from statistical mechanics. The lessons learned from protein folding kinetics based on energy landscape ideas have benefited protein structure prediction, in particular the development of coarse-grained models. We survey results from blind structure prediction. We explore how second-generation prediction energy functions can be developed by introducing information from an ensemble of previously simulated structures. This procedure relies on the assumption of a funnelled energy landscape, in keeping with the principle of minimal frustration. First-generation simulated structures provide an improved input for associative memory energy functions in comparison to the experimental protein structures chosen on the basis of sequence alignment.
[ { "created": "Mon, 12 Jun 2006 13:37:23 GMT", "version": "v1" } ]
2007-05-23
[ [ "Prentiss", "Michael C.", "" ], [ "Hardin", "Corey", "" ], [ "Eastwood", "Michael P.", "" ], [ "Zong", "Chenghong", "" ], [ "Wolynes", "Peter G.", "" ] ]
Over the last 10-15 years a general understanding of the chemical reaction of protein folding has emerged from statistical mechanics. The lessons learned from protein folding kinetics based on energy landscape ideas have benefited protein structure prediction, in particular the development of coarse-grained models. We survey results from blind structure prediction. We explore how second-generation prediction energy functions can be developed by introducing information from an ensemble of previously simulated structures. This procedure relies on the assumption of a funnelled energy landscape, in keeping with the principle of minimal frustration. First-generation simulated structures provide an improved input for associative memory energy functions in comparison to the experimental protein structures chosen on the basis of sequence alignment.
1002.1411
L. John Gagliardi Ph. D
L. John Gagliardi
Continuum Electrostatics in Cell Biology
16 pages, 1 figure
null
null
null
q-bio.CB q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent experiments revealing possible nanoscale electrostatic interactions in force generation at kinetochores for chromosome motions have prompted speculation regarding possible models for interactions between positively charged molecules in kinetochores and negative charge on C-termini near the plus ends of microtubules. A clear picture of how kinetochores establish and maintain a dynamic coupling to microtubules for force generation during the complex motions of mitosis remains elusive. The current paradigm of molecular cell biology requires that specific molecules, or molecular geometries, for force generation be identified. However, it is possible to account for mitotic motions within a classical electrostatics approach in terms of experimentally known cellular electric charge interacting over nanometer distances. These charges are modeled as bound surface and volume continuum charge distributions. Electrostatic consequences of intracellular pH changes during mitosis may provide a master clock for the events of mitosis.
[ { "created": "Sat, 6 Feb 2010 20:30:18 GMT", "version": "v1" } ]
2010-02-09
[ [ "Gagliardi", "L. John", "" ] ]
Recent experiments revealing possible nanoscale electrostatic interactions in force generation at kinetochores for chromosome motions have prompted speculation regarding possible models for interactions between positively charged molecules in kinetochores and negative charge on C-termini near the plus ends of microtubules. A clear picture of how kinetochores establish and maintain a dynamic coupling to microtubules for force generation during the complex motions of mitosis remains elusive. The current paradigm of molecular cell biology requires that specific molecules, or molecular geometries, for force generation be identified. However, it is possible to account for mitotic motions within a classical electrostatics approach in terms of experimentally known cellular electric charge interacting over nanometer distances. These charges are modeled as bound surface and volume continuum charge distributions. Electrostatic consequences of intracellular pH changes during mitosis may provide a master clock for the events of mitosis.
q-bio/0311023
Rhoda Hawkins
Rhoda J. Hawkins and Thomas C. B. McLeish
Coarse-Grained Model of Entropic Allostery
4 pages 4 figures
null
10.1103/PhysRevLett.93.098104
null
q-bio.BM
null
Many signalling functions in molecular biology require that proteins bind to substrates such as DNA in response to environmental signals, such as the simultaneous binding of a small molecule. Examples are repressor proteins, which may transmit information via a conformational change in response to ligand binding. An alternative entropic mechanism of "allostery" suggests that the inducer ligand changes the intramolecular vibrational entropy, not just the static structure. We present a quantitative, coarse-grained model of entropic allostery that suggests design rules for internal cohesive potentials in proteins employing this effect. It also addresses the issue of how the signal information to bind or unbind is transmitted through the protein. The model may be applicable to a wide range of repressors and also to signalling in transmembrane proteins.
[ { "created": "Tue, 18 Nov 2003 12:22:01 GMT", "version": "v1" }, { "created": "Fri, 30 Jan 2004 12:12:51 GMT", "version": "v2" } ]
2009-11-10
[ [ "Hawkins", "Rhoda J.", "" ], [ "McLeish", "Thomas C. B.", "" ] ]
Many signalling functions in molecular biology require that proteins bind to substrates such as DNA in response to environmental signals, such as the simultaneous binding of a small molecule. Examples are repressor proteins, which may transmit information via a conformational change in response to ligand binding. An alternative entropic mechanism of "allostery" suggests that the inducer ligand changes the intramolecular vibrational entropy, not just the static structure. We present a quantitative, coarse-grained model of entropic allostery that suggests design rules for internal cohesive potentials in proteins employing this effect. It also addresses the issue of how the signal information to bind or unbind is transmitted through the protein. The model may be applicable to a wide range of repressors and also to signalling in transmembrane proteins.
2006.15414
Saman Hosseini Ashtiani
Reyhaneh Naderi, Homa Saadati Mollaei, Arne Elofsson and Saman Hosseini Ashtiani
Using micro- and macro-level network metrics unveils top communicative gene modules in psoriasis
null
null
10.3390/genes11080914
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-sa/4.0/
Background: Psoriasis is a multifactorial chronic inflammatory disorder of the skin with significant morbidity, characterized by hyperproliferation of the epidermis. Even though psoriasis etiology is not fully understood, it is believed to be multifactorial with numerous key components. Methods: In order to cast light on the complex molecular interactions in psoriasis vulgaris at both the protein-protein interaction and transcriptomic levels, we analyzed a microarray gene expression dataset consisting of 170 paired lesional and non-lesional samples. Afterwards, a network analysis was conducted on the protein-protein interaction network of differentially expressed genes based on micro- and macro-level network metrics from a systemic-level standpoint. Results: We identified 17 top communicative genes, all experimentally proven to be pivotal in psoriasis, within two modules, namely cell cycle and immune system. Intra- and inter-gene interaction subnetworks derived from the top communicative genes might provide further insight into the corresponding characteristic mechanisms. Conclusions: Potential gene combinations for therapeutic/diagnostic purposes were identified. Moreover, our proposed pipeline could be of interest to a broader range of biological network analysis studies.
[ { "created": "Sat, 27 Jun 2020 17:53:37 GMT", "version": "v1" } ]
2021-09-21
[ [ "Naderi", "Reyhaneh", "" ], [ "Mollaei", "Homa Saadati", "" ], [ "Elofsson", "Arne", "" ], [ "Ashtiani", "Saman Hosseini", "" ] ]
Background: Psoriasis is a multifactorial chronic inflammatory disorder of the skin with significant morbidity, characterized by hyperproliferation of the epidermis. Even though psoriasis etiology is not fully understood, it is believed to be multifactorial with numerous key components. Methods: In order to cast light on the complex molecular interactions in psoriasis vulgaris at both the protein-protein interaction and transcriptomic levels, we analyzed a microarray gene expression dataset consisting of 170 paired lesional and non-lesional samples. Afterwards, a network analysis was conducted on the protein-protein interaction network of differentially expressed genes based on micro- and macro-level network metrics from a systemic-level standpoint. Results: We identified 17 top communicative genes, all experimentally proven to be pivotal in psoriasis, within two modules, namely cell cycle and immune system. Intra- and inter-gene interaction subnetworks derived from the top communicative genes might provide further insight into the corresponding characteristic mechanisms. Conclusions: Potential gene combinations for therapeutic/diagnostic purposes were identified. Moreover, our proposed pipeline could be of interest to a broader range of biological network analysis studies.
2204.11999
Wayne Hayes
Siyue Wang, Xiaoyin Chen, Brent J. Frederisy, Benedict A. Mbakogu, Amy D. Kanne, Pasha Khosravi, and Wayne B. Hayes
On the current failure -- but bright future -- of topology-driven biological network alignment
21 pages + 7-page Supplementary; 6 tables, 11 Figures
"On the current failure--but bright future--of topology-driven biological network alignment". Advances in Protein Chemistry and Structural Biology, Volume 131 (2022), pp. 1-44
10.1016/bs.apcsb.2022.05.005
null
q-bio.MN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
The function of a protein is defined by its interaction partners. Thus, topology-driven network alignment of the protein-protein interaction (PPI) networks of two species should uncover similar interaction patterns and allow identification of functionally similar proteins. However, few of the fifty or more algorithms for PPI network alignment have demonstrated a significant link between network topology and functional similarity, and none have recovered orthologs using network topology alone. We find that the major contributing factors to this failure are: (i) edge densities in current PPI networks are too low to expect topological network alignment to succeed; (ii) when edge densities are high enough, some measures of topological similarity easily uncover functionally similar proteins while others do not; and (iii) most network alignment algorithms fail to optimize their own topological objective functions, hampering their ability to use topology effectively. We demonstrate that SANA, the Simulated Annealing Network Aligner, significantly outperforms existing aligners at optimizing their own objective functions, even achieving near-optimal solutions when the optimal solution is known. We offer the first demonstration of global network alignments based on topology alone that align functionally similar proteins with p-values in some cases below 1e-300. We predict that topological network alignment has a bright future as edge densities increase towards the value where good alignments become possible. We demonstrate that when enough common topology is present at high enough edge densities, for example in the recent, partly synthetic networks of the Integrated Interaction Database, topological network alignment easily recovers most orthologs, paving the way towards high-throughput functional prediction based on topology-driven network alignment.
[ { "created": "Mon, 25 Apr 2022 23:44:13 GMT", "version": "v1" } ]
2022-08-29
[ [ "Wang", "Siyue", "" ], [ "Chen", "Xiaoyin", "" ], [ "Frederisy", "Brent J.", "" ], [ "Mbakogu", "Benedict A.", "" ], [ "Kanne", "Amy D.", "" ], [ "Khosravi", "Pasha", "" ], [ "Hayes", "Wayne B.", "" ] ]
The function of a protein is defined by its interaction partners. Thus, topology-driven network alignment of the protein-protein interaction (PPI) networks of two species should uncover similar interaction patterns and allow identification of functionally similar proteins. However, few of the fifty or more algorithms for PPI network alignment have demonstrated a significant link between network topology and functional similarity, and none have recovered orthologs using network topology alone. We find that the major contributing factors to this failure are: (i) edge densities in current PPI networks are too low to expect topological network alignment to succeed; (ii) when edge densities are high enough, some measures of topological similarity easily uncover functionally similar proteins while others do not; and (iii) most network alignment algorithms fail to optimize their own topological objective functions, hampering their ability to use topology effectively. We demonstrate that SANA, the Simulated Annealing Network Aligner, significantly outperforms existing aligners at optimizing their own objective functions, even achieving near-optimal solutions when the optimal solution is known. We offer the first demonstration of global network alignments based on topology alone that align functionally similar proteins with p-values in some cases below 1e-300. We predict that topological network alignment has a bright future as edge densities increase towards the value where good alignments become possible. We demonstrate that when enough common topology is present at high enough edge densities, for example in the recent, partly synthetic networks of the Integrated Interaction Database, topological network alignment easily recovers most orthologs, paving the way towards high-throughput functional prediction based on topology-driven network alignment.
2111.03122
Marie-Constance Corsi
Marie-Constance Corsi, Sylvain Chevallier, Fabrizio De Vico Fallani and Florian Yger
Functional connectivity ensemble method to enhance BCI performance (FUCONE)
null
null
null
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Functional connectivity is a key approach to investigating oscillatory activities of the brain; it provides important insights into the underlying dynamics of neuronal interactions and is mostly applied for brain activity analysis. Building on the advances in information geometry for brain-computer interfaces, we propose a novel framework that combines functional connectivity estimators and covariance-based pipelines to classify mental states, such as motor imagery. A Riemannian classifier is trained for each estimator and an ensemble classifier combines the decisions in each feature space. A thorough assessment of the functional connectivity estimators is provided and the best performing pipeline, called FUCONE, is evaluated on different conditions and datasets. Using a meta-analysis to aggregate results across datasets, FUCONE performed significantly better than all state-of-the-art methods. The performance gain is mostly attributable to the improved diversity of the feature spaces, increasing the robustness of the ensemble classifier with respect to inter- and intra-subject variability.
[ { "created": "Thu, 4 Nov 2021 19:40:08 GMT", "version": "v1" }, { "created": "Wed, 16 Feb 2022 16:31:29 GMT", "version": "v2" } ]
2022-02-17
[ [ "Corsi", "Marie-Constance", "" ], [ "Chevallier", "Sylvain", "" ], [ "Fallani", "Fabrizio De Vico", "" ], [ "Yger", "Florian", "" ] ]
Functional connectivity is a key approach to investigating oscillatory activities of the brain; it provides important insights into the underlying dynamics of neuronal interactions and is mostly applied for brain activity analysis. Building on the advances in information geometry for brain-computer interfaces, we propose a novel framework that combines functional connectivity estimators and covariance-based pipelines to classify mental states, such as motor imagery. A Riemannian classifier is trained for each estimator and an ensemble classifier combines the decisions in each feature space. A thorough assessment of the functional connectivity estimators is provided and the best performing pipeline, called FUCONE, is evaluated on different conditions and datasets. Using a meta-analysis to aggregate results across datasets, FUCONE performed significantly better than all state-of-the-art methods. The performance gain is mostly attributable to the improved diversity of the feature spaces, increasing the robustness of the ensemble classifier with respect to inter- and intra-subject variability.
1210.0048
J. C. Phillips
J. C. Phillips
Self-Organized Criticality: A Prophetic Path to Curing Cancer
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While the concepts involved in Self-Organized Criticality have stimulated thousands of theoretical models, only recently have these models addressed problems of biological and clinical importance. Here we outline how SOC can be used to engineer hybrid viral proteins whose properties, extrapolated from those of known strains, may be sufficiently effective to cure cancer.
[ { "created": "Fri, 28 Sep 2012 21:57:27 GMT", "version": "v1" } ]
2012-10-02
[ [ "Phillips", "J. C.", "" ] ]
While the concepts involved in Self-Organized Criticality have stimulated thousands of theoretical models, only recently have these models addressed problems of biological and clinical importance. Here we outline how SOC can be used to engineer hybrid viral proteins whose properties, extrapolated from those of known strains, may be sufficiently effective to cure cancer.
2102.04628
Youming Li
Youming Li and Da-Quan Jiang and Chen Jia
Steady-state joint distribution for first-order stochastic reaction kinetics
33 pages, 6 figures
Phys. Rev. E 104, 024408 (2021)
10.1103/PhysRevE.104.024402
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While the analytical solution for the marginal distribution of a stochastic chemical reaction network has been extensively studied, its joint distribution, i.e. the solution of a high-dimensional chemical master equation, has received much less attention. Here we develop a novel method of computing the exact joint distributions of a wide class of first-order stochastic reaction systems in steady-state conditions. The effectiveness of our method is validated by applying it to four gene expression models of biological significance, including models with 2A peptides, nascent mRNA, gene regulation, translational bursting, and alternative splicing.
[ { "created": "Tue, 9 Feb 2021 03:34:43 GMT", "version": "v1" }, { "created": "Tue, 9 Nov 2021 05:16:19 GMT", "version": "v2" }, { "created": "Sat, 13 Nov 2021 13:12:07 GMT", "version": "v3" } ]
2021-11-16
[ [ "Li", "Youming", "" ], [ "Jiang", "Da-Quan", "" ], [ "Jia", "Chen", "" ] ]
While the analytical solution for the marginal distribution of a stochastic chemical reaction network has been extensively studied, its joint distribution, i.e. the solution of a high-dimensional chemical master equation, has received much less attention. Here we develop a novel method of computing the exact joint distributions of a wide class of first-order stochastic reaction systems in steady-state conditions. The effectiveness of our method is validated by applying it to four gene expression models of biological significance, including models with 2A peptides, nascent mRNA, gene regulation, translational bursting, and alternative splicing.
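For a flavor of the kind of steady-state master-equation solution this abstract concerns, consider the simplest first-order system: a birth-death process 0 -> X (rate k), X -> 0 (rate gamma per molecule). This toy example is illustrative only; the paper treats multi-species networks and joint distributions. Its steady state follows from detailed balance, p(n+1)/p(n) = k / (gamma*(n+1)), and is the well-known Poisson distribution with mean k/gamma.

```python
import math

def birth_death_steady_state(k, gamma, n_max=50):
    """Steady state of the chemical master equation for the first-order
    birth-death system 0 -> X (rate k), X -> 0 (rate gamma per molecule),
    built from the detailed-balance recursion and truncated at n_max.
    Returns the normalized probabilities p(0..n_max)."""
    p = [1.0]
    for n in range(n_max):
        # Recursion: p(n+1) = p(n) * k / (gamma * (n + 1))
        p.append(p[-1] * k / (gamma * (n + 1)))
    z = sum(p)
    return [x / z for x in p]

p = birth_death_steady_state(k=4.0, gamma=1.0)
# Known exact result: the steady state is Poisson with mean k/gamma = 4.
```

With n_max large relative to the mean, the truncation error is negligible and the computed distribution matches the Poisson pmf to machine precision.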
1901.00497
Shiva Rudraraju
Shiva Rudraraju, Derek E. Moulton, R\'egis Chirat, Alain Goriely, Krishna Garikipati
A computational framework for the morpho-elastic development of molluskan shells by surface and volume growth
Main article is 20 pages long with 15 figures. Supplementary material is 4 pages long with 6 figures and 6 attached movies. To be published in PLOS Computational Biology
PLOS Computational Biology 15(7): e1007213; 2019
10.1371/journal.pcbi.1007213
null
q-bio.QM physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mollusk shells are an ideal model system for understanding the morpho-elastic basis of morphological evolution of invertebrates' exoskeletons. During the formation of the shell, the mantle tissue secretes proteins and minerals that calcify to form a new incremental layer of the exoskeleton. Most of the existing literature on the morphology of mollusks is descriptive. The mathematical understanding of the underlying coupling between pre-existing shell morphology, de novo surface deposition and morpho-elastic volume growth is at a nascent stage, primarily limited to reduced geometric representations. Here, we propose a general, three-dimensional computational framework coupling pre-existing morphology, incremental surface growth by accretion, and morpho-elastic volume growth. We exercise this framework by applying it to explain the stepwise morphogenesis of seashells during growth: new material surfaces are laid down by accretive growth on the mantle whose form is determined by its morpho-elastic growth. Calcification of the newest surfaces extends the shell as well as creates a new scaffold that constrains the next growth step. We study the effects of surface and volumetric growth rates, and of previously deposited shell geometries on the resulting modes of mantle deformation, and therefore of the developing shell's morphology. Connections are made to a range of complex shell ornamentations.
[ { "created": "Wed, 2 Jan 2019 21:50:45 GMT", "version": "v1" }, { "created": "Sat, 4 May 2019 18:15:09 GMT", "version": "v2" }, { "created": "Mon, 8 Jul 2019 22:58:09 GMT", "version": "v3" } ]
2019-08-12
[ [ "Rudraraju", "Shiva", "" ], [ "Moulton", "Derek E.", "" ], [ "Chirat", "Régis", "" ], [ "Goriely", "Alain", "" ], [ "Garikipati", "Krishna", "" ] ]
Mollusk shells are an ideal model system for understanding the morpho-elastic basis of morphological evolution of invertebrates' exoskeletons. During the formation of the shell, the mantle tissue secretes proteins and minerals that calcify to form a new incremental layer of the exoskeleton. Most of the existing literature on the morphology of mollusks is descriptive. The mathematical understanding of the underlying coupling between pre-existing shell morphology, de novo surface deposition and morpho-elastic volume growth is at a nascent stage, primarily limited to reduced geometric representations. Here, we propose a general, three-dimensional computational framework coupling pre-existing morphology, incremental surface growth by accretion, and morpho-elastic volume growth. We exercise this framework by applying it to explain the stepwise morphogenesis of seashells during growth: new material surfaces are laid down by accretive growth on the mantle whose form is determined by its morpho-elastic growth. Calcification of the newest surfaces extends the shell as well as creates a new scaffold that constrains the next growth step. We study the effects of surface and volumetric growth rates, and of previously deposited shell geometries on the resulting modes of mantle deformation, and therefore of the developing shell's morphology. Connections are made to a range of complex shell ornamentations.
2407.19343
Jingcheng Xu
Jingcheng Xu and C\'ecile An\'e
A consistent least-squares criterion for calibrating edge lengths in phylogenetic networks
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
In phylogenetic networks, it is desirable to estimate edge lengths in substitutions per site or calendar time. Yet, there is a lack of scalable methods that provide such estimates. Here we consider the problem of obtaining edge length estimates from genetic distances, in the presence of rate variation across genes and lineages, when the network topology is known. We propose a novel criterion based on least-squares that is both consistent and computationally tractable. The crux of our approach is to decompose the genetic distances into two parts, one of which is invariant across displayed trees of the network. The scaled genetic distances are then fitted to the invariant part, while the average scaled genetic distances are fitted to the non-invariant part. We show that this criterion is consistent provided that there exists a tree path between some pair of tips in the network, and that edge lengths in the network are identifiable from average distances. We also provide a constrained variant of this criterion assuming a molecular clock, which can be used to obtain relative edge lengths in calendar time.
[ { "created": "Sat, 27 Jul 2024 21:25:42 GMT", "version": "v1" }, { "created": "Fri, 2 Aug 2024 19:17:20 GMT", "version": "v2" } ]
2024-08-06
[ [ "Xu", "Jingcheng", "" ], [ "Ané", "Cécile", "" ] ]
In phylogenetic networks, it is desirable to estimate edge lengths in substitutions per site or calendar time. Yet, there is a lack of scalable methods that provide such estimates. Here we consider the problem of obtaining edge length estimates from genetic distances, in the presence of rate variation across genes and lineages, when the network topology is known. We propose a novel criterion based on least-squares that is both consistent and computationally tractable. The crux of our approach is to decompose the genetic distances into two parts, one of which is invariant across displayed trees of the network. The scaled genetic distances are then fitted to the invariant part, while the average scaled genetic distances are fitted to the non-invariant part. We show that this criterion is consistent provided that there exists a tree path between some pair of tips in the network, and that edge lengths in the network are identifiable from average distances. We also provide a constrained variant of this criterion assuming a molecular clock, which can be used to obtain relative edge lengths in calendar time.
2407.15132
Ari Tchetchenian
Ari Tchetchenian, Leo Zekelman, Yuqian Chen, Jarrett Rushmore, Fan Zhang, Edward H. Yeterian, Nikos Makris, Yogesh Rathi, Erik Meijering, Yang Song, Lauren J. O'Donnell
Deep multimodal saliency parcellation of cerebellar pathways: linking microstructure and individual function through explainable multitask learning
null
null
null
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Parcellation of human cerebellar pathways is essential for advancing our understanding of the human brain. Existing diffusion MRI tractography parcellation methods have been successful in defining major cerebellar fibre tracts, while relying solely on fibre tract structure. However, each fibre tract may relay information related to multiple cognitive and motor functions of the cerebellum. Hence, it may be beneficial for parcellation to consider the potential importance of the fibre tracts for individual motor and cognitive functional performance measures. In this work, we propose a multimodal data-driven method for cerebellar pathway parcellation, which incorporates both measures of microstructure and connectivity, and measures of individual functional performance. Our method involves first training a multitask deep network to predict various cognitive and motor measures from a set of fibre tract structural features. The importance of each structural feature for predicting each functional measure is then computed, resulting in a set of structure-function saliency values that are clustered to parcellate cerebellar pathways. We refer to our method as Deep Multimodal Saliency Parcellation (DeepMSP), as it computes the saliency of structural measures for predicting cognitive and motor functional performance, with these saliencies being applied to the task of parcellation. Applying DeepMSP we found that it was feasible to identify multiple cerebellar pathway parcels with unique structure-function saliency patterns that were stable across training folds.
[ { "created": "Sun, 21 Jul 2024 12:05:45 GMT", "version": "v1" } ]
2024-07-23
[ [ "Tchetchenian", "Ari", "" ], [ "Zekelman", "Leo", "" ], [ "Chen", "Yuqian", "" ], [ "Rushmore", "Jarrett", "" ], [ "Zhang", "Fan", "" ], [ "Yeterian", "Edward H.", "" ], [ "Makris", "Nikos", "" ], [ ...
Parcellation of human cerebellar pathways is essential for advancing our understanding of the human brain. Existing diffusion MRI tractography parcellation methods have been successful in defining major cerebellar fibre tracts, while relying solely on fibre tract structure. However, each fibre tract may relay information related to multiple cognitive and motor functions of the cerebellum. Hence, it may be beneficial for parcellation to consider the potential importance of the fibre tracts for individual motor and cognitive functional performance measures. In this work, we propose a multimodal data-driven method for cerebellar pathway parcellation, which incorporates both measures of microstructure and connectivity, and measures of individual functional performance. Our method involves first training a multitask deep network to predict various cognitive and motor measures from a set of fibre tract structural features. The importance of each structural feature for predicting each functional measure is then computed, resulting in a set of structure-function saliency values that are clustered to parcellate cerebellar pathways. We refer to our method as Deep Multimodal Saliency Parcellation (DeepMSP), as it computes the saliency of structural measures for predicting cognitive and motor functional performance, with these saliencies being applied to the task of parcellation. Applying DeepMSP we found that it was feasible to identify multiple cerebellar pathway parcels with unique structure-function saliency patterns that were stable across training folds.
1503.07490
Subhajit Sengupta
Subhajit Sengupta and Karthik S. Gurumoorthy and Arunava Banerjee
Sensitivity Analysis for additive STDP rule
On Computational Neuroscience; 11 pages
null
null
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spike Timing Dependent Plasticity (STDP) is a Hebbian-like synaptic learning rule. STDP has strong experimental support, and it depends on precise input and output spike timings. In this paper we show that, under a biologically plausible spiking regime, slight variability in the spike timing leads to drastically different evolution of the synaptic weights when their dynamics are governed by the additive STDP rule.
[ { "created": "Sat, 28 Feb 2015 16:07:20 GMT", "version": "v1" }, { "created": "Mon, 13 Apr 2015 13:59:32 GMT", "version": "v2" } ]
2015-04-14
[ [ "Sengupta", "Subhajit", "" ], [ "Gurumoorthy", "Karthik S.", "" ], [ "Banerjee", "Arunava", "" ] ]
Spike Timing Dependent Plasticity (STDP) is a Hebbian-like synaptic learning rule. STDP has strong experimental support, and it depends on precise input and output spike timings. In this paper we show that, under a biologically plausible spiking regime, slight variability in the spike timing leads to drastically different evolution of the synaptic weights when their dynamics are governed by the additive STDP rule.
2112.12062
Sebastian Contreras
Philipp D\"onges, Joel Wagner, Sebastian Contreras, Emil Iftekhar, Simon Bauer, Sebastian B. Mohr, Jonas Dehning, Andr\'e Calero Valdez, Mirjam Kretzschmar, Michael M\"as, Kai Nagel, Viola Priesemann
Interplay between risk perception, behaviour, and COVID-19 spread
Final version after peer-review (version of record)
Front. Phys. 10 (2022):842180
10.3389/fphy.2022.842180
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pharmaceutical and non-pharmaceutical interventions (NPIs) have been crucial for controlling COVID-19. They are complemented by voluntary health-protective behaviour, building a complex interplay between risk perception, behaviour, and disease spread. We studied how voluntary health-protective behaviour and vaccination willingness impact the long-term dynamics. We analysed how different levels of mandatory NPIs determine how individuals use their leeway for voluntary actions. If mandatory NPIs are too weak, COVID-19 incidence will surge, implying high morbidity and mortality before individuals react; if they are too strong, one expects a rebound wave once restrictions are lifted, challenging the transition to endemicity. Conversely, moderate mandatory NPIs give individuals time and room to adapt their level of caution, mitigating disease spread effectively. When complemented with high vaccination rates, this also offers a robust way to limit the impacts of the Omicron variant of concern. Altogether, our work highlights the importance of appropriate mandatory NPIs to maximise the impact of individual voluntary actions in pandemic control.
[ { "created": "Wed, 22 Dec 2021 17:33:27 GMT", "version": "v1" }, { "created": "Fri, 11 Mar 2022 11:27:08 GMT", "version": "v2" } ]
2022-03-14
[ [ "Dönges", "Philipp", "" ], [ "Wagner", "Joel", "" ], [ "Contreras", "Sebastian", "" ], [ "Iftekhar", "Emil", "" ], [ "Bauer", "Simon", "" ], [ "Mohr", "Sebastian B.", "" ], [ "Dehning", "Jonas", "" ], [...
Pharmaceutical and non-pharmaceutical interventions (NPIs) have been crucial for controlling COVID-19. They are complemented by voluntary health-protective behaviour, building a complex interplay between risk perception, behaviour, and disease spread. We studied how voluntary health-protective behaviour and vaccination willingness impact the long-term dynamics. We analysed how different levels of mandatory NPIs determine how individuals use their leeway for voluntary actions. If mandatory NPIs are too weak, COVID-19 incidence will surge, implying high morbidity and mortality before individuals react; if they are too strong, one expects a rebound wave once restrictions are lifted, challenging the transition to endemicity. Conversely, moderate mandatory NPIs give individuals time and room to adapt their level of caution, mitigating disease spread effectively. When complemented with high vaccination rates, this also offers a robust way to limit the impacts of the Omicron variant of concern. Altogether, our work highlights the importance of appropriate mandatory NPIs to maximise the impact of individual voluntary actions in pandemic control.
2210.12306
Noah Rosenberg
Jazlyn A. Mooney, Lily Agranat-Tamir, Jonathan K. Pritchard, Noah A. Rosenberg
On the number of genealogical ancestors tracing to the source groups of an admixed population
37 pages, 8 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
In genetically admixed populations, admixed individuals possess ancestry from multiple source groups. Studies of human genetic admixture frequently estimate ancestry components corresponding to fractions of individual genomes that trace to specific ancestral populations. However, the same numerical ancestry fraction can represent a wide array of admixture scenarios. Using a mechanistic model of admixture, we characterize admixture genealogically: how many distinct ancestors from the source populations does the admixture represent? We consider African Americans, for whom continent-level estimates produce a 75-85% value for African ancestry on average and 15-25% for European ancestry. Genetic studies together with key features of African-American demographic history suggest ranges for model parameters. Using the model, we infer that if genealogical lineages of a random African American born during 1960-1965 are traced back until they reach members of source populations, the expected number of genealogical lines terminating with African individuals is 314, and the expected number terminating in Europeans is 51. Across discrete generations, the peak number of African genealogical ancestors occurs for birth cohorts from the early 1700s. The probability exceeds 50% that at least one European ancestor was born more recently than 1835. Our genealogical perspective can contribute to further understanding the admixture processes that underlie admixed populations. For African Americans, the results provide insight both on how many of the ancestors of a typical African American might have been forcibly displaced in the Transatlantic Slave Trade and on how many separate European admixture events might exist in a typical African-American genealogy.
[ { "created": "Sat, 22 Oct 2022 00:01:59 GMT", "version": "v1" } ]
2022-10-25
[ [ "Mooney", "Jazlyn A.", "" ], [ "Agranat-Tamir", "Lily", "" ], [ "Pritchard", "Jonathan K.", "" ], [ "Rosenberg", "Noah A.", "" ] ]
In genetically admixed populations, admixed individuals possess ancestry from multiple source groups. Studies of human genetic admixture frequently estimate ancestry components corresponding to fractions of individual genomes that trace to specific ancestral populations. However, the same numerical ancestry fraction can represent a wide array of admixture scenarios. Using a mechanistic model of admixture, we characterize admixture genealogically: how many distinct ancestors from the source populations does the admixture represent? We consider African Americans, for whom continent-level estimates produce a 75-85% value for African ancestry on average and 15-25% for European ancestry. Genetic studies together with key features of African-American demographic history suggest ranges for model parameters. Using the model, we infer that if genealogical lineages of a random African American born during 1960-1965 are traced back until they reach members of source populations, the expected number of genealogical lines terminating with African individuals is 314, and the expected number terminating in Europeans is 51. Across discrete generations, the peak number of African genealogical ancestors occurs for birth cohorts from the early 1700s. The probability exceeds 50% that at least one European ancestor was born more recently than 1835. Our genealogical perspective can contribute to further understanding the admixture processes that underlie admixed populations. For African Americans, the results provide insight both on how many of the ancestors of a typical African American might have been forcibly displaced in the Transatlantic Slave Trade and on how many separate European admixture events might exist in a typical African-American genealogy.
1211.4766
Pablo Barttfeld
Pablo Barttfeld, Bruno Wicker, Sebasti\'an Cukier, Silvana Navarta, Sergio Lew, Ram\'on Leiguarda and Mariano Sigman
State-dependent changes of connectivity patterns and functional brain network topology in Autism Spectrum Disorder
null
Neuropsychologia. 2012 Oct 5;50(14):3653-3662
10.1016/j.neuropsychologia.2012.09.047
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anatomical and functional brain studies have converged to the hypothesis that Autism Spectrum Disorders (ASD) are associated with atypical connectivity. Using a modified resting-state paradigm to drive subjects' attention, we provide evidence of a very marked interaction between ASD brain functional connectivity and cognitive state. We show that functional connectivity changes in opposite ways in ASD and typicals as attention shifts from the external world towards one's body-generated information. Furthermore, ASD subjects alter their connectivity across cognitive states more markedly than typicals. Using differences in brain connectivity across conditions, we classified ASD subjects with a performance around 80%, while classification based on the connectivity patterns in any given cognitive state was close to chance. Connectivity between the Anterior Insula and dorsal-anterior Cingulate Cortex showed the highest classification accuracy, and its strength increased with ASD severity. These results pave the way for diagnosis of mental pathologies based on functional brain networks obtained from a library of mental states.
[ { "created": "Tue, 20 Nov 2012 15:04:40 GMT", "version": "v1" } ]
2012-11-21
[ [ "Barttfeld", "Pablo", "" ], [ "Wicker", "Bruno", "" ], [ "Cukier", "Sebastián", "" ], [ "Navarta", "Silvana", "" ], [ "Lew", "Sergio", "" ], [ "Leiguarda", "Ramón", "" ], [ "Sigman", "Mariano", "" ] ]
Anatomical and functional brain studies have converged to the hypothesis that Autism Spectrum Disorders (ASD) are associated with atypical connectivity. Using a modified resting-state paradigm to drive subjects' attention, we provide evidence of a very marked interaction between ASD brain functional connectivity and cognitive state. We show that functional connectivity changes in opposite ways in ASD and typicals as attention shifts from the external world towards one's body-generated information. Furthermore, ASD subjects alter their connectivity across cognitive states more markedly than typicals. Using differences in brain connectivity across conditions, we classified ASD subjects with a performance around 80%, while classification based on the connectivity patterns in any given cognitive state was close to chance. Connectivity between the Anterior Insula and dorsal-anterior Cingulate Cortex showed the highest classification accuracy, and its strength increased with ASD severity. These results pave the way for diagnosis of mental pathologies based on functional brain networks obtained from a library of mental states.
1112.0651
Gilles Guillot
Gilles Guillot and Fran\c{c}ois Rousset
Dismantling the Mantel tests
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The simple and partial Mantel tests are routinely used in many areas of evolutionary biology to assess the significance of the association between two or more matrices of distances relative to the same pairs of individuals or demes. Partial Mantel tests rather than simple Mantel tests are widely used to assess the relationship between two variables displaying some form of structure. We show that, contrary to a widely shared belief, partial Mantel tests are not valid in this case, and their bias remains close to that of the simple Mantel test. We confirm that strong biases are expected under a sampling design and spatial correlation parameter drawn from an actual study. The Mantel tests should not be used when autocorrelation is suspected in both variables compared under the null hypothesis. We outline alternative strategies. The R code used for our computer simulations is distributed as supporting material.
[ { "created": "Sat, 3 Dec 2011 12:52:38 GMT", "version": "v1" }, { "created": "Wed, 31 Oct 2012 09:54:06 GMT", "version": "v2" } ]
2012-11-01
[ [ "Guillot", "Gilles", "" ], [ "Rousset", "François", "" ] ]
The simple and partial Mantel tests are routinely used in many areas of evolutionary biology to assess the significance of the association between two or more matrices of distances relative to the same pairs of individuals or demes. Partial Mantel tests rather than simple Mantel tests are widely used to assess the relationship between two variables displaying some form of structure. We show that, contrary to a widely shared belief, partial Mantel tests are not valid in this case, and their bias remains close to that of the simple Mantel test. We confirm that strong biases are expected under a sampling design and spatial correlation parameter drawn from an actual study. The Mantel tests should not be used when autocorrelation is suspected in both variables compared under the null hypothesis. We outline alternative strategies. The R code used for our computer simulations is distributed as supporting material.