Column schema (name, type, observed range across the dataset):

  id              string       length 9-13
  submitter       string       length 4-48
  authors         string       length 4-9.62k
  title           string       length 4-343
  comments        string       length 2-480
  journal-ref     string       length 9-309
  doi             string       length 12-138
  report-no       categorical  277 distinct values
  categories      string       length 8-87
  license         categorical  9 distinct values
  orig_abstract   string       length 27-3.76k
  versions        list         length 1-15
  update_date     string       length 10-10
  authors_parsed  list         length 1-147
  abstract        string       length 24-3.75k
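The rows below follow this schema, one field per line in column order. As a minimal sketch of how such records might be queried (the record literal is copied from row 2401.17478 below with fields abbreviated; the helper names `has_category` and `surnames` are illustrative, not part of any dataset API):

```python
# One record in the schema above, abbreviated to the fields used here.
record = {
    "id": "2401.17478",
    "categories": "q-bio.NC cond-mat.dis-nn",
    "authors_parsed": [["Menesse", "Gustavo", ""], ["Torres", "Joaquin J.", ""]],
}

def has_category(rec, cat):
    # `categories` is a single space-separated string; split it so that
    # "q-bio" does not spuriously match the token "q-bio.NC".
    return cat in rec["categories"].split()

def surnames(rec):
    # `authors_parsed` stores [last, first, suffix] triples.
    return [last for last, _first, _suffix in rec["authors_parsed"]]

matches = has_category(record, "q-bio.NC")   # exact-token category match
names = surnames(record)                     # ["Menesse", "Torres"]
```

Splitting the `categories` string (rather than substring matching) is the safer choice because arXiv category names nest lexically (e.g. `q-bio` vs. `q-bio.NC`).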
1811.11658
Nadja Tschentscher
Nadja Tschentscher, Anja Ruisinger, Helen Blank, Begona Diaz, Katharina von Kriegstein
Reduced structural connectivity between left auditory thalamus and the motion-sensitive planum temporale in developmental dyslexia
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Developmental dyslexia is characterized by the inability to acquire typical reading and writing skills. Dyslexia has been frequently linked to cerebral cortex alterations; however, recent evidence also points towards sensory thalamus dysfunctions: dyslexics showed reduced responses in the left auditory thalamus (medial geniculate body, MGB) during speech processing in contrast to neurotypical readers. In addition, in the visual modality, dyslexics have reduced structural connectivity between the left visual thalamus (lateral geniculate nucleus, LGN) and V5/MT, a cerebral cortex region involved in visual movement processing. Higher LGN-V5/MT connectivity in dyslexics was associated with faster rapid naming of letters and numbers (RANln), a measure that is highly correlated with reading proficiency. Here we tested two hypotheses that were directly derived from these previous findings. First, we tested the hypothesis that dyslexics have reduced structural connectivity between the left MGB and the auditory motion-sensitive part of the left planum temporale (mPT). Second, we hypothesized that the amount of left mPT-MGB connectivity correlates with dyslexics' RANln scores. Using diffusion tensor imaging based probabilistic tracking, we show that male adults with developmental dyslexia have reduced structural connectivity between the left MGB and the left mPT, confirming the first hypothesis. Stronger left mPT-MGB connectivity was not associated with faster RANln scores in dyslexics, but it was in neurotypical readers. Our findings provide the first evidence that reduced cortico-thalamic connectivity in the auditory modality is a feature of developmental dyslexia, and that it may also affect reading-related cognitive abilities in neurotypical readers.
[ { "created": "Wed, 28 Nov 2018 16:31:43 GMT", "version": "v1" } ]
2018-11-29
[ [ "Tschentscher", "Nadja", "" ], [ "Ruisinger", "Anja", "" ], [ "Blank", "Helen", "" ], [ "Diaz", "Begona", "" ], [ "von Kriegstein", "Katharina", "" ] ]
Developmental dyslexia is characterized by the inability to acquire typical reading and writing skills. Dyslexia has been frequently linked to cerebral cortex alterations; however, recent evidence also points towards sensory thalamus dysfunctions: dyslexics showed reduced responses in the left auditory thalamus (medial geniculate body, MGB) during speech processing in contrast to neurotypical readers. In addition, in the visual modality, dyslexics have reduced structural connectivity between the left visual thalamus (lateral geniculate nucleus, LGN) and V5/MT, a cerebral cortex region involved in visual movement processing. Higher LGN-V5/MT connectivity in dyslexics was associated with faster rapid naming of letters and numbers (RANln), a measure that is highly correlated with reading proficiency. Here we tested two hypotheses that were directly derived from these previous findings. First, we tested the hypothesis that dyslexics have reduced structural connectivity between the left MGB and the auditory motion-sensitive part of the left planum temporale (mPT). Second, we hypothesized that the amount of left mPT-MGB connectivity correlates with dyslexics' RANln scores. Using diffusion tensor imaging based probabilistic tracking, we show that male adults with developmental dyslexia have reduced structural connectivity between the left MGB and the left mPT, confirming the first hypothesis. Stronger left mPT-MGB connectivity was not associated with faster RANln scores in dyslexics, but it was in neurotypical readers. Our findings provide the first evidence that reduced cortico-thalamic connectivity in the auditory modality is a feature of developmental dyslexia, and that it may also affect reading-related cognitive abilities in neurotypical readers.
2401.17478
Joaquin Torres
Gustavo Menesse and Akke Mats Houben and Jordi Soriano and Joaquin J. Torres
Integrated Information Decomposition Unveils Major Structural Traits of $In$ $Silico$ and $In$ $Vitro$ Neuronal Networks
13 pages, 5 figures
Chaos: An Interdisciplinary Journal of Nonlinear Science 34(5), 053139 (2024)
10.1063/5.0201454
null
q-bio.NC cond-mat.dis-nn
http://creativecommons.org/licenses/by-nc-nd/4.0/
The properties of complex networked systems arise from the interplay between the dynamics of their elements and the underlying topology. Thus, to understand their behaviour, it is crucial to gather as much information as possible about their topological organization. However, in large systems such as neuronal networks, the reconstruction of such topology is usually carried out from the information encoded in the dynamics on the network, such as spike train time series, and by measuring the Transfer Entropy between system elements. The topological information recovered by these methods does not necessarily capture the connectivity layout, but rather the causal flow of information between elements. New theoretical frameworks, such as Integrated Information Decomposition ($\Phi$-ID), allow one to explore the modes in which information can flow between parts of a system, opening a rich landscape of interactions between network topology, dynamics and information. Here, we apply $\Phi$-ID to $in$ $silico$ and $in$ $vitro$ data to decompose the usual Transfer Entropy measure into different modes of information transfer, namely synergistic, redundant or unique. We demonstrate that the unique information transfer is the most relevant measure to uncover structural topological details from network activity data, while redundant information only introduces residual information for this application. Although the retrieved network connectivity is still functional, it captures more details of the underlying structural topology because it avoids taking into account emergent high-order interactions and information redundancy between elements, which are important for the functional behavior but mask the detection of the direct, simple interactions between elements that constitute the structural network topology.
[ { "created": "Tue, 30 Jan 2024 22:23:05 GMT", "version": "v1" }, { "created": "Mon, 17 Jun 2024 10:53:14 GMT", "version": "v2" } ]
2024-06-18
[ [ "Menesse", "Gustavo", "" ], [ "Houben", "Akke Mats", "" ], [ "Soriano", "Jordi", "" ], [ "Torres", "Joaquin J.", "" ] ]
The properties of complex networked systems arise from the interplay between the dynamics of their elements and the underlying topology. Thus, to understand their behaviour, it is crucial to gather as much information as possible about their topological organization. However, in large systems such as neuronal networks, the reconstruction of such topology is usually carried out from the information encoded in the dynamics on the network, such as spike train time series, and by measuring the Transfer Entropy between system elements. The topological information recovered by these methods does not necessarily capture the connectivity layout, but rather the causal flow of information between elements. New theoretical frameworks, such as Integrated Information Decomposition ($\Phi$-ID), allow one to explore the modes in which information can flow between parts of a system, opening a rich landscape of interactions between network topology, dynamics and information. Here, we apply $\Phi$-ID to $in$ $silico$ and $in$ $vitro$ data to decompose the usual Transfer Entropy measure into different modes of information transfer, namely synergistic, redundant or unique. We demonstrate that the unique information transfer is the most relevant measure to uncover structural topological details from network activity data, while redundant information only introduces residual information for this application. Although the retrieved network connectivity is still functional, it captures more details of the underlying structural topology because it avoids taking into account emergent high-order interactions and information redundancy between elements, which are important for the functional behavior but mask the detection of the direct, simple interactions between elements that constitute the structural network topology.
2112.02027
Grace Lindsay
Grace W. Lindsay, Josh Merel, Tom Mrsic-Flogel, Maneesh Sahani
Divergent representations of ethological visual inputs emerge from supervised, unsupervised, and reinforcement learning
23 total pages, 9 main figures, 8 Supplementary figures
null
null
null
q-bio.NC cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Artificial neural systems trained using reinforcement, supervised, and unsupervised learning all acquire internal representations of high dimensional input. To what extent these representations depend on the different learning objectives is largely unknown. Here we compare the representations learned by eight different convolutional neural networks, each with identical ResNet architectures and trained on the same family of egocentric images, but embedded within different learning systems. Specifically, the representations are trained to guide action in a compound reinforcement learning task; to predict one or a combination of three task-related targets with supervision; or using one of three different unsupervised objectives. Using representational similarity analysis, we find that the network trained with reinforcement learning differs most from the other networks. Using metrics inspired by the neuroscience literature, we find that the model trained with reinforcement learning has a sparse and high-dimensional representation wherein individual images are represented with very different patterns of neural activity. Further analysis suggests these representations may arise in order to guide long-term behavior and goal-seeking in the RL agent. Finally, we compare the representations learned by the RL agent to neural activity from mouse visual cortex and find it to perform as well or better than other models. Our results provide insights into how the properties of neural representations are influenced by objective functions and can inform transfer learning approaches.
[ { "created": "Fri, 3 Dec 2021 17:18:09 GMT", "version": "v1" }, { "created": "Tue, 8 Feb 2022 15:53:36 GMT", "version": "v2" } ]
2022-02-09
[ [ "Lindsay", "Grace W.", "" ], [ "Merel", "Josh", "" ], [ "Mrsic-Flogel", "Tom", "" ], [ "Sahani", "Maneesh", "" ] ]
Artificial neural systems trained using reinforcement, supervised, and unsupervised learning all acquire internal representations of high dimensional input. To what extent these representations depend on the different learning objectives is largely unknown. Here we compare the representations learned by eight different convolutional neural networks, each with identical ResNet architectures and trained on the same family of egocentric images, but embedded within different learning systems. Specifically, the representations are trained to guide action in a compound reinforcement learning task; to predict one or a combination of three task-related targets with supervision; or using one of three different unsupervised objectives. Using representational similarity analysis, we find that the network trained with reinforcement learning differs most from the other networks. Using metrics inspired by the neuroscience literature, we find that the model trained with reinforcement learning has a sparse and high-dimensional representation wherein individual images are represented with very different patterns of neural activity. Further analysis suggests these representations may arise in order to guide long-term behavior and goal-seeking in the RL agent. Finally, we compare the representations learned by the RL agent to neural activity from mouse visual cortex and find it to perform as well or better than other models. Our results provide insights into how the properties of neural representations are influenced by objective functions and can inform transfer learning approaches.
0903.1557
Douady Stephane
Etienne Couturier, Sylvain Courrech du Pont, Stephane Douady
Steric Constraints as a Global Regulation of Growing Leaf Shape
6 pages 4 figures, Supplementary materials (8 pages, 7 figures)
PLoS ONE 4(11): e7968
10.1371/journal.pone.0007968 (2009)
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shape is one of the important characteristics of the structures observed in living organisms. Whereas biologists have proposed models in which shape is controlled at the molecular level [1], physicists, following Turing [2] and D'Arcy Thompson [3], have developed theories in which patterns arise spontaneously [4]. Here, we propose a volume constraint that restricts the possible shapes of leaves. Focusing on palmate leaves, the central observation is that developing leaves first grow folded inside a bud, limited by the previous and subsequent leaves. We show that growing folded in this small volume globally controls leaf development. This induces a direct relationship between the way a leaf was folded and its final unfolded shape. These dependencies can be approximated as simple geometrical relationships that we confirm on both folded embryonic and unfolded mature leaves. We find that, independently of their position in the phylogenetic tree, these relationships hold for folded species but not for non-folded species. This steric constraint is a simple way to impose a global regulation on leaf growth. Such steric regulation should be more general and can be considered a new, simple means of global regulation.
[ { "created": "Mon, 9 Mar 2009 13:42:34 GMT", "version": "v1" } ]
2010-10-11
[ [ "Couturier", "Etienne", "" ], [ "Pont", "Sylvain Courrech du", "" ], [ "Douady", "Stephane", "" ] ]
Shape is one of the important characteristics of the structures observed in living organisms. Whereas biologists have proposed models in which shape is controlled at the molecular level [1], physicists, following Turing [2] and D'Arcy Thompson [3], have developed theories in which patterns arise spontaneously [4]. Here, we propose a volume constraint that restricts the possible shapes of leaves. Focusing on palmate leaves, the central observation is that developing leaves first grow folded inside a bud, limited by the previous and subsequent leaves. We show that growing folded in this small volume globally controls leaf development. This induces a direct relationship between the way a leaf was folded and its final unfolded shape. These dependencies can be approximated as simple geometrical relationships that we confirm on both folded embryonic and unfolded mature leaves. We find that, independently of their position in the phylogenetic tree, these relationships hold for folded species but not for non-folded species. This steric constraint is a simple way to impose a global regulation on leaf growth. Such steric regulation should be more general and can be considered a new, simple means of global regulation.
0704.3808
Jakob Enemark
Jakob Enemark and Kim Sneppen
On Gene Duplication Models for Evolving Regulatory Networks
14 pages, 7 figures
null
10.1088/1742-5468/2007/11/P11007
null
q-bio.PE q-bio.OT
null
Background: Duplication of genes is important for the evolution of molecular networks. Many authors have therefore considered gene duplication as a driving force in shaping the topology of molecular networks. In particular, it has been noted that growth via duplication would act as an implicit form of preferential attachment, and thereby produce the broad degree distributions observed in molecular networks. Results: We extend current models of gene duplication and rewiring by including directions and the fact that molecular networks are not a result of unidirectional growth. We introduce upstream sites and downstream shapes to quantify potential links during duplication and rewiring. We find that this in itself generates the observed scaling of transcription factors with genome sites in prokaryotes. The dynamical model can generate a scale-free degree distribution, p(k) ∝ 1/k^γ, with exponent γ=1 in the non-growing case, and with γ>1 when the network is growing. Conclusions: We find that duplication of genes followed by substantial recombination of upstream regions could generate the main features of genetic regulatory networks. Our steady-state degree distribution is, however, too broad to be consistent with data, suggesting that selective pruning acts as a main additional constraint on duplicated genes. Our analysis shows that gene duplication can only be a main cause of the observed broad degree distributions if there is also substantial recombination between the upstream regions of genes.
[ { "created": "Sat, 28 Apr 2007 16:16:09 GMT", "version": "v1" }, { "created": "Tue, 3 Jul 2007 10:06:28 GMT", "version": "v2" } ]
2009-11-13
[ [ "Enemark", "Jakob", "" ], [ "Sneppen", "Kim", "" ] ]
Background: Duplication of genes is important for the evolution of molecular networks. Many authors have therefore considered gene duplication as a driving force in shaping the topology of molecular networks. In particular, it has been noted that growth via duplication would act as an implicit form of preferential attachment, and thereby produce the broad degree distributions observed in molecular networks. Results: We extend current models of gene duplication and rewiring by including directions and the fact that molecular networks are not a result of unidirectional growth. We introduce upstream sites and downstream shapes to quantify potential links during duplication and rewiring. We find that this in itself generates the observed scaling of transcription factors with genome sites in prokaryotes. The dynamical model can generate a scale-free degree distribution, p(k) ∝ 1/k^γ, with exponent γ=1 in the non-growing case, and with γ>1 when the network is growing. Conclusions: We find that duplication of genes followed by substantial recombination of upstream regions could generate the main features of genetic regulatory networks. Our steady-state degree distribution is, however, too broad to be consistent with data, suggesting that selective pruning acts as a main additional constraint on duplicated genes. Our analysis shows that gene duplication can only be a main cause of the observed broad degree distributions if there is also substantial recombination between the upstream regions of genes.
2405.10432
Michael Baker Ph.D.
Yoshinao Katsu, Jiawen Zhang, Michael E. Baker
Lysine-Cysteine-Serine-Tryptophan Inserted into the DNA-Binding Domain of Human Mineralocorticoid Receptor Increases Transcriptional Activation by Aldosterone
21 pages, 5 figures
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Due to alternative splicing in an ancestral DNA-binding domain (DBD) of the mineralocorticoid receptor (MR), humans contain two almost identical MR transcripts with either 984 amino acids (MR-984) or 988 amino acids (MR-988), in which their DBDs differ by only four amino acids, Lys,Cys,Ser,Trp (KCSW). Human MRs also contain mutations at two sites, codons 180 and 241, in the amino terminal domain (NTD). Together, there are five distinct full-length human MR genes in GenBank. Human MR-984, which was cloned in 1987, has been extensively studied. Human MR-988, cloned in 1995, contains KCSW in its DBD. Neither this human MR-988 nor the other human MR-988 genes have been studied for their response to aldosterone and other corticosteroids. Here, we report that transcriptional activation of human MR-988 by aldosterone is increased by about 50% compared to activation of human MR-984 in HEK293 cells transfected with the TAT3 promoter, while the half-maximal response (EC50) is similar for aldosterone activation of MR-984 and MR-988. Transcriptional activation of human MR also depends on the amino acids at codons 180 and 241. Interestingly, in HEK293 cells transfected with the MMTV promoter, transcriptional activation by aldosterone of human MR-988 is similar to activation of human MR-984, indicating that the promoter has a role in the regulation of the response of human MR-988 to aldosterone. The physiological responses to aldosterone and other corticosteroids in humans with MR genes containing KCSW and with differences at codons 180 and 241 in the NTD warrant investigation.
[ { "created": "Thu, 16 May 2024 20:30:18 GMT", "version": "v1" } ]
2024-05-20
[ [ "Katsu", "Yoshinao", "" ], [ "Zhang", "Jiawen", "" ], [ "Baker", "Michael E.", "" ] ]
Due to alternative splicing in an ancestral DNA-binding domain (DBD) of the mineralocorticoid receptor (MR), humans contain two almost identical MR transcripts with either 984 amino acids (MR-984) or 988 amino acids (MR-988), in which their DBDs differ by only four amino acids, Lys,Cys,Ser,Trp (KCSW). Human MRs also contain mutations at two sites, codons 180 and 241, in the amino terminal domain (NTD). Together, there are five distinct full-length human MR genes in GenBank. Human MR-984, which was cloned in 1987, has been extensively studied. Human MR-988, cloned in 1995, contains KCSW in its DBD. Neither this human MR-988 nor the other human MR-988 genes have been studied for their response to aldosterone and other corticosteroids. Here, we report that transcriptional activation of human MR-988 by aldosterone is increased by about 50% compared to activation of human MR-984 in HEK293 cells transfected with the TAT3 promoter, while the half-maximal response (EC50) is similar for aldosterone activation of MR-984 and MR-988. Transcriptional activation of human MR also depends on the amino acids at codons 180 and 241. Interestingly, in HEK293 cells transfected with the MMTV promoter, transcriptional activation by aldosterone of human MR-988 is similar to activation of human MR-984, indicating that the promoter has a role in the regulation of the response of human MR-988 to aldosterone. The physiological responses to aldosterone and other corticosteroids in humans with MR genes containing KCSW and with differences at codons 180 and 241 in the NTD warrant investigation.
1710.03071
R. Ozgur Doruk
Ozgur Doruk, Kechen Zhang
Building a Dynamical Network Model from Neural Spiking Data: Application of Poisson Likelihood
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research has shown that the information transmitted by biological neurons is encoded in the timing of successive action potentials or in their firing rate. In addition, in-vivo operation of the neuron makes measurement difficult, and thus continuous data collection is restricted. For these reasons, the classical mean-square estimation techniques that are frequently used in neural network training are very difficult to apply. In such situations, point processes and related likelihood methods may be beneficial. In this study, we present how one can apply such methods to use the stimulus-response data obtained from a neural process in the mathematical modeling of a neuron. The study is theoretical in nature and is supported by simulations. In addition, it is compared to a similar study performed on the same network model.
[ { "created": "Mon, 9 Oct 2017 13:05:20 GMT", "version": "v1" }, { "created": "Mon, 8 Jan 2018 19:14:34 GMT", "version": "v2" } ]
2018-01-10
[ [ "Doruk", "Ozgur", "" ], [ "Zhang", "Kechen", "" ] ]
Research has shown that the information transmitted by biological neurons is encoded in the timing of successive action potentials or in their firing rate. In addition, in-vivo operation of the neuron makes measurement difficult, and thus continuous data collection is restricted. For these reasons, the classical mean-square estimation techniques that are frequently used in neural network training are very difficult to apply. In such situations, point processes and related likelihood methods may be beneficial. In this study, we present how one can apply such methods to use the stimulus-response data obtained from a neural process in the mathematical modeling of a neuron. The study is theoretical in nature and is supported by simulations. In addition, it is compared to a similar study performed on the same network model.
q-bio/0701053
Giuseppe Vitiello
Walter J. Freeman and Giuseppe Vitiello
Dissipation and spontaneous symmetry breaking in brain dynamics
Restyled, slight changes in title and abstract, updated bibliography, J. Phys. A: Math. Theor. Vol. 41 (2008) in print
null
10.1088/1751-8113/41/30/304042
null
q-bio.NC quant-ph
null
We compare the predictions of the dissipative quantum model of the brain with neurophysiological data collected from electroencephalograms recorded by high-density arrays fixed on the surfaces of primary sensory and limbic areas of trained rabbits and cats. Functional brain imaging in relation to behavior reveals the formation of coherent domains of synchronized neuronal oscillatory activity and the phase transitions predicted by the dissipative model.
[ { "created": "Wed, 31 Jan 2007 05:50:47 GMT", "version": "v1" }, { "created": "Tue, 26 Feb 2008 09:48:44 GMT", "version": "v2" } ]
2009-11-13
[ [ "Freeman", "Walter J.", "" ], [ "Vitiello", "Giuseppe", "" ] ]
We compare the predictions of the dissipative quantum model of the brain with neurophysiological data collected from electroencephalograms recorded by high-density arrays fixed on the surfaces of primary sensory and limbic areas of trained rabbits and cats. Functional brain imaging in relation to behavior reveals the formation of coherent domains of synchronized neuronal oscillatory activity and the phase transitions predicted by the dissipative model.
1206.3148
Emilio N.M. Cirillo
D. Andreucci, D. Bellaveglia, E. N. M. Cirillo, S. Marconi
Effect of intracellular diffusion on current-voltage curves in potassium channels
13 pages, 5 figures
Discrete and Continuous Dynamical System Series B, 19, 1837-1853, 2014
null
null
q-bio.CB physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the effect of intracellular ion diffusion on ionic currents permeating through the cell membrane. Ion flux across the cell membrane is mediated by special proteins forming specific channels. The structure of potassium channels has been widely studied in recent years with remarkable results: very precise measurements of the true current across a single channel are now available. Nevertheless, a complete understanding of the behavior of the channel is still lacking, though molecular dynamics and kinetic models have provided partial insights. In this paper we demonstrate, by analyzing the KcsA current-voltage curves via a suitable lattice model, that intracellular diffusion plays a crucial role in the permeation phenomenon. The interplay between the selectivity filter behavior and ion diffusion on the intracellular side allows a full explanation of the current-voltage curves.
[ { "created": "Thu, 14 Jun 2012 15:43:23 GMT", "version": "v1" } ]
2015-02-20
[ [ "Andreucci", "D.", "" ], [ "Bellaveglia", "D.", "" ], [ "Cirillo", "E. N. M.", "" ], [ "Marconi", "S.", "" ] ]
We study the effect of intracellular ion diffusion on ionic currents permeating through the cell membrane. Ion flux across the cell membrane is mediated by special proteins forming specific channels. The structure of potassium channels has been widely studied in recent years with remarkable results: very precise measurements of the true current across a single channel are now available. Nevertheless, a complete understanding of the behavior of the channel is still lacking, though molecular dynamics and kinetic models have provided partial insights. In this paper we demonstrate, by analyzing the KcsA current-voltage curves via a suitable lattice model, that intracellular diffusion plays a crucial role in the permeation phenomenon. The interplay between the selectivity filter behavior and ion diffusion on the intracellular side allows a full explanation of the current-voltage curves.
1505.03560
Sebastiano Stramaglia
Ibai Diez, Asier Erramuzpe, Inaki Escudero, Beatriz Mateos, Alberto Cabrera, Daniele Marinazzo, Ernesto J. Sanz-Arigita, Sebastiano Stramaglia and Jesus M. Cortes
Information flow between resting state networks
47 pages, 5 figures, 4 tables, 3 supplementary figures. Accepted for publication in Brain Connectivity in its current form
null
null
null
q-bio.NC physics.data-an q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamics of the resting brain self-organizes into a finite number of correlated patterns known as resting state networks (RSNs). It is well known that techniques like independent component analysis can separate the brain activity at rest into such RSNs, but the specific pattern of interaction between RSNs is not yet fully understood. To this aim, we propose here a novel method to compute the information flow (IF) between different RSNs from resting state magnetic resonance imaging. After blind deconvolution of the haemodynamic response function from all voxel signals, and under the hypothesis that RSNs define regions of interest, our method first uses principal component analysis to reduce dimensionality in each RSN and then computes the IF (estimated here in terms of Transfer Entropy) between the different RSNs by systematically increasing k (the number of principal components used in the calculation). When k = 1, this method is equivalent to computing the IF using the average of all voxel activities in each RSN. For k greater than one, our method calculates the k-multivariate IF between the different RSNs. We find that the average IF among RSNs is dimension-dependent, increasing from k = 1 (i.e., the average voxel activity) up to a maximum occurring at k = 5 and finally decaying to zero for k greater than 10. This suggests that a small number of components (close to 5) is sufficient to describe the IF pattern between RSNs. Our method, which addresses differences in IF between RSNs for any generic data, can be used for group comparison in health or disease. To illustrate this, we calculated the inter-RSN IF in a dataset of Alzheimer's Disease (AD) and found that the most significant differences between AD and controls occurred for k = 2, in addition to AD showing increased IF with respect to controls.
[ { "created": "Wed, 13 May 2015 21:45:44 GMT", "version": "v1" } ]
2015-05-15
[ [ "Diez", "Ibai", "" ], [ "Erramuzpe", "Asier", "" ], [ "Escudero", "Inaki", "" ], [ "Mateos", "Beatriz", "" ], [ "Cabrera", "Alberto", "" ], [ "Marinazzo", "Daniele", "" ], [ "Sanz-Arigita", "Ernesto J.", "" ], [ "Stramaglia", "Sebastiano", "" ], [ "Cortes", "Jesus M.", "" ] ]
The dynamics of the resting brain self-organizes into a finite number of correlated patterns known as resting state networks (RSNs). It is well known that techniques like independent component analysis can separate the brain activity at rest into such RSNs, but the specific pattern of interaction between RSNs is not yet fully understood. To this aim, we propose here a novel method to compute the information flow (IF) between different RSNs from resting state magnetic resonance imaging. After blind deconvolution of the haemodynamic response function from all voxel signals, and under the hypothesis that RSNs define regions of interest, our method first uses principal component analysis to reduce dimensionality in each RSN and then computes the IF (estimated here in terms of Transfer Entropy) between the different RSNs by systematically increasing k (the number of principal components used in the calculation). When k = 1, this method is equivalent to computing the IF using the average of all voxel activities in each RSN. For k greater than one, our method calculates the k-multivariate IF between the different RSNs. We find that the average IF among RSNs is dimension-dependent, increasing from k = 1 (i.e., the average voxel activity) up to a maximum occurring at k = 5 and finally decaying to zero for k greater than 10. This suggests that a small number of components (close to 5) is sufficient to describe the IF pattern between RSNs. Our method, which addresses differences in IF between RSNs for any generic data, can be used for group comparison in health or disease. To illustrate this, we calculated the inter-RSN IF in a dataset of Alzheimer's Disease (AD) and found that the most significant differences between AD and controls occurred for k = 2, in addition to AD showing increased IF with respect to controls.
1205.5433
Peter Jarvis
P. D. Jarvis and J. G. Sumner
Adventures in Invariant Theory
12 pp, includes supplementary discussion of examples
ANZIAM J. 56 (2014) 105-115
10.1017/S1446181114000327
null
q-bio.QM math.GR math.ST q-bio.PE quant-ph stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide an introduction to enumerating and constructing invariants of group representations via character methods. The problem is contextualised via two case studies arising from our recent work: entanglement measures, for characterising the structure of state spaces for composite quantum systems; and Markov invariants, a robust alternative to parameter-estimation intensive methods of statistical inference in molecular phylogenetics.
[ { "created": "Wed, 23 May 2012 05:04:58 GMT", "version": "v1" }, { "created": "Wed, 24 Jul 2013 03:32:05 GMT", "version": "v2" } ]
2019-02-20
[ [ "Jarvis", "P. D.", "" ], [ "Sumner", "J. G.", "" ] ]
We provide an introduction to enumerating and constructing invariants of group representations via character methods. The problem is contextualised via two case studies arising from our recent work: entanglement measures, for characterising the structure of state spaces for composite quantum systems; and Markov invariants, a robust alternative to parameter-estimation intensive methods of statistical inference in molecular phylogenetics.
2106.15365
Catharina Elisabeth Graafland
Catharina Elisabeth Graafland and Jos\'e Manuel Guti\'errez
Learning complex dependency structure of gene regulatory networks from high dimensional micro-array data with Gaussian Bayesian networks
20 pages, 5 figures
Sci Rep 12, 18704 (2022)
10.1038/s41598-022-21957-z
null
q-bio.MN cs.LG q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Gene expression datasets consist of thousand of genes with relatively small samplesizes (i.e. are large-$p$-small-$n$). Moreover, dependencies of various orders co-exist in the datasets. In the Undirected probabilistic Graphical Model (UGM) framework the Glasso algorithm has been proposed to deal with high dimensional micro-array datasets forcing sparsity. Also, modifications of the default Glasso algorithm are developed to overcome the problem of complex interaction structure. In this work we advocate the use of a simple score-based Hill Climbing algorithm (HC) that learns Gaussian Bayesian Networks (BNs) leaning on Directed Acyclic Graphs (DAGs). We compare HC with Glasso and its modifications in the UGM framework on their capability to reconstruct GRNs from micro-array data belonging to the Escherichia Coli genome. We benefit from the analytical properties of the Joint Probability Density (JPD) function on which both directed and undirected PGMs build to convert DAGs to UGMs. We conclude that dependencies in complex data are learned best by the HC algorithm, presenting them most accurately and efficiently, simultaneously modelling strong local and weaker but significant global connections coexisting in the gene expression dataset. The HC algorithm adapts intrinsically to the complex dependency structure of the dataset, without forcing a specific structure in advance. On the contrary, Glasso and modifications model unnecessary dependencies at the expense of the probabilistic information in the network and of a structural bias in the JPD function that can only be relieved including many parameters.
[ { "created": "Mon, 28 Jun 2021 15:04:35 GMT", "version": "v1" }, { "created": "Mon, 14 Feb 2022 17:34:12 GMT", "version": "v2" } ]
2022-12-21
[ [ "Graafland", "Catharina Elisabeth", "" ], [ "Gutiérrez", "José Manuel", "" ] ]
Gene expression datasets consist of thousands of genes with relatively small sample sizes (i.e. are large-$p$-small-$n$). Moreover, dependencies of various orders co-exist in the datasets. In the Undirected probabilistic Graphical Model (UGM) framework the Glasso algorithm has been proposed to deal with high dimensional micro-array datasets forcing sparsity. Also, modifications of the default Glasso algorithm have been developed to overcome the problem of complex interaction structure. In this work we advocate the use of a simple score-based Hill Climbing algorithm (HC) that learns Gaussian Bayesian Networks (BNs) leaning on Directed Acyclic Graphs (DAGs). We compare HC with Glasso and its modifications in the UGM framework on their capability to reconstruct gene regulatory networks (GRNs) from micro-array data belonging to the Escherichia coli genome. We benefit from the analytical properties of the Joint Probability Density (JPD) function on which both directed and undirected PGMs build to convert DAGs to UGMs. We conclude that dependencies in complex data are learned best by the HC algorithm, presenting them most accurately and efficiently, simultaneously modelling strong local and weaker but significant global connections coexisting in the gene expression dataset. The HC algorithm adapts intrinsically to the complex dependency structure of the dataset, without forcing a specific structure in advance. On the contrary, Glasso and its modifications model unnecessary dependencies at the expense of the probabilistic information in the network and of a structural bias in the JPD function that can only be relieved by including many parameters.
2109.01176
Artem Novozhilov
Alexander S. Bratus, Anastasiia V. Korushkina, Artem S. Novozhilov
Food webs and the principle of evolutionary adaptation
14 pages, 7 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
A principle of evolutionary adaptation is applied to the Lotka--Volterra models, in particular to the food webs. We present a relatively simple computational algorithm of optimization with respect to a given criterion. This algorithm boils down to a sequence of easy to solve linear programming problems. As a criterion for the optimization we use the total weighted population size of the given community and an ecological fitness, which is an analogue of the potential energy in physics. We show by computational experiments that it is almost always possible to substantially increase the total weighed population size for an especially simple food web -- food chain; we also show that food chains are evolutionary unstable under the given optimization criteria and, if allowed, evolve into more complicated structures of food webs.
[ { "created": "Thu, 2 Sep 2021 19:07:50 GMT", "version": "v1" } ]
2021-09-06
[ [ "Bratus", "Alexander S.", "" ], [ "Korushkina", "Anastasiia V.", "" ], [ "Novozhilov", "Artem S.", "" ] ]
A principle of evolutionary adaptation is applied to the Lotka--Volterra models, in particular to food webs. We present a relatively simple computational algorithm of optimization with respect to a given criterion. This algorithm boils down to a sequence of easy-to-solve linear programming problems. As criteria for the optimization we use the total weighted population size of the given community and an ecological fitness, which is an analogue of the potential energy in physics. We show by computational experiments that it is almost always possible to substantially increase the total weighted population size for an especially simple food web -- a food chain; we also show that food chains are evolutionarily unstable under the given optimization criteria and, if allowed, evolve into more complicated food web structures.
1411.2761
Oleg Usatenko
S.S. Melnik and O.V. Usatenko
Entropy and long-range correlations in DNA sequences
8 pages, 5 figures
Comput.Biol.Chem.53, 26 (2014)
10.1016/j.compbiolchem.2014.08.006
null
q-bio.OT cond-mat.soft cond-mat.stat-mech physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze the structure of DNA molecules of different organisms by using the additive Markov chain approach. Transforming nucleotide sequences into binary strings, we perform statistical analysis of the corresponding "texts". We develop the theory of N-step additive binary stationary ergodic Markov chains and analyze their differential entropy. Supposing that the correlations are weak we express the conditional probability function of the chain by means of the pair correlation function and represent the entropy as a functional of the pair correlator. Since the model uses two point correlators instead of probability of block occurring, it makes possible to calculate the entropy of subsequences at much longer distances than with the use of the standard methods. We utilize the obtained analytical result for numerical evaluation of the entropy of coarse-grained DNA texts. We believe that the entropy study can be used for biological classification of living species.
[ { "created": "Tue, 11 Nov 2014 10:53:55 GMT", "version": "v1" } ]
2014-11-14
[ [ "Melnik", "S. S.", "" ], [ "Usatenko", "O. V.", "" ] ]
We analyze the structure of DNA molecules of different organisms by using the additive Markov chain approach. Transforming nucleotide sequences into binary strings, we perform statistical analysis of the corresponding "texts". We develop the theory of N-step additive binary stationary ergodic Markov chains and analyze their differential entropy. Supposing that the correlations are weak, we express the conditional probability function of the chain by means of the pair correlation function and represent the entropy as a functional of the pair correlator. Since the model uses two-point correlators instead of block occurrence probabilities, it makes it possible to calculate the entropy of subsequences at much longer distances than the standard methods allow. We utilize the obtained analytical result for numerical evaluation of the entropy of coarse-grained DNA texts. We believe that this entropy study can be used for biological classification of living species.
1606.03813
Adam Marblestone
Adam Marblestone, Greg Wayne, Konrad Kording
Towards an integration of deep learning and neuroscience
null
null
10.3389/fncom.2016.00094
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) these cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
[ { "created": "Mon, 13 Jun 2016 05:08:39 GMT", "version": "v1" } ]
2020-02-04
[ [ "Marblestone", "Adam", "" ], [ "Wayne", "Greg", "" ], [ "Kording", "Konrad", "" ] ]
Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) these cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
1208.5954
Adel Dayarian
Adel Dayarian and Boris I Shraiman
How to infer relative fitness from a sample of genomic sequences
null
null
null
NSF-KITP-12-148
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mounting evidence suggests that natural populations can harbor extensive fitness diversity with numerous genomic loci under selection. It is also known that genealogical trees for populations under selection are quantifiably different from those expected under neutral evolution and described statistically by Kingman's coalescent. While differences in the statistical structure of genealogies have long been used as a test for the presence of selection, the full extent of the information that they contain has not been exploited. Here we shall demonstrate that the shape of the reconstructed genealogical tree for a moderately large number of random genomic samples taken from a fitness diverse, but otherwise unstructured asexual population can be used to predict the relative fitness of individuals within the sample. To achieve this we define a heuristic algorithm, which we test in silico using simulations of a Wright-Fisher model for a realistic range of mutation rates and selection strength. Our inferred fitness ranking is based on a linear discriminator which identifies rapidly coalescing lineages in the reconstructed tree. Inferred fitness ranking correlates strongly with actual fitness, with a genome in the top 10% ranked being in the top 20% fittest with false discovery rate of 0.1-0.3 depending on the mutation/selection parameters. The ranking also enables to predict the genotypes that future populations inherit from the present one. While the inference accuracy increases monotonically with sample size, samples of 200 nearly saturate the performance. We propose that our approach can be used for inferring relative fitness of genomes obtained in single-cell sequencing of tumors and in monitoring viral outbreaks.
[ { "created": "Wed, 29 Aug 2012 16:07:26 GMT", "version": "v1" }, { "created": "Thu, 3 Jan 2013 00:14:40 GMT", "version": "v2" } ]
2013-01-04
[ [ "Dayarian", "Adel", "" ], [ "Shraiman", "Boris I", "" ] ]
Mounting evidence suggests that natural populations can harbor extensive fitness diversity with numerous genomic loci under selection. It is also known that genealogical trees for populations under selection are quantifiably different from those expected under neutral evolution and described statistically by Kingman's coalescent. While differences in the statistical structure of genealogies have long been used as a test for the presence of selection, the full extent of the information that they contain has not been exploited. Here we shall demonstrate that the shape of the reconstructed genealogical tree for a moderately large number of random genomic samples taken from a fitness-diverse but otherwise unstructured asexual population can be used to predict the relative fitness of individuals within the sample. To achieve this we define a heuristic algorithm, which we test in silico using simulations of a Wright-Fisher model for a realistic range of mutation rates and selection strength. Our inferred fitness ranking is based on a linear discriminator which identifies rapidly coalescing lineages in the reconstructed tree. Inferred fitness ranking correlates strongly with actual fitness, with a genome in the top 10% ranked being in the top 20% fittest with a false discovery rate of 0.1-0.3 depending on the mutation/selection parameters. The ranking also makes it possible to predict the genotypes that future populations inherit from the present one. While the inference accuracy increases monotonically with sample size, samples of 200 nearly saturate the performance. We propose that our approach can be used for inferring relative fitness of genomes obtained in single-cell sequencing of tumors and in monitoring viral outbreaks.
1907.05184
Robert West Mr
Robert West and Mauro Mobilia
Fixation properties of rock-paper-scissors games in fluctuating populations
31 pages, 15 figures: Main text (18 pages, 9 figures) followed by Supplementary Material (13 pages, 6 figures). Supplementary Information and resources available at https://doi.org/10.6084/m9.figshare.8858273.v1
J. Theor. Biol. 491, 110135 (2020)
10.1016/j.jtbi.2019.110135
null
q-bio.PE cond-mat.stat-mech nlin.AO physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rock-paper-scissors games metaphorically model cyclic dominance in ecology and microbiology. In a static environment, these models are characterized by fixation probabilities obeying two different "laws" in large and small well-mixed populations. Here, we investigate the evolution of these three-species models subject to a randomly switching carrying capacity modeling the endless change between states of resources scarcity and abundance. Focusing mainly on the zero-sum rock-paper-scissors game, equivalent to the cyclic Lotka-Volterra model, we study how the ${\it coupling}$ of demographic and environmental noise influences the fixation properties. More specifically, we investigate which species is the most likely to prevail in a population of fluctuating size and how the outcome depends on the environmental variability. We show that demographic noise coupled with environmental randomness "levels the field" of cyclic competition by balancing the effect of selection. In particular, we show that fast switching effectively reduces the selection intensity proportionally to the variance of the carrying capacity. We determine the conditions under which new fixation scenarios arise, where the most likely species to prevail changes with the rate of switching and the variance of the carrying capacity. Random switching has a limited effect on the mean fixation time that scales linearly with the average population size. Hence, environmental randomness makes the cyclic competition more egalitarian, but does not prolong the species coexistence. We also show how the fixation probabilities of close-to-zero-sum rock-paper-scissors games can be obtained from those of the zero-sum model by rescaling the selection intensity.
[ { "created": "Thu, 11 Jul 2019 13:29:15 GMT", "version": "v1" }, { "created": "Fri, 12 Jul 2019 10:59:05 GMT", "version": "v2" }, { "created": "Mon, 24 Feb 2020 20:39:54 GMT", "version": "v3" } ]
2020-02-26
[ [ "West", "Robert", "" ], [ "Mobilia", "Mauro", "" ] ]
Rock-paper-scissors games metaphorically model cyclic dominance in ecology and microbiology. In a static environment, these models are characterized by fixation probabilities obeying two different "laws" in large and small well-mixed populations. Here, we investigate the evolution of these three-species models subject to a randomly switching carrying capacity modeling the endless change between states of resource scarcity and abundance. Focusing mainly on the zero-sum rock-paper-scissors game, equivalent to the cyclic Lotka-Volterra model, we study how the ${\it coupling}$ of demographic and environmental noise influences the fixation properties. More specifically, we investigate which species is the most likely to prevail in a population of fluctuating size and how the outcome depends on the environmental variability. We show that demographic noise coupled with environmental randomness "levels the field" of cyclic competition by balancing the effect of selection. In particular, we show that fast switching effectively reduces the selection intensity proportionally to the variance of the carrying capacity. We determine the conditions under which new fixation scenarios arise, where the most likely species to prevail changes with the rate of switching and the variance of the carrying capacity. Random switching has a limited effect on the mean fixation time that scales linearly with the average population size. Hence, environmental randomness makes the cyclic competition more egalitarian, but does not prolong the species coexistence. We also show how the fixation probabilities of close-to-zero-sum rock-paper-scissors games can be obtained from those of the zero-sum model by rescaling the selection intensity.
1707.05019
Pietro Faccioli
Fang Wang, Simone Orioli, Alan Ianeselli, Giovanni Spagnolli, Silvio a Beccara, Anne Gershenson, Pietro Faccioli and Patrick L. Wintrode
All-atom simulations reveal how single point mutations promote serpin misfolding
Final version. Supplementary Information included
Biophys. J. 114, 2083 (2018)
10.1016/j.bpj.2018.03.027
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein misfolding is implicated in many diseases, including the serpinopathies. For the canonical inhibitory serpin {\alpha}1-antitrypsin (A1AT), mutations can result in protein deficiencies leading to lung disease, and misfolded mutants can accumulate in hepatocytes leading to liver disease. Using all-atom simulations based on the recently developed Bias Functional algorithm we elucidate how wild-type A1AT folds and how the disease-associated S (Glu264Val) and Z (Glu342Lys) mutations lead to misfolding. The deleterious Z mutation disrupts folding at an early stage, while the relatively benign S mutant shows late stage minor misfolding. A number of suppressor mutations ameliorate the effects of the Z mutation and simulations on these mutants help to elucidate the relative roles of steric clashes and electrostatic interactions in Z misfolding. These results demonstrate a striking correlation between atomistic events and disease severity and shine light on the mechanisms driving chains away from their correct folding routes.
[ { "created": "Mon, 17 Jul 2017 07:07:36 GMT", "version": "v1" }, { "created": "Mon, 18 Jun 2018 10:03:02 GMT", "version": "v2" } ]
2018-06-19
[ [ "Wang", "Fang", "" ], [ "Orioli", "Simone", "" ], [ "Ianeselli", "Alan", "" ], [ "Spagnolli", "Giovanni", "" ], [ "Beccara", "Silvio a", "" ], [ "Gershenson", "Anne", "" ], [ "Faccioli", "Pietro", "" ], [ "Wintrode", "Patrick L.", "" ] ]
Protein misfolding is implicated in many diseases, including the serpinopathies. For the canonical inhibitory serpin {\alpha}1-antitrypsin (A1AT), mutations can result in protein deficiencies leading to lung disease, and misfolded mutants can accumulate in hepatocytes leading to liver disease. Using all-atom simulations based on the recently developed Bias Functional algorithm we elucidate how wild-type A1AT folds and how the disease-associated S (Glu264Val) and Z (Glu342Lys) mutations lead to misfolding. The deleterious Z mutation disrupts folding at an early stage, while the relatively benign S mutant shows late-stage minor misfolding. A number of suppressor mutations ameliorate the effects of the Z mutation, and simulations on these mutants help to elucidate the relative roles of steric clashes and electrostatic interactions in Z misfolding. These results demonstrate a striking correlation between atomistic events and disease severity and shed light on the mechanisms driving chains away from their correct folding routes.
2202.10919
Nam Nguyen
Nam Nguyen and Kwang-Cheng Chen
Translational Quantum Machine Intelligence for Modeling Tumor Dynamics in Oncology
Withdrawn because of an error in the RY rotation formula
null
null
null
q-bio.OT quant-ph
http://creativecommons.org/licenses/by/4.0/
Quantifying the dynamics of tumor burden reveals useful information about cancer evolution concerning treatment effects and drug resistance, which play a crucial role in advancing model-informed drug developments (MIDD) towards personalized medicine and precision oncology. The emergence of Quantum Machine Intelligence offers unparalleled insights into tumor dynamics via a quantum mechanics perspective. This paper introduces a novel hybrid quantum-classical neural architecture named $\eta-$Net that enables quantifying quantum dynamics of tumor burden concerning treatment effects. We evaluate our proposed neural solution on two major use cases, including cohort-specific and patient-specific modeling. In silico numerical results show a high capacity and expressivity of $\eta-$Net to the quantified biological problem. Moreover, the close connection to representation learning - the foundation for successes of modern AI, enables efficient transferability of empirical knowledge from relevant cohorts to targeted patients. Finally, we leverage Bayesian optimization to quantify the epistemic uncertainty of model predictions, paving the way for $\eta-$Net towards reliable AI in decision-making for clinical usages.
[ { "created": "Mon, 21 Feb 2022 08:46:58 GMT", "version": "v1" }, { "created": "Fri, 1 Jul 2022 13:20:26 GMT", "version": "v2" }, { "created": "Sat, 7 Jan 2023 13:43:12 GMT", "version": "v3" } ]
2023-01-10
[ [ "Nguyen", "Nam", "" ], [ "Chen", "Kwang-Cheng", "" ] ]
Quantifying the dynamics of tumor burden reveals useful information about cancer evolution concerning treatment effects and drug resistance, which plays a crucial role in advancing model-informed drug development (MIDD) towards personalized medicine and precision oncology. The emergence of Quantum Machine Intelligence offers unparalleled insights into tumor dynamics via a quantum mechanics perspective. This paper introduces a novel hybrid quantum-classical neural architecture named $\eta$-Net that enables quantifying the quantum dynamics of tumor burden concerning treatment effects. We evaluate our proposed neural solution on two major use cases, including cohort-specific and patient-specific modeling. In silico numerical results show a high capacity and expressivity of $\eta$-Net on the studied biological problem. Moreover, the close connection to representation learning, the foundation for the successes of modern AI, enables efficient transferability of empirical knowledge from relevant cohorts to targeted patients. Finally, we leverage Bayesian optimization to quantify the epistemic uncertainty of model predictions, paving the way for $\eta$-Net towards reliable AI in decision-making for clinical usage.
q-bio/0601001
William Bialek
William Bialek and Sima Setayeshgar
Cooperativity, sensitivity and noise in biochemical signaling
null
null
null
null
q-bio.MN q-bio.SC
null
Cooperative interactions among the binding of multiple signaling molecules is a common mechanism for enhancing the sensitivity of biological signaling systems. It is widely assumed that this increase in sensitivity of the mean response implies the ability to detect smaller signals. We show that, quite generally, there is a component of the noise in such systems that can be traced to the random arrival of the signaling molecules at their receptor sites, and this diffusive noise is not reduced by cooperativity. Cooperativity makes it easier for real systems to reach this physical limit, but cannot reduce the limit itself.
[ { "created": "Sat, 31 Dec 2005 20:38:39 GMT", "version": "v1" } ]
2007-05-23
[ [ "Bialek", "William", "" ], [ "Setayeshgar", "Sima", "" ] ]
Cooperative interactions among the binding of multiple signaling molecules is a common mechanism for enhancing the sensitivity of biological signaling systems. It is widely assumed that this increase in sensitivity of the mean response implies the ability to detect smaller signals. We show that, quite generally, there is a component of the noise in such systems that can be traced to the random arrival of the signaling molecules at their receptor sites, and this diffusive noise is not reduced by cooperativity. Cooperativity makes it easier for real systems to reach this physical limit, but cannot reduce the limit itself.
0906.3391
David Hochberg
David Hochberg (Centro de Astrobiologia (CSIC-INTA), Madrid, Spain)
Mirror symmetry breaking and restoration: the role of noise and chiral bias
4 pages, 3 figures, to appear in the Physical Review Letters
Physical Review Letters 102, 248101 (2009)
10.1103/PhysRevLett.102.248101
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The nonequilibrium effective potential is computed for the Frank model of spontaneous mirror symmetry breaking (SMSB) in chemistry in which external noise is introduced to account for random environmental effects. When these fluctuations exceed a critical magnitude, mirror symmetry is restored. The competition between ambient noise and the chiral bias due to physical fields and polarized radiation can be explored with this potential.
[ { "created": "Thu, 18 Jun 2009 09:42:08 GMT", "version": "v1" } ]
2009-06-29
[ [ "Hochberg", "David", "", "Centro de Astrobiologia" ] ]
The nonequilibrium effective potential is computed for the Frank model of spontaneous mirror symmetry breaking (SMSB) in chemistry in which external noise is introduced to account for random environmental effects. When these fluctuations exceed a critical magnitude, mirror symmetry is restored. The competition between ambient noise and the chiral bias due to physical fields and polarized radiation can be explored with this potential.
1705.04739
Eilidh Noyes
Eilidh Noyes and Alice J. O'Toole
Face recognition assessments used in the study of super-recognisers
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The purpose of this paper is to provide a brief overview of nine assessments of face processing skills. These tests have been used commonly in recent years to gauge the skills of perspective 'super-recognisers' with respect to the general population. In the literature, a person has been considered to be a 'super-recogniser' based on superior scores on one or more of these tests (cf., Noyes, Phillips & O'Toole, in press). The paper provides a supplement to a recent review of super-recognisers aimed at readers who are unfamiliar with these tests. That review provides a complete summary of the super-recongiser literature to date (2017). It also provides a theory and a set of action points directed at answering the question "What is a super-recogniser?"
[ { "created": "Fri, 12 May 2017 20:02:11 GMT", "version": "v1" } ]
2017-05-16
[ [ "Noyes", "Eilidh", "" ], [ "O'Toole", "Alice J.", "" ] ]
The purpose of this paper is to provide a brief overview of nine assessments of face processing skills. These tests have been used commonly in recent years to gauge the skills of prospective 'super-recognisers' with respect to the general population. In the literature, a person has been considered to be a 'super-recogniser' based on superior scores on one or more of these tests (cf., Noyes, Phillips & O'Toole, in press). The paper provides a supplement to a recent review of super-recognisers aimed at readers who are unfamiliar with these tests. That review provides a complete summary of the super-recogniser literature to date (2017). It also provides a theory and a set of action points directed at answering the question "What is a super-recogniser?"
2103.00335
Alan Rogers
Alan R. Rogers and Stephen P. Wooding
Expectation of the Site Frequency Spectrum
3 pages; 1 figure; no plans to publish elsewhere
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The site frequency spectrum describes variation among a set of n DNA sequences. Its i'th entry (i=1,2,...,n-1) is the number of nucleotide sites at which the mutant allele is present in i copies. Under selective neutrality, random mating, and constant population size, the expected value of the spectrum is well known but somewhat puzzling. Each additional sequence added to a sample adds an entry to the end of the expected spectrum but does not affect existing entries. This note reviews the reasons for this behavior.
[ { "created": "Sat, 27 Feb 2021 21:44:31 GMT", "version": "v1" } ]
2021-03-02
[ [ "Rogers", "Alan R.", "" ], [ "Wooding", "Stephen P.", "" ] ]
The site frequency spectrum describes variation among a set of n DNA sequences. Its i'th entry (i=1,2,...,n-1) is the number of nucleotide sites at which the mutant allele is present in i copies. Under selective neutrality, random mating, and constant population size, the expected value of the spectrum is well known but somewhat puzzling. Each additional sequence added to a sample adds an entry to the end of the expected spectrum but does not affect existing entries. This note reviews the reasons for this behavior.
1709.02008
Zachary Kilpatrick PhD
Zachary P Kilpatrick and Daniel B Poll
Neural field model of memory-guided search
17 pages, 10 figures
Phys. Rev. E 96, 062411 (2017)
10.1103/PhysRevE.96.062411
null
q-bio.NC nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing the overlap of the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.
[ { "created": "Wed, 6 Sep 2017 21:34:57 GMT", "version": "v1" } ]
2017-12-27
[ [ "Kilpatrick", "Zachary P", "" ], [ "Poll", "Daniel B", "" ] ]
Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing the overlap of the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.
1511.05589
Majid Zerafat Angiz
Mohammad Gholizadeh, Majid Zerafat Angiz L., Seyed Mahmoud Davoodi, Rajab Khalilpour, Anita Talib, Khairun Yahya, Sahubar Ali Nadhar Khan
An alternative aggregate preference ranking algorithm to assess environmental effects on macrobenthic abundance in coastal water
null
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Coastal marine waters are ranked among the most important aquatic ecosystems on earth in terms of ecological and economic significance. Since the Industrial Revolution, human activities have drastically changed coastal marine ecosystems. The development of rules and regulations to protect these ecosystems against human activities requires the availability of an environmental assessment standard. This necessitates the identification of the key parameters that reflect the condition of the coastal water ecosystem. Macrobenthic assemblages are recognized to respond rapidly to changes in the quality of water or habitat. Therefore, it would be useful to study the population of macrobenthos and assess the factors influencing the growth of this species. This study takes a multidisciplinary approach comprising two perspectives, ecological and mathematical. In the ecological section, the effect of the water quality parameters (e.g. pH, temperature, dissolved oxygen and salinity) and the sediment characteristics on macrobenthic abundance is studied. A total of 432 samples were collected and analyzed from four tourist coastal locations (at various distances from the coast) of Penang National Park to investigate the spatial change of the macrobenthic assemblage. From a mathematical perspective, this paper pursues a new algorithm based on performance evaluation methods. For this purpose, Data Envelopment Analysis (DEA) is first employed to evaluate a group of Decision Making Units (DMUs). The inputs of these DMUs are then considered as alternatives (or candidates), and using a modified DEA model categorized as an aggregate preference ranking method, the influence of the inputs on the efficiency of the DMUs is investigated.
[ { "created": "Tue, 25 Aug 2015 01:23:43 GMT", "version": "v1" } ]
2015-11-19
[ [ "Gholizadeh", "Mohammad", "" ], [ "L.", "Majid Zerafat Angiz", "" ], [ "Davoodi", "Seyed Mahmoud", "" ], [ "Khalilpour", "Rajab", "" ], [ "Talib", "Anita", "" ], [ "Yahya", "Khairun", "" ], [ "Khan", "Sahubar Ali Nadhar", "" ] ]
Coastal marine waters are ranked among the most important aquatic ecosystems on earth in terms of ecological and economic significance. Since the Industrial Revolution, human activities have drastically changed coastal marine ecosystems. The development of rules and regulations to protect these ecosystems against human activities requires the availability of an environmental assessment standard. This necessitates the identification of the key parameters that reflect the condition of the coastal water ecosystem. Macrobenthic assemblages are recognized to respond rapidly to changes in the quality of water or habitat. Therefore, it would be useful to study the population of macrobenthos and assess the factors influencing the growth of this species. This study takes a multidisciplinary approach comprising two perspectives, ecological and mathematical. In the ecological section, the effect of the water quality parameters (e.g. pH, temperature, dissolved oxygen and salinity) and the sediment characteristics on macrobenthic abundance is studied. A total of 432 samples were collected and analyzed from four tourist coastal locations (at various distances from the coast) of Penang National Park to investigate the spatial change of the macrobenthic assemblage. From a mathematical perspective, this paper pursues a new algorithm based on performance evaluation methods. For this purpose, Data Envelopment Analysis (DEA) is first employed to evaluate a group of Decision Making Units (DMUs). The inputs of these DMUs are then considered as alternatives (or candidates), and using a modified DEA model categorized as an aggregate preference ranking method, the influence of the inputs on the efficiency of the DMUs is investigated.
2310.00062
Brian Camley
Aparajita Kashyap, Wei Wang, and Brian A. Camley
Tradeoffs in concentration sensing in dynamic environments
null
null
null
null
q-bio.CB physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When cells measure concentrations of chemical signals, they may average multiple measurements over time in order to reduce noise in their measurements. However, when cells are in an environment that changes over time, past measurements may not reflect current conditions - creating a new source of error that trades off against noise in chemical sensing. What statistics in the cell's environment control this tradeoff? What properties of the environment make it variable enough that this tradeoff is relevant? We model a single eukaryotic cell sensing a chemical secreted from bacteria (e.g. folic acid). In this case, the environment changes because the bacteria swim - leading to changes in the true concentration at the cell. We develop analytical calculations and stochastic simulations of sensing in this environment. We find that cells can have a huge variety of optimal sensing strategies, ranging from not time averaging at all, to averaging over an arbitrarily long time, or having a finite optimal averaging time. The factors that primarily control the ideal averaging are the ratio of sensing noise to environmental variation, and the ratio of timescales of sensing to the timescale of environmental variation. Sensing noise depends on the receptor-ligand kinetics, while the environmental variation depends on the density of bacteria and the degradation and diffusion properties of the secreted chemoattractant. Our results suggest that fluctuating environmental concentrations may be a relevant source of noise even in a relatively static environment.
[ { "created": "Fri, 29 Sep 2023 18:09:19 GMT", "version": "v1" } ]
2023-10-03
[ [ "Kashyap", "Aparajita", "" ], [ "Wang", "Wei", "" ], [ "Camley", "Brian A.", "" ] ]
When cells measure concentrations of chemical signals, they may average multiple measurements over time in order to reduce noise in their measurements. However, when cells are in an environment that changes over time, past measurements may not reflect current conditions - creating a new source of error that trades off against noise in chemical sensing. What statistics in the cell's environment control this tradeoff? What properties of the environment make it variable enough that this tradeoff is relevant? We model a single eukaryotic cell sensing a chemical secreted from bacteria (e.g. folic acid). In this case, the environment changes because the bacteria swim - leading to changes in the true concentration at the cell. We develop analytical calculations and stochastic simulations of sensing in this environment. We find that cells can have a huge variety of optimal sensing strategies, ranging from not time averaging at all, to averaging over an arbitrarily long time, or having a finite optimal averaging time. The factors that primarily control the ideal averaging are the ratio of sensing noise to environmental variation, and the ratio of timescales of sensing to the timescale of environmental variation. Sensing noise depends on the receptor-ligand kinetics, while the environmental variation depends on the density of bacteria and the degradation and diffusion properties of the secreted chemoattractant. Our results suggest that fluctuating environmental concentrations may be a relevant source of noise even in a relatively static environment.
2306.14667
Quang Dang Nguyen
Quang Dang Nguyen, Sheryl L. Chang, Christina M. Jamerlan, Mikhail Prokopenko
Measuring unequal distribution of pandemic severity across census years, variants of concern and interventions
43 pages, 25 figures, source code: https://zenodo.org/record/5778218
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diverse and complex intervention policies deployed over the last few years have shown varied effectiveness in controlling the COVID-19 pandemic. However, a systematic analysis and modelling of the combined effects of different viral lineages and complex intervention policies remains a challenge. Using large-scale agent-based modelling and a high-resolution computational simulation matching census-based demographics of Australia, we carried out a systematic comparative analysis of several COVID-19 pandemic scenarios. The scenarios covered the two most recent Australian census years (2016 and 2021), three variants of concern (ancestral, Delta and Omicron), and five representative intervention policies. In addition, we introduced pandemic Lorenz curves measuring an unequal distribution of the pandemic severity across local areas. We quantified nonlinear effects of population heterogeneity on the pandemic severity, highlighting that (i) population growth amplifies pandemic peaks, (ii) changes in population size amplify the peak incidence more than changes in density, and (iii) pandemic severity is distributed unequally across local areas. We also examined and delineated the effects of urbanisation on the incidence bimodality, distinguishing between urban and regional pandemic waves. Finally, we quantified and examined the impact of school closures, complemented by partial interventions, and identified the conditions under which inclusion of school closures may decisively control the transmission. Our results suggest that (a) public health response to long-lasting pandemics must be frequently reviewed and adapted to demographic changes, (b) in order to control recurrent waves, mass-vaccination rollouts need to be complemented by partial NPIs, and (c) healthcare and vaccination resources need to be prioritised towards the localities and regions with high population growth and/or high density.
[ { "created": "Mon, 26 Jun 2023 13:01:21 GMT", "version": "v1" } ]
2023-06-27
[ [ "Nguyen", "Quang Dang", "" ], [ "Chang", "Sheryl L.", "" ], [ "Jamerlan", "Christina M.", "" ], [ "Prokopenko", "Mikhail", "" ] ]
Diverse and complex intervention policies deployed over the last few years have shown varied effectiveness in controlling the COVID-19 pandemic. However, a systematic analysis and modelling of the combined effects of different viral lineages and complex intervention policies remains a challenge. Using large-scale agent-based modelling and a high-resolution computational simulation matching census-based demographics of Australia, we carried out a systematic comparative analysis of several COVID-19 pandemic scenarios. The scenarios covered the two most recent Australian census years (2016 and 2021), three variants of concern (ancestral, Delta and Omicron), and five representative intervention policies. In addition, we introduced pandemic Lorenz curves measuring an unequal distribution of the pandemic severity across local areas. We quantified nonlinear effects of population heterogeneity on the pandemic severity, highlighting that (i) population growth amplifies pandemic peaks, (ii) changes in population size amplify the peak incidence more than changes in density, and (iii) pandemic severity is distributed unequally across local areas. We also examined and delineated the effects of urbanisation on the incidence bimodality, distinguishing between urban and regional pandemic waves. Finally, we quantified and examined the impact of school closures, complemented by partial interventions, and identified the conditions under which inclusion of school closures may decisively control the transmission. Our results suggest that (a) public health response to long-lasting pandemics must be frequently reviewed and adapted to demographic changes, (b) in order to control recurrent waves, mass-vaccination rollouts need to be complemented by partial NPIs, and (c) healthcare and vaccination resources need to be prioritised towards the localities and regions with high population growth and/or high density.
1602.06466
Roland Langrock
Vianey Leos-Barajas, Theoni Photopoulou, Roland Langrock, Toby A. Patterson, Yuuki Watanabe, Megan Murgatroyd, Yannis P. Papastamatiou
Analysis of animal accelerometer data using hidden Markov models
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Use of accelerometers is now widespread within animal biotelemetry as they provide a means of measuring an animal's activity in a meaningful and quantitative way where direct observation is not possible. In sequential acceleration data there is a natural dependence between observations of movement or behaviour, a fact that has been largely ignored in most analyses. Analyses of acceleration data where serial dependence has been explicitly modelled have largely relied on hidden Markov models (HMMs). Depending on the aim of an analysis, either a supervised or an unsupervised learning approach can be applied. In a supervised context, an HMM is trained to classify unlabelled acceleration data into a finite set of pre-specified categories, whereas we will demonstrate how an unsupervised learning approach can be used to infer new aspects of animal behaviour. We provide the details necessary to implement and assess an HMM in both the supervised and unsupervised context, and discuss the data requirements of each case. We outline two applications to marine and aerial systems (sharks and eagles) taking the unsupervised approach, which is more readily applicable to animal activity measured in the field. HMMs were used to infer the effects of temporal, atmospheric and tidal inputs on animal behaviour. Animal accelerometer data allow ecologists to identify important correlates and drivers of animal activity (and hence behaviour). The HMM framework is well suited to deal with the main features commonly observed in accelerometer data. The ability to combine direct observations of animal activity with statistical models that account for the features of accelerometer data offers a new way to quantify animal behaviour and energetic expenditure, and deepens our insights into individual behaviour as a constituent of populations and ecosystems.
[ { "created": "Sat, 20 Feb 2016 21:41:27 GMT", "version": "v1" } ]
2016-02-23
[ [ "Leos-Barajas", "Vianey", "" ], [ "Photopoulou", "Theoni", "" ], [ "Langrock", "Roland", "" ], [ "Patterson", "Toby A.", "" ], [ "Watanabe", "Yuuki", "" ], [ "Murgatroyd", "Megan", "" ], [ "Papastamatiou", "Yannis P.", "" ] ]
Use of accelerometers is now widespread within animal biotelemetry as they provide a means of measuring an animal's activity in a meaningful and quantitative way where direct observation is not possible. In sequential acceleration data there is a natural dependence between observations of movement or behaviour, a fact that has been largely ignored in most analyses. Analyses of acceleration data where serial dependence has been explicitly modelled have largely relied on hidden Markov models (HMMs). Depending on the aim of an analysis, either a supervised or an unsupervised learning approach can be applied. In a supervised context, an HMM is trained to classify unlabelled acceleration data into a finite set of pre-specified categories, whereas we will demonstrate how an unsupervised learning approach can be used to infer new aspects of animal behaviour. We provide the details necessary to implement and assess an HMM in both the supervised and unsupervised context, and discuss the data requirements of each case. We outline two applications to marine and aerial systems (sharks and eagles) taking the unsupervised approach, which is more readily applicable to animal activity measured in the field. HMMs were used to infer the effects of temporal, atmospheric and tidal inputs on animal behaviour. Animal accelerometer data allow ecologists to identify important correlates and drivers of animal activity (and hence behaviour). The HMM framework is well suited to deal with the main features commonly observed in accelerometer data. The ability to combine direct observations of animal activity with statistical models that account for the features of accelerometer data offers a new way to quantify animal behaviour and energetic expenditure, and deepens our insights into individual behaviour as a constituent of populations and ecosystems.
0906.1472
Kay J\"org Wiese
Francois David and Kay Joerg Wiese
Field Theory of the RNA Freezing Transition
96 pages, 188 figures. v2: minor corrections
J. Stat. Mech. (2009) P10019
10.1088/1742-5468/2009/10/P10019
LPTENS 09/18, t09/074
q-bio.BM cond-mat.dis-nn q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Folding of RNA is subject to a competition between entropy, relevant at high temperatures, and the random, or random-looking, sequence, which determines the low-temperature phase. It is known from numerical simulations that for random as well as biological sequences, the high- and low-temperature phases are different, e.g. the exponent rho describing the pairing probability between two bases is rho = 3/2 in the high-temperature phase, and approximately 4/3 in the low-temperature (glass) phase. Here, we present, for random sequences, a field theory of the phase transition separating the high- and low-temperature phases. We establish the existence of the latter by showing that the underlying theory is renormalizable to all orders in perturbation theory. We test this result via an explicit 2-loop calculation, which yields rho approximately 1.36 at the transition, as well as diverse other critical exponents, including the response to an applied external force (denaturation transition).
[ { "created": "Mon, 8 Jun 2009 11:45:54 GMT", "version": "v1" }, { "created": "Tue, 16 Jun 2009 17:12:22 GMT", "version": "v2" } ]
2015-05-13
[ [ "David", "Francois", "" ], [ "Wiese", "Kay Joerg", "" ] ]
Folding of RNA is subject to a competition between entropy, relevant at high temperatures, and the random, or random-looking, sequence, which determines the low-temperature phase. It is known from numerical simulations that for random as well as biological sequences, the high- and low-temperature phases are different, e.g. the exponent rho describing the pairing probability between two bases is rho = 3/2 in the high-temperature phase, and approximately 4/3 in the low-temperature (glass) phase. Here, we present, for random sequences, a field theory of the phase transition separating the high- and low-temperature phases. We establish the existence of the latter by showing that the underlying theory is renormalizable to all orders in perturbation theory. We test this result via an explicit 2-loop calculation, which yields rho approximately 1.36 at the transition, as well as diverse other critical exponents, including the response to an applied external force (denaturation transition).
1106.4880
Ying Ding
Qian Zhu, Yuyin Sun, Sashikiran Challa, Ying Ding, Michael S. Lajiness, David J. Wild
Semantic Inference using Chemogenomics Data for Drug Discovery
23 pages, 9 figures, 4 tables
null
10.1186/1471-2105-12-256
null
q-bio.QM cs.DL cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Semantic Web Technology (SWT) makes it possible to integrate and search the large volume of life science datasets in the public domain, as demonstrated by well-known linked data projects such as LODD, Bio2RDF, and Chem2Bio2RDF. Integration of these sets creates large networks of information. We have previously described a tool called WENDI for aggregating information pertaining to new chemical compounds, effectively creating evidence paths relating the compounds to genes, diseases and so on. In this paper we examine the utility of automatically inferring new compound-disease associations (and thus new links in the network) based on semantically marked-up versions of these evidence paths, rule-sets and inference engines. Results: Through the implementation of a semantic inference algorithm, rule set, Semantic Web methods (RDF, OWL and SPARQL) and new interfaces, we have created a new tool called Chemogenomic Explorer that uses networks of ontologically annotated RDF statements along with deductive reasoning tools to infer new associations between the query structure and genes and diseases from WENDI results. The tool then permits interactive clustering and filtering of these evidence paths. Conclusions: We present a new aggregate approach to inferring links between chemical compounds and diseases using semantic inference. This approach allows multiple evidence paths between compounds and diseases to be identified using a rule-set and semantically annotated data, and for these evidence paths to be clustered to show overall evidence linking the compound to a disease. We believe this is a powerful approach, because it allows compound-disease relationships to be ranked by the amount of evidence supporting them.
[ { "created": "Fri, 24 Jun 2011 03:21:56 GMT", "version": "v1" } ]
2011-06-27
[ [ "Zhu", "Qian", "" ], [ "Sun", "Yuyin", "" ], [ "Challa", "Sashikiran", "" ], [ "Ding", "Ying", "" ], [ "Lajiness", "Michael S.", "" ], [ "Wild", "David J.", "" ] ]
Background: Semantic Web Technology (SWT) makes it possible to integrate and search the large volume of life science datasets in the public domain, as demonstrated by well-known linked data projects such as LODD, Bio2RDF, and Chem2Bio2RDF. Integration of these sets creates large networks of information. We have previously described a tool called WENDI for aggregating information pertaining to new chemical compounds, effectively creating evidence paths relating the compounds to genes, diseases and so on. In this paper we examine the utility of automatically inferring new compound-disease associations (and thus new links in the network) based on semantically marked-up versions of these evidence paths, rule-sets and inference engines. Results: Through the implementation of a semantic inference algorithm, rule set, Semantic Web methods (RDF, OWL and SPARQL) and new interfaces, we have created a new tool called Chemogenomic Explorer that uses networks of ontologically annotated RDF statements along with deductive reasoning tools to infer new associations between the query structure and genes and diseases from WENDI results. The tool then permits interactive clustering and filtering of these evidence paths. Conclusions: We present a new aggregate approach to inferring links between chemical compounds and diseases using semantic inference. This approach allows multiple evidence paths between compounds and diseases to be identified using a rule-set and semantically annotated data, and for these evidence paths to be clustered to show overall evidence linking the compound to a disease. We believe this is a powerful approach, because it allows compound-disease relationships to be ranked by the amount of evidence supporting them.
1810.11594
Brian Hu
Brian Hu and Stefan Mihalas
Convolutional neural networks with extra-classical receptive fields
null
null
null
null
q-bio.NC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional neural networks (CNNs) have had great success in many real-world applications and have also been used to model visual processing in the brain. However, these networks are quite brittle - small changes in the input image can dramatically change a network's output prediction. In contrast to what is known from biology, these networks largely rely on feedforward connections, ignoring the influence of recurrent connections. They also focus on supervised rather than unsupervised learning. To address these issues, we combine traditional supervised learning via backpropagation with a specialized unsupervised learning rule to learn lateral connections between neurons within a convolutional neural network. These connections have been shown to optimally integrate information from the surround, generating extra-classical receptive fields for the neurons in our new proposed model (CNNEx). Models with optimal lateral connections are more robust to noise and achieve better performance on noisy versions of the MNIST and CIFAR-10 datasets. Resistance to noise can be further improved by combining our model with additional regularization techniques such as dropout and weight decay. Although the image statistics of MNIST and CIFAR-10 differ greatly, the same unsupervised learning rule generalized to both datasets. Our results demonstrate the potential usefulness of combining supervised and unsupervised learning techniques and suggest that the integration of lateral connections into convolutional neural networks is an important area of future research.
[ { "created": "Sat, 27 Oct 2018 04:15:50 GMT", "version": "v1" } ]
2018-10-30
[ [ "Hu", "Brian", "" ], [ "Mihalas", "Stefan", "" ] ]
Convolutional neural networks (CNNs) have had great success in many real-world applications and have also been used to model visual processing in the brain. However, these networks are quite brittle - small changes in the input image can dramatically change a network's output prediction. In contrast to what is known from biology, these networks largely rely on feedforward connections, ignoring the influence of recurrent connections. They also focus on supervised rather than unsupervised learning. To address these issues, we combine traditional supervised learning via backpropagation with a specialized unsupervised learning rule to learn lateral connections between neurons within a convolutional neural network. These connections have been shown to optimally integrate information from the surround, generating extra-classical receptive fields for the neurons in our new proposed model (CNNEx). Models with optimal lateral connections are more robust to noise and achieve better performance on noisy versions of the MNIST and CIFAR-10 datasets. Resistance to noise can be further improved by combining our model with additional regularization techniques such as dropout and weight decay. Although the image statistics of MNIST and CIFAR-10 differ greatly, the same unsupervised learning rule generalized to both datasets. Our results demonstrate the potential usefulness of combining supervised and unsupervised learning techniques and suggest that the integration of lateral connections into convolutional neural networks is an important area of future research.
1908.01917
Louxin Zhang
Gabriel Cardona, Louxin Zhang
Counting Tree-Child Networks and Their Subclasses
24 pages, 2 tables and 9 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Galled trees are studied as a recombination model in population genetics. This class of phylogenetic networks is generalized into tree-child, galled and reticulation-visible network classes by relaxing a structural condition imposed on galled trees. We count tree-child networks through enumerating their component graphs. Explicit counting formulas are also given for galled trees through their relationship to ordered trees, phylogenetic networks with few reticulations and phylogenetic networks in which the child of each reticulation is a leaf.
[ { "created": "Tue, 6 Aug 2019 01:07:43 GMT", "version": "v1" }, { "created": "Wed, 7 Aug 2019 23:25:55 GMT", "version": "v2" }, { "created": "Wed, 26 Feb 2020 21:58:09 GMT", "version": "v3" } ]
2020-02-28
[ [ "Cardona", "Gabriel", "" ], [ "Zhang", "Louxin", "" ] ]
Galled trees are studied as a recombination model in population genetics. This class of phylogenetic networks is generalized into tree-child, galled and reticulation-visible network classes by relaxing a structural condition imposed on galled trees. We count tree-child networks through enumerating their component graphs. Explicit counting formulas are also given for galled trees through their relationship to ordered trees, phylogenetic networks with few reticulations and phylogenetic networks in which the child of each reticulation is a leaf.
1508.03026
David Murrugarra
David Murrugarra and Elena S. Dimitrova
Molecular Network Control Through Boolean Canalization
null
EURASIP Journal on Bioinformatics and Systems Biology, 2015:9, 2015
10.1186/s13637-015-0029-2
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Boolean networks are an important class of computational models for molecular interaction networks. Boolean canalization, a type of hierarchical clustering of the inputs of a Boolean function, has been extensively studied in the context of network modeling where each layer of canalization adds a degree of stability in the dynamics of the network. Recently, dynamic network control approaches have been used for the design of new therapeutic interventions and for other applications such as stem cell reprogramming. This work studies the role of canalization in the control of Boolean molecular networks. It provides a method for identifying the potential edges to control in the wiring diagram of a network for avoiding undesirable state transitions. The method is based on identifying appropriate input-output combinations on undesirable transitions that can be modified using the edges in the wiring diagram of the network. Moreover, a method for estimating the number of changed transitions in the state space of the system as a result of an edge deletion in the wiring diagram is presented. The control methods of this paper were applied to a mutated cell-cycle model and to a p53-mdm2 model to identify potential control targets.
[ { "created": "Wed, 12 Aug 2015 18:49:12 GMT", "version": "v1" }, { "created": "Sun, 25 Oct 2015 00:27:17 GMT", "version": "v2" } ]
2024-07-09
[ [ "Murrugarra", "David", "" ], [ "Dimitrova", "Elena S.", "" ] ]
Boolean networks are an important class of computational models for molecular interaction networks. Boolean canalization, a type of hierarchical clustering of the inputs of a Boolean function, has been extensively studied in the context of network modeling where each layer of canalization adds a degree of stability in the dynamics of the network. Recently, dynamic network control approaches have been used for the design of new therapeutic interventions and for other applications such as stem cell reprogramming. This work studies the role of canalization in the control of Boolean molecular networks. It provides a method for identifying the potential edges to control in the wiring diagram of a network for avoiding undesirable state transitions. The method is based on identifying appropriate input-output combinations on undesirable transitions that can be modified using the edges in the wiring diagram of the network. Moreover, a method for estimating the number of changed transitions in the state space of the system as a result of an edge deletion in the wiring diagram is presented. The control methods of this paper were applied to a mutated cell-cycle model and to a p53-mdm2 model to identify potential control targets.
2005.10598
Peter Thomas PhD
Shusen Pu and Peter J. Thomas
Fast and Accurate Langevin Simulations of Stochastic Hodgkin-Huxley Dynamics
55 pages, 9 figures
null
10.1162/neco_a_01312
null
q-bio.NC cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fox and Lu introduced a Langevin framework for discrete-time stochastic models of randomly gated ion channels such as the Hodgkin-Huxley (HH) system. They derived a Fokker-Planck equation with state-dependent diffusion tensor $D$ and suggested a Langevin formulation with noise coefficient matrix $S$ such that $SS^\intercal=D$. Subsequently, several authors introduced a variety of Langevin equations for the HH system. In this paper, we present a natural 14-dimensional dynamics for the HH system in which each \emph{directed} edge in the ion channel state transition graph acts as an independent noise source, leading to a $14\times 28$ noise coefficient matrix $S$. We show that (i) the corresponding 14D system of ordinary differential equations is consistent with the classical 4D representation of the HH system; (ii) the 14D representation leads to a noise coefficient matrix $S$ that can be obtained cheaply on each timestep, without requiring a matrix decomposition; (iii) sample trajectories of the 14D representation are pathwise equivalent to trajectories of Fox and Lu's system, as well as trajectories of several existing Langevin models; (iv) our 14D representation (and those equivalent to it) give the most accurate interspike-interval distribution, not only with respect to moments but under both the $L_1$ and $L_\infty$ metric-space norms; and (v) the 14D representation gives an approximation to exact Markov chain simulations that are as fast and as efficient as all equivalent models. Our approach goes beyond existing models, in that it supports a stochastic shielding decomposition that dramatically simplifies $S$ with minimal loss of accuracy under both voltage- and current-clamp conditions.
[ { "created": "Thu, 21 May 2020 12:19:56 GMT", "version": "v1" } ]
2020-11-18
[ [ "Pu", "Shusen", "" ], [ "Thomas", "Peter J.", "" ] ]
Fox and Lu introduced a Langevin framework for discrete-time stochastic models of randomly gated ion channels such as the Hodgkin-Huxley (HH) system. They derived a Fokker-Planck equation with state-dependent diffusion tensor $D$ and suggested a Langevin formulation with noise coefficient matrix $S$ such that $SS^\intercal=D$. Subsequently, several authors introduced a variety of Langevin equations for the HH system. In this paper, we present a natural 14-dimensional dynamics for the HH system in which each \emph{directed} edge in the ion channel state transition graph acts as an independent noise source, leading to a $14\times 28$ noise coefficient matrix $S$. We show that (i) the corresponding 14D system of ordinary differential equations is consistent with the classical 4D representation of the HH system; (ii) the 14D representation leads to a noise coefficient matrix $S$ that can be obtained cheaply on each timestep, without requiring a matrix decomposition; (iii) sample trajectories of the 14D representation are pathwise equivalent to trajectories of Fox and Lu's system, as well as trajectories of several existing Langevin models; (iv) our 14D representation (and those equivalent to it) give the most accurate interspike-interval distribution, not only with respect to moments but under both the $L_1$ and $L_\infty$ metric-space norms; and (v) the 14D representation gives an approximation to exact Markov chain simulations that are as fast and as efficient as all equivalent models. Our approach goes beyond existing models, in that it supports a stochastic shielding decomposition that dramatically simplifies $S$ with minimal loss of accuracy under both voltage- and current-clamp conditions.
2307.01499
Roozbeh Farhoodi
Roozbeh Farhoodi, Phil Wilkes, Anirudh M. Natarajan, Samantha Ing-Esteves, Julie L. Lefebvre, Mathias Disney, Konrad P. Kording
Comparing dendritic trees with actual trees
null
null
null
null
q-bio.NC q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Since they became observable, neuron morphologies have been informally compared with biological trees, but the two are studied by distinct communities: neuroscientists and ecologists. The apparent structural similarity suggests there may be common quantitative rules and constraints. However, there are also reasons to believe they should be different. For example, while the environments of trees may be relatively simple, neurons are constructed by a complex iterative program where synapses are made and pruned. This complexity may make neurons less self-similar than trees. Here we test this hypothesis by comparing the features of segmented sub-trees with those of the whole tree. We indeed find more self-similarity within actual trees than within neurons. At the same time, we find that many other features are somewhat comparable across the two. Investigation of shapes and behaviors promises new ways of conceptualizing the form-function link.
[ { "created": "Tue, 4 Jul 2023 06:12:28 GMT", "version": "v1" } ]
2023-07-06
[ [ "Farhoodi", "Roozbeh", "" ], [ "Wilkes", "Phil", "" ], [ "Natarajan", "Anirudh M.", "" ], [ "Ing-Esteves", "Samantha", "" ], [ "Lefebvre", "Julie L.", "" ], [ "Disney", "Mathias", "" ], [ "Kording", "Konrad P.", "" ] ]
Since they became observable, neuron morphologies have been informally compared with biological trees, but the two are studied by distinct communities: neuroscientists and ecologists. The apparent structural similarity suggests there may be common quantitative rules and constraints. However, there are also reasons to believe they should be different. For example, while the environments of trees may be relatively simple, neurons are constructed by a complex iterative program where synapses are made and pruned. This complexity may make neurons less self-similar than trees. Here we test this hypothesis by comparing the features of segmented sub-trees with those of the whole tree. We indeed find more self-similarity within actual trees than within neurons. At the same time, we find that many other features are somewhat comparable across the two. Investigation of shapes and behaviors promises new ways of conceptualizing the form-function link.
1312.3926
Nicholas Dulvy
Nicholas K. Dulvy, Sarah L. Fowler, John A. Musick, Rachel D. Cavanagh, Peter M. Kyne, Lucy R. Harrison, John K. Carlson, Lindsay N. K. Davidson, Sonja V. Fordham, Malcolm P. Francis, Caroline M. Pollock, Colin A. Simpfendorfer, George H. Burgess, Kent E. Carpenter, Leonard J. V. Compagno, David A. Ebert, Claudine Gibson, Michelle R. Heupel, Suzanne R. Livingstone, Jonnell C. Sanciangco, John D. Stevens, Sarah Valenti, William T. White
Extinction risk and conservation of the world's sharks and rays
Accepted for publication in eLIFE on 5th December 2013. 83 pages, 9 tables, 10 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/3.0/
The rapid expansion of human activities threatens ocean-wide biodiversity loss. Numerous marine animal populations have declined, yet it remains unclear whether these trends are symptomatic of a chronic accumulation of global marine extinction risk. We present the first systematic analysis of threat for a globally-distributed lineage of 1,041 chondrichthyan fishes - sharks, rays, and chimaeras. We estimate that one-quarter are threatened according to IUCN Red List criteria due to overfishing (targeted and incidental). Large-bodied, shallow-water species are at greatest risk and five out of the seven most threatened families are rays. Overall chondrichthyan extinction risk is substantially higher than for most other vertebrates, and only one-third of species are considered safe. Population depletion has occurred throughout the world's ice-free waters, but is particularly prevalent in the Indo-Pacific Biodiversity Triangle and Mediterranean Sea. Improved management of fisheries and trade is urgently needed to avoid extinctions and promote population recovery.
[ { "created": "Thu, 12 Dec 2013 18:03:52 GMT", "version": "v1" } ]
2013-12-16
[ [ "Dulvy", "Nicholas K.", "" ], [ "Fowler", "Sarah L.", "" ], [ "Musick", "John A.", "" ], [ "Cavanagh", "Rachel D.", "" ], [ "Kyne", "Peter M.", "" ], [ "Harrison", "Lucy R.", "" ], [ "Carlson", "John K.", "" ], [ "Davidson", "Lindsay N. K.", "" ], [ "Fordham", "Sonja V.", "" ], [ "Francis", "Malcolm P.", "" ], [ "Pollock", "Caroline M.", "" ], [ "Simpfendorfer", "Colin A.", "" ], [ "Burgess", "George H.", "" ], [ "Carpenter", "Kent E.", "" ], [ "Compagno", "Leonard J. V.", "" ], [ "Ebert", "David A.", "" ], [ "Gibson", "Claudine", "" ], [ "Heupel", "Michelle R.", "" ], [ "Livingstone", "Suzanne R.", "" ], [ "Sanciangco", "Jonnell C.", "" ], [ "Stevens", "John D.", "" ], [ "Valenti", "Sarah", "" ], [ "White", "William T.", "" ] ]
The rapid expansion of human activities threatens ocean-wide biodiversity loss. Numerous marine animal populations have declined, yet it remains unclear whether these trends are symptomatic of a chronic accumulation of global marine extinction risk. We present the first systematic analysis of threat for a globally-distributed lineage of 1,041 chondrichthyan fishes - sharks, rays, and chimaeras. We estimate that one-quarter are threatened according to IUCN Red List criteria due to overfishing (targeted and incidental). Large-bodied, shallow-water species are at greatest risk and five out of the seven most threatened families are rays. Overall chondrichthyan extinction risk is substantially higher than for most other vertebrates, and only one-third of species are considered safe. Population depletion has occurred throughout the world's ice-free waters, but is particularly prevalent in the Indo-Pacific Biodiversity Triangle and Mediterranean Sea. Improved management of fisheries and trade is urgently needed to avoid extinctions and promote population recovery.
1509.09285
Mallenahalli Naresh Kumar Prof. Dr.
C. S. Murthy, M. V. R. Sesha Sai, M. Naresh Kumar, P. S. Roy
Temporal divergence in cropping pattern and its implications on geospatial drought assessment
9 pages, 14 figures
Geocarto International 10/2009, 24(5):377-395
10.1080/10106040802601037
null
q-bio.QM physics.geo-ph q-bio.PE
http://creativecommons.org/licenses/by-sa/4.0/
Time series data on cropping patterns at a disaggregated level were analysed and their implications for geospatial drought assessment were demonstrated. An index of Cropping Pattern Dissimilarity (CP-DI) between a pair of years, developed in this study, showed that the cropping pattern of a given year is similar only to that of recent past years and becomes increasingly dissimilar over longer time differences. The temporal divergence in cropping pattern has direct implications for the geospatial approach to drought assessment, in which time series NDVI data are compared for drought interpretation. It was found that seasonal NDVI profiles of a drought year and a normal year did not show any anomaly when the cropping patterns were dissimilar, and two normal years with dissimilar cropping patterns showed different NDVI profiles. Therefore, it is suggested that such temporal comparisons of NDVI are better restricted to recent past years to achieve a more objective interpretation.
[ { "created": "Sat, 26 Sep 2015 14:25:35 GMT", "version": "v1" } ]
2016-10-31
[ [ "Murthy", "C. S.", "" ], [ "Sai", "M. V. R. Sesha", "" ], [ "Kumar", "M. Naresh", "" ], [ "Roy", "P. S.", "" ] ]
Time series data on cropping patterns at a disaggregated level were analysed and their implications for geospatial drought assessment were demonstrated. An index of Cropping Pattern Dissimilarity (CP-DI) between a pair of years, developed in this study, showed that the cropping pattern of a given year is similar only to that of recent past years and becomes increasingly dissimilar over longer time differences. The temporal divergence in cropping pattern has direct implications for the geospatial approach to drought assessment, in which time series NDVI data are compared for drought interpretation. It was found that seasonal NDVI profiles of a drought year and a normal year did not show any anomaly when the cropping patterns were dissimilar, and two normal years with dissimilar cropping patterns showed different NDVI profiles. Therefore, it is suggested that such temporal comparisons of NDVI are better restricted to recent past years to achieve a more objective interpretation.
2202.04501
Hossein Nemati
Hossein Nemati (1), Mohammad Reza Ejtehadi (1), Kamran Kaveh (2) ((1) Sharif University of Technology, Physics Department (2) University of Washington, Department of Applied Mathematics)
Counterintuitive properties of fixation probability and fixation time in population structures with spatially periodic resource distribution
28 pages, 14 figures (10 in main text and 4 in appendices). Corresponding authors: Kamran Kaveh(kkaveh@uw.edu, kkavehma@gmail.com) and Mohammad Reza Ejtehadi(ejtehadi@sharif.edu)
null
null
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Resources are often not uniformly distributed within a population. Spatial variations in the concentration of a resource change the fitness of competing strategies locally. The notion of fitness varying with respect to both genotype and environment is important in modeling cancer initiation, microbial evolution and the evolution of drug resistance. Environmental interactions can be asymmetric, that is, they affect the fitness of one type more than the other. The question is how local environmental variations in network population structures change the selection dynamics in a finite population setting. We consider one-dimensional lattice population structures with spatial fitness distributions with a periodic pattern. Heterogeneity is determined by the standard deviation of fitnesses and the period. The model covers biologically relevant limits of two-habitat subdivided populations and randomly distributed resources in the high- and low-period limits. We numerically calculate fixation probabilities and fixation times for a constant-population birth-death process as fitness heterogeneity and period vary. We identify levels of heterogeneity for which a mutant that is deleterious in a uniform environment becomes beneficial. In other regimes of the problem we observe unexpected behavior where the fixation probabilities of both types are larger than their neutral value at the same time. This coincides with an exponential increase in time to fixation as a function of population size, which points to a significant slow-down in the selection process and the potential for coexistence between types on realistic time scales. We also discuss a `fitness shift' model in which the fitness function of one type is identical to that of the other up to a constant spatial shift. This leads to a significant increase (or decrease) in the fixation probability of the mutant depending on the value of the shift.
[ { "created": "Wed, 9 Feb 2022 15:03:18 GMT", "version": "v1" }, { "created": "Wed, 6 Apr 2022 18:24:38 GMT", "version": "v2" } ]
2022-04-08
[ [ "Nemati", "Hossein", "" ], [ "Ejtehadi", "Mohammad Reza", "" ], [ "Kaveh", "Kamran", "" ] ]
Resources are often not uniformly distributed within a population. Spatial variations in the concentration of a resource change the fitness of competing strategies locally. The notion of fitness varying with respect to both genotype and environment is important in modeling cancer initiation, microbial evolution and the evolution of drug resistance. Environmental interactions can be asymmetric, that is, they affect the fitness of one type more than the other. The question is how local environmental variations in network population structures change the selection dynamics in a finite population setting. We consider one-dimensional lattice population structures with spatial fitness distributions with a periodic pattern. Heterogeneity is determined by the standard deviation of fitnesses and the period. The model covers biologically relevant limits of two-habitat subdivided populations and randomly distributed resources in the high- and low-period limits. We numerically calculate fixation probabilities and fixation times for a constant-population birth-death process as fitness heterogeneity and period vary. We identify levels of heterogeneity for which a mutant that is deleterious in a uniform environment becomes beneficial. In other regimes of the problem we observe unexpected behavior where the fixation probabilities of both types are larger than their neutral value at the same time. This coincides with an exponential increase in time to fixation as a function of population size, which points to a significant slow-down in the selection process and the potential for coexistence between types on realistic time scales. We also discuss a `fitness shift' model in which the fitness function of one type is identical to that of the other up to a constant spatial shift. This leads to a significant increase (or decrease) in the fixation probability of the mutant depending on the value of the shift.
q-bio/0410008
Lior Pachter
Roderic Guigo, Ewan Birney, Michael Brent, Emmanouil Dermitzakis, Lior Pachter, Hugues Roest Crollius, Victor Solovyev, Michael Q. Zhang
Needed for completion of the human genome: hypothesis driven experiments and biologically realistic mathematical models
Report and discussion resulting from the `Fundacio La Caixa' gene finding meeting held November 21 and 22 2003 in Barcelona
null
null
null
q-bio.GN
null
With the sponsorship of ``Fundacio La Caixa'' we met in Barcelona, November 21st and 22nd, to analyze the reasons why, after the completion of the human genome sequence, the identification of all protein-coding genes and their variants remains a distant goal. Here we report on our discussions and summarize some of the major challenges that need to be overcome in order to complete the human gene catalog.
[ { "created": "Wed, 6 Oct 2004 22:08:40 GMT", "version": "v1" } ]
2007-05-23
[ [ "Guigo", "Roderic", "" ], [ "Birney", "Ewan", "" ], [ "Brent", "Michael", "" ], [ "Dermitzakis", "Emmanouil", "" ], [ "Pachter", "Lior", "" ], [ "Crollius", "Hugues Roest", "" ], [ "Solovyev", "Victor", "" ], [ "Zhang", "Michael Q.", "" ] ]
With the sponsorship of ``Fundacio La Caixa'' we met in Barcelona, November 21st and 22nd, to analyze the reasons why, after the completion of the human genome sequence, the identification of all protein-coding genes and their variants remains a distant goal. Here we report on our discussions and summarize some of the major challenges that need to be overcome in order to complete the human gene catalog.
2008.03135
Babacar Mbaye Ndiaye
Babacar Mbaye Ndiaye, Mouhamadou A.M.T. Balde, Diaraf Seck
Visualization and machine learning for forecasting of COVID-19 in Senegal
null
null
null
null
q-bio.PE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, we present visualizations and different machine learning techniques for two-week and 40-day-ahead forecasts based on public data. On July 15, 2020, Senegal reopened its airspace, while the number of confirmed cases was still increasing. The population no longer respects hygiene measures and social distancing as it did at the beginning of the outbreak. Negligence, or fatigue from always wearing masks? We forecast the inflection point and a possible ending time.
[ { "created": "Thu, 6 Aug 2020 15:50:30 GMT", "version": "v1" } ]
2020-08-10
[ [ "Ndiaye", "Babacar Mbaye", "" ], [ "Balde", "Mouhamadou A. M. T.", "" ], [ "Seck", "Diaraf", "" ] ]
In this article, we present visualizations and different machine learning techniques for two-week and 40-day-ahead forecasts based on public data. On July 15, 2020, Senegal reopened its airspace, while the number of confirmed cases was still increasing. The population no longer respects hygiene measures and social distancing as it did at the beginning of the outbreak. Negligence, or fatigue from always wearing masks? We forecast the inflection point and a possible ending time.
2006.00280
Changchuan Yin Dr.
Changchuan Yin
Dinucleotide repeats in coronavirus SARS-CoV-2 genome: evolutionary implications
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ongoing global pandemic of the infectious disease COVID-19, caused by the 2019 novel coronavirus (SARS-CoV-2, formerly 2019-nCoV), presents critical threats to public health and the economy since it was identified in China in December 2019. The genome of SARS-CoV-2 has been sequenced and structurally annotated, yet little is known of the intrinsic organization and evolution of the genome. To this end, we present a mathematical method for the genomic spectrum, a kind of barcode, of SARS-CoV-2 and common human coronaviruses. The genomic spectrum is constructed according to the periodic distributions of nucleotides, and therefore reflects the unique characteristics of the genome. The results demonstrate that coronavirus SARS-CoV-2 exhibits dinucleotide TT islands in the non-structural proteins 3, 4, 5, and 6. Further analysis of the dinucleotide regions suggests that the dinucleotide repeats are increased during evolution and may confer evolutionary fitness on the virus. The special dinucleotide regions in the SARS-CoV-2 genome identified in this study may become diagnostic and pharmaceutical targets for monitoring and treating COVID-19.
[ { "created": "Sat, 30 May 2020 14:17:50 GMT", "version": "v1" } ]
2020-06-02
[ [ "Yin", "Changchuan", "" ] ]
The ongoing global pandemic of the infectious disease COVID-19, caused by the 2019 novel coronavirus (SARS-CoV-2, formerly 2019-nCoV), presents critical threats to public health and the economy since it was identified in China in December 2019. The genome of SARS-CoV-2 has been sequenced and structurally annotated, yet little is known of the intrinsic organization and evolution of the genome. To this end, we present a mathematical method for the genomic spectrum, a kind of barcode, of SARS-CoV-2 and common human coronaviruses. The genomic spectrum is constructed according to the periodic distributions of nucleotides, and therefore reflects the unique characteristics of the genome. The results demonstrate that coronavirus SARS-CoV-2 exhibits dinucleotide TT islands in the non-structural proteins 3, 4, 5, and 6. Further analysis of the dinucleotide regions suggests that the dinucleotide repeats are increased during evolution and may confer evolutionary fitness on the virus. The special dinucleotide regions in the SARS-CoV-2 genome identified in this study may become diagnostic and pharmaceutical targets for monitoring and treating COVID-19.
2108.01210
Joel Ye
Joel Ye, Chethan Pandarinath
Representation learning for neural population activity with Neural Data Transformers
null
null
10.51628/001c.27358
null
q-bio.NC cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Neural population activity is theorized to reflect an underlying dynamical structure. This structure can be accurately captured using state space models with explicit dynamics, such as those based on recurrent neural networks (RNNs). However, using recurrence to explicitly model dynamics necessitates sequential processing of data, slowing real-time applications such as brain-computer interfaces. Here we introduce the Neural Data Transformer (NDT), a non-recurrent alternative. We test the NDT's ability to capture autonomous dynamical systems by applying it to synthetic datasets with known dynamics and data from monkey motor cortex during a reaching task well-modeled by RNNs. The NDT models these datasets as well as state-of-the-art recurrent models. Further, its non-recurrence enables 3.9ms inference, well within the loop time of real-time applications and more than 6 times faster than recurrent baselines on the monkey reaching dataset. These results suggest that an explicit dynamics model is not necessary to model autonomous neural population dynamics. Code: https://github.com/snel-repo/neural-data-transformers
[ { "created": "Mon, 2 Aug 2021 23:36:39 GMT", "version": "v1" } ]
2023-07-21
[ [ "Ye", "Joel", "" ], [ "Pandarinath", "Chethan", "" ] ]
Neural population activity is theorized to reflect an underlying dynamical structure. This structure can be accurately captured using state space models with explicit dynamics, such as those based on recurrent neural networks (RNNs). However, using recurrence to explicitly model dynamics necessitates sequential processing of data, slowing real-time applications such as brain-computer interfaces. Here we introduce the Neural Data Transformer (NDT), a non-recurrent alternative. We test the NDT's ability to capture autonomous dynamical systems by applying it to synthetic datasets with known dynamics and data from monkey motor cortex during a reaching task well-modeled by RNNs. The NDT models these datasets as well as state-of-the-art recurrent models. Further, its non-recurrence enables 3.9ms inference, well within the loop time of real-time applications and more than 6 times faster than recurrent baselines on the monkey reaching dataset. These results suggest that an explicit dynamics model is not necessary to model autonomous neural population dynamics. Code: https://github.com/snel-repo/neural-data-transformers
1011.1192
Dante Chialvo
Dante R. Chialvo, Daniel Fraiman
What kind of noise is brain noise: anomalous scaling behavior of the resting brain activity fluctuations
null
Frontiers in Fractals Physiology, (3) 307 (2012)
10.3389/fphys.2012.00307
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The continuous interaction between brain regions "at rest" defines the so-called resting state networks (RSN), which can be reconstructed from the analysis of functional magnetic resonance imaging (fMRI) data. What dynamical mechanism allows for a flexible large-scale organization of the RSN still remains an important challenge. Here, three key novel properties of the RSN are uncovered. First, the correlation length (i.e., the length at which correlation between two regions vanishes) diverges with the size of the cluster considered. Second, this divergence is also observed for measures of mutual information. Third, the variance of the fMRI mean signal remains constant across the entire range of observed cluster sizes, in contrast with naive expectations. The unveiled scale invariance exposes the RSN's optimal information-sharing properties across very diverse network sizes, architectures and functions, which can be an important marker of healthy brain dynamics.
[ { "created": "Thu, 4 Nov 2010 15:54:51 GMT", "version": "v1" }, { "created": "Thu, 9 Aug 2012 09:50:50 GMT", "version": "v2" } ]
2012-08-10
[ [ "Chialvo", "Dante R.", "" ], [ "Fraiman", "Daniel", "" ] ]
The continuous interaction between brain regions "at rest" defines the so-called resting state networks (RSN), which can be reconstructed from the analysis of functional magnetic resonance imaging (fMRI) data. What dynamical mechanism allows for a flexible large-scale organization of the RSN still remains an important challenge. Here, three key novel properties of the RSN are uncovered. First, the correlation length (i.e., the length at which correlation between two regions vanishes) diverges with the size of the cluster considered. Second, this divergence is also observed for measures of mutual information. Third, the variance of the fMRI mean signal remains constant across the entire range of observed cluster sizes, in contrast with naive expectations. The unveiled scale invariance exposes the RSN's optimal information-sharing properties across very diverse network sizes, architectures and functions, which can be an important marker of healthy brain dynamics.
2307.11555
Laurent Perrinet
Laurent U Perrinet
Accurate Detection of Spiking Motifs by Learning Heterogeneous Delays of a Spiking Neural Network
ICANN 2023 Special Session on Recent Advances in Spiking Neural Networks - Conference paper
null
10.1007/978-3-031-44207-0_31
null
q-bio.NC
http://creativecommons.org/licenses/by-sa/4.0/
Recently, interest has grown in exploring the hypothesis that neural activity conveys information through precise spiking motifs. To investigate this phenomenon, various algorithms have been proposed to detect such motifs in Single Unit Activity (SUA) recorded from populations of neurons. In this study, we present a novel detection model based on the inversion of a generative model of raster plot synthesis. Using this generative model, we derive an optimal detection procedure that takes the form of logistic regression combined with temporal convolution. A key advantage of this model is its differentiability, which allows us to formulate a supervised learning approach using a gradient descent on the binary cross-entropy loss. To assess the model's ability to detect spiking motifs in synthetic data, we first perform numerical evaluations. This analysis highlights the advantages of using spiking motifs over traditional firing rate based population codes. We then successfully demonstrate that our learning method can recover synthetically generated spiking motifs, indicating its potential for further applications. In the future, we aim to extend this method to real neurobiological data, where the ground truth is unknown, to explore and detect spiking motifs in a more natural and biologically relevant context.
[ { "created": "Fri, 21 Jul 2023 13:04:28 GMT", "version": "v1" }, { "created": "Mon, 25 Sep 2023 19:29:15 GMT", "version": "v2" } ]
2023-09-27
[ [ "Perrinet", "Laurent U", "" ] ]
Recently, interest has grown in exploring the hypothesis that neural activity conveys information through precise spiking motifs. To investigate this phenomenon, various algorithms have been proposed to detect such motifs in Single Unit Activity (SUA) recorded from populations of neurons. In this study, we present a novel detection model based on the inversion of a generative model of raster plot synthesis. Using this generative model, we derive an optimal detection procedure that takes the form of logistic regression combined with temporal convolution. A key advantage of this model is its differentiability, which allows us to formulate a supervised learning approach using a gradient descent on the binary cross-entropy loss. To assess the model's ability to detect spiking motifs in synthetic data, we first perform numerical evaluations. This analysis highlights the advantages of using spiking motifs over traditional firing rate based population codes. We then successfully demonstrate that our learning method can recover synthetically generated spiking motifs, indicating its potential for further applications. In the future, we aim to extend this method to real neurobiological data, where the ground truth is unknown, to explore and detect spiking motifs in a more natural and biologically relevant context.
1803.02953
Hang Xie
Hang Xie, Yang Jiao, Qihui Fan, Miaomiao Hai, Jiaen Yang, Zhijian Hu, Yue Yang, Jianwei Shuai, Guo Chen, Ruchuan Liu, Liyu Liu
Modeling Three-dimensional Invasive Solid Tumor Growth in Heterogeneous Microenvironment under Chemotherapy
41 pages, 8 figures
null
10.1371/journal.pone.0206292
null
q-bio.TO q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A systematic understanding of the evolution and growth dynamics of invasive solid tumors in response to different chemotherapy strategies is crucial for the development of individually optimized oncotherapy. Here, we develop a hybrid three-dimensional (3D) computational model that integrates a pharmacokinetic model, a continuum diffusion-reaction model, and a discrete cell automaton model to investigate 3D invasive solid tumor growth in a heterogeneous microenvironment under chemotherapy. Specifically, we consider the effects of the heterogeneous environment on drug diffusion, tumor growth, and invasion, as well as the drug-tumor interaction at the individual cell level. We employ the hybrid model to investigate the evolution and growth dynamics of avascular invasive solid tumors under different chemotherapy strategies. Our simulations reproduce the well-established observation that constant dosing is generally more effective in suppressing primary tumor growth than periodic dosing, due to the resulting continuous high drug concentration. In a highly heterogeneous microenvironment, the malignancy of the tumor is significantly enhanced, leading to inefficiency of chemotherapies. The effects of a geometrically confined microenvironment and non-uniform drug dosing are also investigated. Our computational model, when supplemented with sufficient clinical data, could eventually lead to the development of efficient in silico tools for prognosis and treatment strategy optimization.
[ { "created": "Thu, 8 Mar 2018 03:28:50 GMT", "version": "v1" } ]
2020-07-01
[ [ "Xie", "Hang", "" ], [ "Jiao", "Yang", "" ], [ "Fan", "Qihui", "" ], [ "Hai", "Miaomiao", "" ], [ "Yang", "Jiaen", "" ], [ "Hu", "Zhijian", "" ], [ "Yang", "Yue", "" ], [ "Shuai", "Jianwei", "" ], [ "Chen", "Guo", "" ], [ "Liu", "Ruchuan", "" ], [ "Liu", "Liyu", "" ] ]
A systematic understanding of the evolution and growth dynamics of invasive solid tumors in response to different chemotherapy strategies is crucial for the development of individually optimized oncotherapy. Here, we develop a hybrid three-dimensional (3D) computational model that integrates a pharmacokinetic model, a continuum diffusion-reaction model, and a discrete cell automaton model to investigate 3D invasive solid tumor growth in a heterogeneous microenvironment under chemotherapy. Specifically, we consider the effects of the heterogeneous environment on drug diffusion, tumor growth, and invasion, as well as the drug-tumor interaction at the individual cell level. We employ the hybrid model to investigate the evolution and growth dynamics of avascular invasive solid tumors under different chemotherapy strategies. Our simulations reproduce the well-established observation that constant dosing is generally more effective in suppressing primary tumor growth than periodic dosing, due to the resulting continuous high drug concentration. In a highly heterogeneous microenvironment, the malignancy of the tumor is significantly enhanced, leading to inefficiency of chemotherapies. The effects of a geometrically confined microenvironment and non-uniform drug dosing are also investigated. Our computational model, when supplemented with sufficient clinical data, could eventually lead to the development of efficient in silico tools for prognosis and treatment strategy optimization.
1811.01935
Sudeepto Bhattacharya Dr
Shashankaditya Upadhyay, Sudeepto Bhattacharya
A comparative study of ecological networks using spectral projections of normalized graph Laplacian
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecological networks originating from three different ecological processes are examined and cross-compared to assess whether the underlying ecological processes in these systems produce considerable differences in the structure of the networks. The absence of any significant difference in structure may indicate the possibility of a universal structural pattern in these ecological networks. The underlying graphs of the networks derived from the ecological processes, namely host-parasite interaction, plant pollination, and seed dispersal, are all bipartite graphs, and thus several algebraic structural measures fail to distinguish between the structures of these networks. In this work we use the weighted spectral distribution (WSD) of the normalized graph Laplacian, which has been effectively used earlier to discriminate graphs with different topologies, to investigate the possibility of structural dissimilarity in these networks. The graph spectrum is often considered a signature of the graph, and the WSD of the graph Laplacian is known to be related to the distribution of certain small subgraphs in a graph and hence represents the global structure of a network. We use random projections of the WSD to $\mathbb{R}^{2}$ and $\mathbb{R}^{3}$ and establish that the structure of plant-pollinator networks is significantly different from that of host-parasite and seed dispersal networks. The structures of host-parasite networks and seed dispersal networks are found to be identical. Furthermore, we use some algebraic structural measures to quantify the differences as well as the similarities observed in the structure of the three kinds of networks. We thus infer that our work suggests the absence of a universal structural pattern across these three different kinds of networks.
[ { "created": "Mon, 5 Nov 2018 05:49:03 GMT", "version": "v1" } ]
2018-11-07
[ [ "Upadhyay", "Shashankaditya", "" ], [ "Bhattacharya", "Sudeepto", "" ] ]
Ecological networks originating from three different ecological processes are examined and cross-compared to assess whether the underlying ecological processes in these systems produce considerable differences in the structure of the networks. The absence of any significant difference in structure may indicate the possibility of a universal structural pattern in these ecological networks. The underlying graphs of the networks derived from the ecological processes, namely host-parasite interaction, plant pollination, and seed dispersal, are all bipartite graphs, and thus several algebraic structural measures fail to distinguish between the structures of these networks. In this work we use the weighted spectral distribution (WSD) of the normalized graph Laplacian, which has been effectively used earlier to discriminate graphs with different topologies, to investigate the possibility of structural dissimilarity in these networks. The graph spectrum is often considered a signature of the graph, and the WSD of the graph Laplacian is known to be related to the distribution of certain small subgraphs in a graph and hence represents the global structure of a network. We use random projections of the WSD to $\mathbb{R}^{2}$ and $\mathbb{R}^{3}$ and establish that the structure of plant-pollinator networks is significantly different from that of host-parasite and seed dispersal networks. The structures of host-parasite networks and seed dispersal networks are found to be identical. Furthermore, we use some algebraic structural measures to quantify the differences as well as the similarities observed in the structure of the three kinds of networks. We thus infer that our work suggests the absence of a universal structural pattern across these three different kinds of networks.
2408.05258
Wenwen Min
Wenwen Min, Zhen Wang, Fangfang Zhu, Taosheng Xu, Shunfang Wang
scASDC: Attention Enhanced Structural Deep Clustering for Single-cell RNA-seq Data
null
null
null
null
q-bio.GN cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single-cell RNA sequencing (scRNA-seq) data analysis is pivotal for understanding cellular heterogeneity. However, the high sparsity and complex noise patterns inherent in scRNA-seq data present significant challenges for traditional clustering methods. To address these issues, we propose a deep clustering method, Attention-Enhanced Structural Deep Embedding Graph Clustering (scASDC), which integrates multiple advanced modules to improve clustering accuracy and robustness. Our approach employs a multi-layer graph convolutional network (GCN) to capture high-order structural relationships between cells, termed the graph autoencoder module. To mitigate the oversmoothing issue in GCNs, we introduce a ZINB-based autoencoder module that extracts content information from the data and learns latent representations of gene expression. These modules are further integrated through an attention fusion mechanism, ensuring effective combination of gene expression and structural information at each layer of the GCN. Additionally, a self-supervised learning module is incorporated to enhance the robustness of the learned embeddings. Extensive experiments demonstrate that scASDC outperforms existing state-of-the-art methods, providing a robust and effective solution for single-cell clustering tasks. Our method paves the way for more accurate and meaningful analysis of single-cell RNA sequencing data, contributing to a better understanding of cellular heterogeneity and biological processes. All code and public datasets used in this paper are available at \url{https://github.com/wenwenmin/scASDC} and \url{https://zenodo.org/records/12814320}.
[ { "created": "Fri, 9 Aug 2024 09:10:36 GMT", "version": "v1" } ]
2024-08-13
[ [ "Min", "Wenwen", "" ], [ "Wang", "Zhen", "" ], [ "Zhu", "Fangfang", "" ], [ "Xu", "Taosheng", "" ], [ "Wang", "Shunfang", "" ] ]
Single-cell RNA sequencing (scRNA-seq) data analysis is pivotal for understanding cellular heterogeneity. However, the high sparsity and complex noise patterns inherent in scRNA-seq data present significant challenges for traditional clustering methods. To address these issues, we propose a deep clustering method, Attention-Enhanced Structural Deep Embedding Graph Clustering (scASDC), which integrates multiple advanced modules to improve clustering accuracy and robustness. Our approach employs a multi-layer graph convolutional network (GCN) to capture high-order structural relationships between cells, termed the graph autoencoder module. To mitigate the oversmoothing issue in GCNs, we introduce a ZINB-based autoencoder module that extracts content information from the data and learns latent representations of gene expression. These modules are further integrated through an attention fusion mechanism, ensuring effective combination of gene expression and structural information at each layer of the GCN. Additionally, a self-supervised learning module is incorporated to enhance the robustness of the learned embeddings. Extensive experiments demonstrate that scASDC outperforms existing state-of-the-art methods, providing a robust and effective solution for single-cell clustering tasks. Our method paves the way for more accurate and meaningful analysis of single-cell RNA sequencing data, contributing to a better understanding of cellular heterogeneity and biological processes. All code and public datasets used in this paper are available at \url{https://github.com/wenwenmin/scASDC} and \url{https://zenodo.org/records/12814320}.
1311.2643
Mark McDonnell
Brett A. Schmerl and Mark D. McDonnell
Channel noise induced stochastic facilitation in an auditory brainstem neuron model
Published by Physical Review E, November 2013 (this version 17 pages total - 10 text, 1 refs, 6 figures/tables); Associated matlab code is available online in the ModelDB repository at http://senselab.med.yale.edu/ModelDB/ShowModel.asp?model=151483
Physical Review E, 88: 052722, 2013
10.1103/PhysRevE.88.052722
null
q-bio.NC q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuronal membrane potentials fluctuate stochastically due to conductance changes caused by random transitions between the open and closed states of ion channels. Although it has previously been shown that channel noise can nontrivially affect neuronal dynamics, it is unknown whether ion-channel noise is strong enough to act as a noise source for hypothesised noise-enhanced information processing in real neuronal systems, i.e. 'stochastic facilitation.' Here, we demonstrate that biophysical models of channel noise can give rise to two kinds of recently discovered stochastic facilitation effects in a Hodgkin-Huxley-like model of auditory brainstem neurons. The first, known as slope-based stochastic resonance (SBSR), enables phasic neurons to emit action potentials that can encode the slope of inputs that vary slowly relative to key time-constants in the model. The second, known as inverse stochastic resonance (ISR), occurs in tonically firing neurons when small levels of noise inhibit tonic firing and replace it with burst-like dynamics. Consistent with previous work, we conclude that channel noise can provide significant variability in firing dynamics, even for large numbers of channels. Moreover, our results show that possible associated computational benefits may occur due to channel noise in neurons of the auditory brainstem. This holds whether the firing dynamics in the model are phasic (SBSR can occur due to channel noise) or tonic (ISR can occur due to channel noise).
[ { "created": "Mon, 11 Nov 2013 23:00:00 GMT", "version": "v1" }, { "created": "Thu, 5 Dec 2013 11:40:54 GMT", "version": "v2" } ]
2013-12-06
[ [ "Schmerl", "Brett A.", "" ], [ "McDonnell", "Mark D.", "" ] ]
Neuronal membrane potentials fluctuate stochastically due to conductance changes caused by random transitions between the open and closed states of ion channels. Although it has previously been shown that channel noise can nontrivially affect neuronal dynamics, it is unknown whether ion-channel noise is strong enough to act as a noise source for hypothesised noise-enhanced information processing in real neuronal systems, i.e. 'stochastic facilitation.' Here, we demonstrate that biophysical models of channel noise can give rise to two kinds of recently discovered stochastic facilitation effects in a Hodgkin-Huxley-like model of auditory brainstem neurons. The first, known as slope-based stochastic resonance (SBSR), enables phasic neurons to emit action potentials that can encode the slope of inputs that vary slowly relative to key time-constants in the model. The second, known as inverse stochastic resonance (ISR), occurs in tonically firing neurons when small levels of noise inhibit tonic firing and replace it with burst-like dynamics. Consistent with previous work, we conclude that channel noise can provide significant variability in firing dynamics, even for large numbers of channels. Moreover, our results show that possible associated computational benefits may occur due to channel noise in neurons of the auditory brainstem. This holds whether the firing dynamics in the model are phasic (SBSR can occur due to channel noise) or tonic (ISR can occur due to channel noise).
2007.16151
Jason Hindes
Jason Hindes, Simone Bianco, and Ira B. Schwartz
Optimal periodic closure for minimizing risk in emerging disease outbreaks
supplementary material included in ancillary files
PLoS ONE 16(1): e0244706 (2021)
10.1371/journal.pone.0244706
null
q-bio.PE physics.bio-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Without vaccines and treatments, societies must rely on non-pharmaceutical intervention strategies to control the spread of emerging diseases such as COVID-19. Though complete lockdown is epidemiologically effective, because it eliminates infectious contacts, it comes with significant costs. Several recent studies have suggested that a plausible compromise strategy for minimizing epidemic risk is periodic closure, in which populations oscillate between widespread social restrictions and relaxation. However, no underlying theory has been proposed to predict and explain optimal closure periods as a function of epidemiological and social parameters. In this work we develop such an analytical theory for SEIR-like model diseases, showing how characteristic closure periods emerge that minimize the total outbreak, and increase predictably with the reproductive number and incubation periods of a disease, as long as both are within predictable limits. Using our approach we demonstrate a sweet-spot effect in which optimal periodic closure is maximally effective for diseases with similar incubation and recovery periods. Our results compare well to numerical simulations, including in COVID-19 models where infectivity and recovery show significant variability.
[ { "created": "Fri, 31 Jul 2020 16:10:39 GMT", "version": "v1" }, { "created": "Tue, 5 Jan 2021 15:04:53 GMT", "version": "v2" } ]
2021-01-08
[ [ "Hindes", "Jason", "" ], [ "Bianco", "Simone", "" ], [ "Schwartz", "Ira B.", "" ] ]
Without vaccines and treatments, societies must rely on non-pharmaceutical intervention strategies to control the spread of emerging diseases such as COVID-19. Though complete lockdown is epidemiologically effective, because it eliminates infectious contacts, it comes with significant costs. Several recent studies have suggested that a plausible compromise strategy for minimizing epidemic risk is periodic closure, in which populations oscillate between widespread social restrictions and relaxation. However, no underlying theory has been proposed to predict and explain optimal closure periods as a function of epidemiological and social parameters. In this work we develop such an analytical theory for SEIR-like model diseases, showing how characteristic closure periods emerge that minimize the total outbreak, and increase predictably with the reproductive number and incubation periods of a disease, as long as both are within predictable limits. Using our approach we demonstrate a sweet-spot effect in which optimal periodic closure is maximally effective for diseases with similar incubation and recovery periods. Our results compare well to numerical simulations, including in COVID-19 models where infectivity and recovery show significant variability.
1905.09104
S{\o}ren Toxvaerd
S{\o}ren Toxvaerd
A Prerequisite for Life
null
Journal of Theoretical Biology 474, 48-51, (2019)
10.1016/j.jtbi.2019.05.001
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The complex physicochemical structures and chemical reactions in living organisms share some common features: (1) The life processes take place in the cytosol of the cells, which, from a physicochemical point of view, is an emulsion of biomolecules in a dilute aqueous suspension. (2) All living systems are homochiral with respect to their units of amino acids and carbohydrates, but (some) proteins are chirally unstable in the cytosol. (3) And living organisms are mortal. Together, these three common features constitute a prerequisite for the prebiotic self-assembly at the start of abiogenesis. Here we argue that, taken together, they indicate that the prebiotic self-assembly of structures and reactions took place in a more saline environment, whereby the homochirality of proteins could not only be obtained but also preserved. A more saline environment for the prebiotic self-assembly of organic molecules and the establishment of biochemical reactions could have been the hydrothermal vents.
[ { "created": "Wed, 22 May 2019 12:34:29 GMT", "version": "v1" } ]
2019-05-23
[ [ "Toxvaerd", "Søren", "" ] ]
The complex physicochemical structures and chemical reactions in living organisms share some common features: (1) The life processes take place in the cytosol of the cells, which, from a physicochemical point of view, is an emulsion of biomolecules in a dilute aqueous suspension. (2) All living systems are homochiral with respect to their units of amino acids and carbohydrates, but (some) proteins are chirally unstable in the cytosol. (3) And living organisms are mortal. Together, these three common features constitute a prerequisite for the prebiotic self-assembly at the start of abiogenesis. Here we argue that, taken together, they indicate that the prebiotic self-assembly of structures and reactions took place in a more saline environment, whereby the homochirality of proteins could not only be obtained but also preserved. A more saline environment for the prebiotic self-assembly of organic molecules and the establishment of biochemical reactions could have been the hydrothermal vents.
1109.3670
Heather Harrington
Heather A. Harrington, Kenneth L. Ho, Thomas Thorne, and Michael P. H. Stumpf
A parameter-free model discrimination criterion based on steady-state coplanarity
13 pages, 3 figures. In press, PNAS
null
10.1073/pnas.1117073109
null
q-bio.QM math.AG math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a novel procedure for deciding when a mass-action model is incompatible with observed steady-state data that does not require any parameter estimation. Thus, we avoid the difficulties of nonlinear optimization typically associated with methods based on parameter fitting. The key idea is to use the model equations to construct a transformation of the original variables such that any set of steady states of the model under that transformation lies on a common plane, irrespective of the values of the model parameters. Model rejection can then be performed by assessing the degree to which the transformed data deviate from coplanarity. We demonstrate our method by applying it to models of multisite phosphorylation and cell death signaling. Although somewhat limited at present, our work provides an important first step towards a parameter-free framework for data-driven model selection.
[ { "created": "Fri, 16 Sep 2011 17:27:57 GMT", "version": "v1" }, { "created": "Wed, 12 Oct 2011 13:34:01 GMT", "version": "v2" }, { "created": "Thu, 16 Aug 2012 14:40:24 GMT", "version": "v3" } ]
2015-05-30
[ [ "Harrington", "Heather A.", "" ], [ "Ho", "Kenneth L.", "" ], [ "Thorne", "Thomas", "" ], [ "Stumpf", "Michael P. H.", "" ] ]
We describe a novel procedure for deciding when a mass-action model is incompatible with observed steady-state data that does not require any parameter estimation. Thus, we avoid the difficulties of nonlinear optimization typically associated with methods based on parameter fitting. The key idea is to use the model equations to construct a transformation of the original variables such that any set of steady states of the model under that transformation lies on a common plane, irrespective of the values of the model parameters. Model rejection can then be performed by assessing the degree to which the transformed data deviate from coplanarity. We demonstrate our method by applying it to models of multisite phosphorylation and cell death signaling. Although somewhat limited at present, our work provides an important first step towards a parameter-free framework for data-driven model selection.
1904.06514
Fabrizio Pucci Dr.
Fabrizio Pucci, Alexander Schug
Shedding light on the dark matter of the biomolecular structural universe: Progress in RNA 3D structure prediction
14 pages, 4 figures
null
null
null
q-bio.MN physics.bio-ph q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Structured RNA plays many functionally relevant roles in molecular life. Structural information, while required to understand the functional cycles in detail, is challenging to gather. Computational methods promise to complement experimental efforts by predicting three-dimensional RNA models. Here, we provide a concise view of the state-of-the-art methodologies with a focus on the strengths and the weaknesses of the different approaches. Furthermore, we analyze the recent developments regarding the use of coevolutionary information and how it can boost prediction performance. We finally discuss some open perspectives and challenges for the near future in the RNA structural stability field.
[ { "created": "Sat, 13 Apr 2019 09:58:49 GMT", "version": "v1" } ]
2019-04-16
[ [ "Pucci", "Fabrizio", "" ], [ "Schug", "Alexander", "" ] ]
Structured RNA plays many functionally relevant roles in molecular life. Structural information, while required to understand the functional cycles in detail, is challenging to gather. Computational methods promise to complement experimental efforts by predicting three-dimensional RNA models. Here, we provide a concise view of the state-of-the-art methodologies with a focus on the strengths and the weaknesses of the different approaches. Furthermore, we analyze the recent developments regarding the use of coevolutionary information and how it can boost prediction performance. We finally discuss some open perspectives and challenges for the near future in the RNA structural stability field.
1803.10996
Hyeongki Kim
Hyeongki Kim
Dihedral angle prediction using generative adversarial networks
72 pages, MSc thesis under the supervision of Assoc. Prof. Thomas Hamelryck and Asst. Prof. Wouter Boomsma
null
null
null
q-bio.BM cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Several dihedral angle prediction methods have been developed for protein structure prediction and other applications. However, the distribution of predicted angles is often not similar to that of real angles. To address this we employed generative adversarial networks (GANs). Generative adversarial networks are composed of two adversarially trained networks: a discriminator and a generator. The discriminator distinguishes samples from a dataset from generated samples, while the generator generates realistic samples. Although the discriminator of a GAN is trained to estimate density, the GAN model is intractable. On the other hand, noise-contrastive estimation (NCE) was introduced to estimate the normalization constant of an unnormalized statistical model and thus its density function. In this thesis, we introduce noise-contrastive estimation generative adversarial networks (NCE-GAN), which enable explicit density estimation in a GAN model, and we propose a new loss for the generator. We also propose residue-wise variants of the auxiliary classifier GAN (AC-GAN) and semi-supervised GAN to handle sequence information in a window. In our experiments, the conditional generative adversarial network (C-GAN), AC-GAN, and semi-supervised GAN were compared, and experiments under improved conditions were investigated. We identified a phenomenon of the AC-GAN in which the distribution of its predicted angles is composed of unusual clusters. The distribution of the predicted angles of the semi-supervised GAN was most similar to the Ramachandran plot. We found that adding the output of the NCE as an additional input to the discriminator helps stabilize the training of the GANs and capture the detailed structures. Adding a regression loss, and using angles predicted by a regression-loss-only model, could improve the conditional generation performance of the C-GAN and AC-GAN.
[ { "created": "Thu, 29 Mar 2018 10:02:14 GMT", "version": "v1" } ]
2018-03-30
[ [ "Kim", "Hyeongki", "" ] ]
Several dihedral angle prediction methods have been developed for protein structure prediction and other applications. However, the distribution of predicted angles is often not similar to that of real angles. To address this we employed generative adversarial networks (GANs). Generative adversarial networks are composed of two adversarially trained networks: a discriminator and a generator. The discriminator distinguishes samples from a dataset from generated samples, while the generator generates realistic samples. Although the discriminator of a GAN is trained to estimate density, the GAN model is intractable. On the other hand, noise-contrastive estimation (NCE) was introduced to estimate the normalization constant of an unnormalized statistical model and thus its density function. In this thesis, we introduce noise-contrastive estimation generative adversarial networks (NCE-GAN), which enable explicit density estimation in a GAN model, and we propose a new loss for the generator. We also propose residue-wise variants of the auxiliary classifier GAN (AC-GAN) and semi-supervised GAN to handle sequence information in a window. In our experiments, the conditional generative adversarial network (C-GAN), AC-GAN, and semi-supervised GAN were compared, and experiments under improved conditions were investigated. We identified a phenomenon of the AC-GAN in which the distribution of its predicted angles is composed of unusual clusters. The distribution of the predicted angles of the semi-supervised GAN was most similar to the Ramachandran plot. We found that adding the output of the NCE as an additional input to the discriminator helps stabilize the training of the GANs and capture the detailed structures. Adding a regression loss, and using angles predicted by a regression-loss-only model, could improve the conditional generation performance of the C-GAN and AC-GAN.
q-bio/0603017
Eugene Shakhnovich
D.B. Lyjatsky and E.I.Shakhnovich
Enhanced self-attraction of proteins and its evolutionary implications
null
null
null
null
q-bio.BM q-bio.MN
null
Statistical analysis of protein-protein interactions shows anomalously high frequency of homodimers [Ispolatov, I., et al. (2005) Nucleic Acids Res 33, 3629-35]. Furthermore, recent findings [Wright, C.F., et al. (2005) Nature 438, 878-81] demonstrate that maintaining low sequence identity is a key evolutionary mechanism that inhibits protein aggregation. Here, we study statistical properties of interacting protein-like surfaces and predict the effect of universal, enhanced self-attraction of proteins. The effect originates in the fact that a pattern self-match between two identical, even randomly organized interacting protein surfaces is always stronger compared to the pattern match between two different, promiscuous protein surfaces. This finding implies an increased probability of homodimer selection in the course of early evolution. Our simple model of early evolutionary selection of interacting proteins accurately reproduces the experimental data on homodimer interface amino acid compositions. In addition, we predict that heterodimers evolved from homodimers with the negative design evolutionary pressure applied against promiscuous homodimer formation. We predict that the anti-homodimer negative design evolutionary signal is conveyed through the enrichment of heterodimeric interfaces in polar residues, and most profoundly in glutamic acid and lysine, which is consistent with experimental findings. We predict therefore that the negative design against homodimers is the
[ { "created": "Thu, 16 Mar 2006 18:42:27 GMT", "version": "v1" } ]
2007-05-23
[ [ "Lyjatsky", "D. B.", "" ], [ "Shakhnovich", "E. I.", "" ] ]
Statistical analysis of protein-protein interactions shows anomalously high frequency of homodimers [Ispolatov, I., et al. (2005) Nucleic Acids Res 33, 3629-35]. Furthermore, recent findings [Wright, C.F., et al. (2005) Nature 438, 878-81] demonstrate that maintaining low sequence identity is a key evolutionary mechanism that inhibits protein aggregation. Here, we study statistical properties of interacting protein-like surfaces and predict the effect of universal, enhanced self-attraction of proteins. The effect originates in the fact that a pattern self-match between two identical, even randomly organized interacting protein surfaces is always stronger compared to the pattern match between two different, promiscuous protein surfaces. This finding implies an increased probability of homodimer selection in the course of early evolution. Our simple model of early evolutionary selection of interacting proteins accurately reproduces the experimental data on homodimer interface amino acid compositions. In addition, we predict that heterodimers evolved from homodimers with the negative design evolutionary pressure applied against promiscuous homodimer formation. We predict that the anti-homodimer negative design evolutionary signal is conveyed through the enrichment of heterodimeric interfaces in polar residues, and most profoundly in glutamic acid and lysine, which is consistent with experimental findings. We predict therefore that the negative design against homodimers is the
1312.5582
Andrew Dhawan
A. Dhawan, M. Kohandel, R. P. Hill, S. Sivaloganathan
Tumour Control Probability in Cancer Stem Cells Hypothesis
18 pages, 5 figures
null
10.1371/journal.pone.0096093
null
q-bio.QM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The tumour control probability (TCP) is a formalism derived to compare various treatment regimens of radiation therapy, defined as the probability that, given a prescribed dose of radiation, a tumour has been eradicated or controlled. In the traditional view of cancer, all cells share the ability to divide without limit and thus have the potential to generate a malignant tumour. However, an emerging notion is that only a sub-population of cells, the so-called cancer stem cells (CSCs), are responsible for the initiation and maintenance of the tumour. A key implication of the CSC hypothesis is that these cells must be eradicated to achieve cures; thus we define TCP_S as the probability of eradicating CSCs for a given dose of radiation. A cell surface protein expression profile, such as CD44high/CD24low for breast cancer, is often used as a biomarker to monitor CSC enrichment. However, it is increasingly recognized that not all cells bearing this expression profile are necessarily CSCs; in particular, early generations of progenitor cells may share the same phenotype. Thus, due to the lack of a perfect biomarker for CSCs, we also define a novel measurable quantity, TCP_CD+, the probability of eliminating or controlling biomarker-positive cells. Based on these definitions, we use stochastic methods and numerical simulations to compare the theoretical TCP_S and the measurable TCP_CD+. We also use the measurable TCP to compare the effect of various radiation protocols.
[ { "created": "Thu, 19 Dec 2013 15:24:21 GMT", "version": "v1" }, { "created": "Thu, 2 Jan 2014 19:54:20 GMT", "version": "v2" } ]
2015-06-18
[ [ "Dhawan", "A.", "" ], [ "Kohandel", "M.", "" ], [ "Hill", "R. P.", "" ], [ "Sivaloganathan", "S.", "" ] ]
2010.11641
Dmitry Ignatov
Dmitry I. Ignatov and Gennady V. Khvorykh and Andrey V. Khrunin and Stefan Nikoli\'c and Makhmud Shaban and Elizaveta A. Petrova and Evgeniya A. Koltsova and Fouzi Takelait and Dmitrii Egurnov
Object-Attribute Biclustering for Elimination of Missing Genotypes in Ischemic Stroke Genome-Wide Data
Accepted to AIST 2020
AIST 2020 (CCIS series)
null
null
q-bio.GN cs.LG stat.AP
http://creativecommons.org/licenses/by/4.0/
Missing genotypes can affect the efficacy of machine learning approaches to identify the risk genetic variants of common diseases and traits. The problem occurs when genotypic data are collected from different experiments with different DNA microarrays, each being characterised by its own pattern of uncalled (missing) genotypes. This can prevent the machine learning classifier from assigning the classes correctly. To tackle this issue, we used well-developed notions of object-attribute biclusters and formal concepts that correspond to dense subrelations in the binary relation $\textit{patients} \times \textit{SNPs}$. The paper contains experimental results on applying a biclustering algorithm to a large real-world dataset collected for studying the genetic bases of ischemic stroke. The algorithm could identify large dense biclusters in the genotypic matrix for further processing, which in turn significantly improved the quality of machine learning classifiers. The proposed algorithm was also able to generate biclusters for the whole dataset without size constraints, in contrast to the In-Close4 algorithm for generation of formal concepts.
[ { "created": "Thu, 22 Oct 2020 12:27:43 GMT", "version": "v1" }, { "created": "Sun, 25 Oct 2020 10:29:44 GMT", "version": "v2" } ]
2020-10-27
[ [ "Ignatov", "Dmitry I.", "" ], [ "Khvorykh", "Gennady V.", "" ], [ "Khrunin", "Andrey V.", "" ], [ "Nikolić", "Stefan", "" ], [ "Shaban", "Makhmud", "" ], [ "Petrova", "Elizaveta A.", "" ], [ "Koltsova", "Evgeniya A.", "" ], [ "Takelait", "Fouzi", "" ], [ "Egurnov", "Dmitrii", "" ] ]
2005.01356
Didier Pinault
Didier Pinault (FMTS)
A single psychotomimetic dose of ketamine decreases thalamocortical spindles and delta oscillations in the sedated rat
Schizophrenia Research, Elsevier, In press
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: In patients with psychotic disorders, sleep spindles are reduced, supporting the hypothesis that the thalamus and glutamate receptors play a crucial etio-pathophysiological role, whose underlying mechanisms remain unknown. We hypothesized that a reduced function of NMDA receptors is involved in the spindle deficit observed in schizophrenia. Methods: An electrophysiological multisite cell-to-network exploration was used to investigate, in pentobarbital-sedated rats, the effects of a single psychotomimetic dose of the NMDA glutamate receptor antagonist ketamine in the sensorimotor and associative/cognitive thalamocortical (TC) systems. Results: Under the control condition, spontaneously occurring spindles (intra-spindle frequency: 10-16 waves/s) and delta-frequency (1-4Hz) oscillations were recorded in the frontoparietal cortical EEG, in thalamic extracellular recordings, in dual juxtacellularly recorded GABAergic thalamic reticular nucleus (TRN) and glutamatergic TC neurons, and in intracellularly recorded TC neurons. The TRN cells rhythmically exhibited robust high-frequency bursts of action potentials (7 to 15 APs at 200-700Hz). A single administration of low-dose ketamine transiently reduced TC spindles and delta oscillations, amplified ongoing gamma-(30-80Hz) and higher-frequency oscillations, and switched the firing pattern of both TC and TRN neurons from a burst mode to a single AP mode. Furthermore, ketamine strengthened the gamma-frequency band TRN-TC connectivity. The antipsychotic clozapine consistently prevented the ketamine effects on spindles, delta- and gamma-/higher-frequency TC oscillations. Conclusion: The present findings support the hypothesis that NMDA receptor hypofunction is involved in the reduction in sleep spindles and delta oscillations. The ketamine-induced swift conversion of ongoing TC-TRN activities may have involved at least both the ascending reticular activating system and the corticothalamic pathway.
[ { "created": "Mon, 4 May 2020 09:57:47 GMT", "version": "v1" } ]
2020-05-05
[ [ "Pinault", "Didier", "", "FMTS" ] ]
0704.2132
Roberto Chignola
C. Tomelleri, E. Milotti, C. Dalla Pellegrina, O. Perbellini, A. Del Fabbro, M. T. Scupoli and R. Chignola
A quantitative study on the growth variability of tumour cell clones in vitro
31 pages, 5 figures
null
null
null
q-bio.CB q-bio.QM
null
Objectives: In this study, we quantify the growth variability of tumour cell clones from a human leukemia cell line. Materials and methods: We have used microplate spectrophotometry to measure the growth kinetics of hundreds of individual cell clones from the Molt3 cell line. The growth rate of each clonal population has been estimated by fitting experimental data with the logistic equation. Results: The growth rates were observed to vary among different clones. Up to six clones with a growth rate above or below the mean growth rate of the parent population were further cloned and the growth rates of their offspring were measured. The distribution of the growth rates of the subclones did not significantly differ from that of the parent population, suggesting that growth variability has an epigenetic origin. To explain the observed distributions of clonal growth rates we have developed a probabilistic model assuming that the fluctuations in the number of mitochondria through successive cell cycles are the leading cause of growth variability. For fitting purposes, we have estimated experimentally by flow cytometry the maximum average number of mitochondria in Molt3 cells. The model fits the observed distributions of growth rates well; however, cells in which the mitochondria were rendered non-functional (rho-0 cells) showed only a 30% reduction in the clonal growth variability with respect to normal cells. Conclusions: A tumour cell population is a dynamic ensemble of clones with highly variable growth rates. At least part of this variability is due to fluctuations in the number of mitochondria.
[ { "created": "Tue, 17 Apr 2007 10:30:56 GMT", "version": "v1" } ]
2007-05-23
[ [ "Tomelleri", "C.", "" ], [ "Milotti", "E.", "" ], [ "Pellegrina", "C. Dalla", "" ], [ "Perbellini", "O.", "" ], [ "Del Fabbro", "A.", "" ], [ "Scupoli", "M. T.", "" ], [ "Chignola", "R.", "" ] ]
2309.16498
Yi Jiang
Xiuxiu He, Kuangcai Chen, Ning Fang, Yi Jiang
Coordinates in low-dimensional cell shape-space discriminate migration dynamics from single static cell images
29 pages, 9 figures
null
null
null
q-bio.CB physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Cell shape has long been used to discern cell phenotypes and states, but the underlying premise has not been quantitatively tested. Here, we show that a single cell image can be used to discriminate its migration behavior by analyzing a large number of in vitro cell migration data. We analyzed a large number of two-dimensional cell migration images over time and found that the cell shape variation space has only six dimensions, and migration behavior can be determined by the coordinates of a single cell image in this 6-dimensional shape-space. We further show that this is possible because persistent cell migration is characterized by spatiotemporally coordinated protrusion and contraction, and a distribution signature in the shape-space. Our findings provide a quantitative underpinning for using cell morphology to differentiate cell dynamical behavior.
[ { "created": "Thu, 28 Sep 2023 15:06:36 GMT", "version": "v1" } ]
2023-09-29
[ [ "He", "Xiuxiu", "" ], [ "Chen", "Kuangcai", "" ], [ "Fang", "Ning", "" ], [ "Jiang", "Yi", "" ] ]
2009.01083
Alvaro Pastor
Alvaro Pastor
Memory systems of the brain
36 pages, 4 figures, draft
null
10.31219/OSF.IO/W6KN9
null
q-bio.NC
http://creativecommons.org/licenses/by-sa/4.0/
Humans have long been fascinated by how memories are formed, how they can be damaged or lost, or still seem vibrant after many years. Thus the search for the locus and organization of memory has had a long history, in which the notion that it is composed of distinct systems developed during the second half of the 20th century. A fundamental dichotomy between conscious and unconscious memory processes was first drawn based on evidence from the study of amnesic subjects and systematic experimental work with animals. The use of behavioral and neural measures together with imaging techniques has progressively led researchers to agree on the existence of a variety of neural architectures that support multiple memory systems. This article presents a historical lens with which to contextualize these ideas on memory systems, and provides a current account of the multiple memory systems model.
[ { "created": "Tue, 1 Sep 2020 11:59:17 GMT", "version": "v1" } ]
2020-09-03
[ [ "Pastor", "Alvaro", "" ] ]
1603.02007
Christian Scheppach
Christian Scheppach, Hugh P.C. Robinson
Fluctuation analysis in nonstationary conditions: single Ca channel current in cortical pyramidal neurons
null
Biophys. J., 5th Dec. 2017, 113(11):2383-2395
10.1016/j.bpj.2017.09.025
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fluctuation analysis is a method which allows measurement of the single channel current of ion channels even when it is too small to be resolved directly with the patch clamp technique. This is the case for voltage-gated Ca2+ channels (VGCCs). They are present in all mammalian central neurons, controlling presynaptic release of transmitter, postsynaptic signaling and synaptic integration. The amplitudes of their single channel currents in a physiological concentration of extracellular Ca2+, however, are small and not well determined. But measurement of this quantity is essential for estimating numbers of functional VGCCs in the membrane and the size of channel-associated Ca2+ signaling domains, and for understanding the stochastic nature of Ca2+ signaling. Here, we recorded the VGCC current in nucleated patches from layer 5 pyramidal neurons in rat neocortex, in physiological external Ca2+ (1-2 mM). The ensemble-averaging of current responses required for conventional fluctuation analysis proved impractical because of the rapid rundown of VGCC currents. We therefore developed a more robust method, using mean current fitting of individual current responses and band-pass filtering. Furthermore, voltage ramp stimulation proved useful. We validated the accuracy of the method by analyzing simulated data. At an external Ca2+ concentration of 1 mM, and a membrane potential of -20 mV, we found that the average single channel current amplitude was about 0.04 pA, increasing to 0.065 pA at 2 mM external Ca2+, and 0.12 pA at 5 mM. The relaxation time constant of the fluctuations was in the range 0.2-0.8 ms. The results are relevant to understanding the stochastic properties of dendritic Ca2+ spikes in neocortical layer 5 pyramidal neurons. With the reported method, single channel current amplitude of native VGCCs can be resolved accurately despite conditions of unstable rundown.
[ { "created": "Mon, 7 Mar 2016 11:14:19 GMT", "version": "v1" }, { "created": "Wed, 8 Jun 2016 09:49:36 GMT", "version": "v2" }, { "created": "Tue, 29 Aug 2017 12:55:07 GMT", "version": "v3" } ]
2017-12-13
[ [ "Scheppach", "Christian", "" ], [ "Robinson", "Hugh P. C.", "" ] ]
1407.4116
Min-Sheng Peng MSP
Ni-Ni Shi, Long Fan, Yong-Gang Yao, Min-Sheng Peng, Ya-Ping Zhang
Mitochondrial Genomes of Domestic Animals Need Scrutiny
129 Pages, 1 figure, 3 tables, and 5 supplementary materials
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
More than 1000 complete or near-complete mitochondrial DNA (mtDNA) sequences have been deposited in GenBank for eight common domestic animals (i.e. cattle, dog, goat, horse, pig, sheep, yak and chicken) and their close wild ancestors or relatives. Nevertheless, few efforts have been made to evaluate the quality of these sequence data, which heavily impacts the original conclusions. Herein, we conducted a phylogenetic survey of these complete or near-complete mtDNA sequences based on mtDNA haplogroup trees for the eight animals. We show that errors due to artificial recombination, surplus of mutations, and phantom mutations exist in 14.5% (194/1342) of mtDNA sequences and should be treated with caution. We propose some caveats for mtDNA studies of domestic animals in the future.
[ { "created": "Wed, 16 Jul 2014 01:18:22 GMT", "version": "v1" } ]
2014-07-17
[ [ "Shi", "Ni-Ni", "" ], [ "Fan", "Long", "" ], [ "Yao", "Yong-Gang", "" ], [ "Peng", "Min-Sheng", "" ], [ "Zhang", "Ya-Ping", "" ] ]
2105.13722
Archishman Raju
David A. Rand, Archishman Raju, Meritxell Saez, Francis Corson, and Eric D. Siggia
Geometry of Gene Regulatory Dynamics
null
null
10.1073/pnas.2109729118
null
q-bio.QM math.DS physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Embryonic development leads to the reproducible and ordered appearance of complexity from egg to adult. The successive differentiation of cell types that elaborates this complexity results from the activity of gene networks and was likened by Waddington to a flow through a landscape in which valleys represent alternative fates. Geometric methods allow the formal representation of such landscapes and codify the types of behaviors that result from systems of differential equations. Results from Smale and coworkers imply that systems encompassing gene network models can be represented as potential gradients with a Riemann metric, justifying the Waddington metaphor. Here, we extend this representation to include parameter dependence and enumerate all 3-way cellular decisions realisable by tuning at most two parameters, which can be generalized to include spatial coordinates in a tissue. All diagrams of cell states vs model parameters are thereby enumerated. We unify a number of standard models for spatial pattern formation by expressing them in potential form. Turing systems appear non-potential, yet in suitable variables the dynamics are low-dimensional and potential, and a time-independent embedding recovers the biological variables. Lateral inhibition is described by a saddle point with many unstable directions. A model for the patterning of the Drosophila eye appears as relaxation in a bistable potential. Geometric reasoning provides intuitive dynamic models for development that are well adapted to fit time-lapse data.
[ { "created": "Fri, 28 May 2021 10:40:39 GMT", "version": "v1" } ]
2022-05-11
[ [ "Rand", "David A.", "" ], [ "Raju", "Archishman", "" ], [ "Saez", "Meritxell", "" ], [ "Corson", "Francis", "" ], [ "Siggia", "Eric D.", "" ] ]
1907.03755
Ahmed BaniMustafa
Ahmed BaniMustafa and Nigel Hardy
Applications of a Novel Knowledge Discovery and Data Mining Process Model for Metabolomics
references information updated
null
null
null
q-bio.QM cs.DB cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work demonstrates the execution of a novel process model for knowledge discovery and data mining for metabolomics (MeKDDaM). It aims to illustrate the applicability of the MeKDDaM process model using four different real-world applications and to highlight its strengths and unique features. The applications were selected to achieve a range of analytical goals and research questions, and provide coverage for metabolite profiling, target analysis, and metabolic fingerprinting. The data analysed in these applications were captured by liquid chromatography-mass spectrometry (LC-MS), Fourier transform infrared spectroscopy (FT-IR), and nuclear magnetic resonance spectroscopy (NMR), and involve the analysis of plant, animal, and human samples. The process was executed using both data-driven and hypothesis-driven data mining approaches in order to perform various data mining goals and tasks by applying a number of data mining techniques. The process was applied using an implementation environment created to provide a computer-aided realisation of the process model execution.
[ { "created": "Tue, 9 Jul 2019 01:14:55 GMT", "version": "v1" }, { "created": "Tue, 30 Jul 2019 07:57:31 GMT", "version": "v2" } ]
2019-07-31
[ [ "BaniMustafa", "Ahmed", "" ], [ "Hardy", "Nigel", "" ] ]
0705.0666
Tom Michoel
Tom Michoel, Steven Maere, Eric Bonnet, Anagha Joshi, Yvan Saeys, Tim Van den Bulcke, Koenraad Van Leemput, Piet van Remortel, Martin Kuiper, Kathleen Marchal, Yves Van de Peer
Validating module network learning algorithms using simulated data
13 pages, 6 figures + 2 pages, 2 figures supplementary information
BMC Bioinformatics 2007, 8(Suppl 2):S5
10.1186/1471-2105-8-S2-S5
null
q-bio.QM q-bio.MN
null
In recent years, several authors have used probabilistic graphical models to learn expression modules and their regulatory programs from gene expression data. Here, we demonstrate the use of the synthetic data generator SynTReN for the purpose of testing and comparing module network learning algorithms. We introduce a software package for learning module networks, called LeMoNe, which incorporates a novel strategy for learning regulatory programs. Novelties include the use of a bottom-up Bayesian hierarchical clustering to construct the regulatory programs, and the use of a conditional entropy measure to assign regulators to the regulation program nodes. Using SynTReN data, we test the performance of LeMoNe in a completely controlled situation and assess the effect of the methodological changes we made with respect to an existing software package, namely Genomica. Additionally, we assess the effect of various parameters, such as the size of the data set and the amount of noise, on the inference performance. Overall, application of Genomica and LeMoNe to simulated data sets gave comparable results. However, LeMoNe offers some advantages, one of them being that the learning process is considerably faster for larger data sets. Additionally, we show that the location of the regulators in the LeMoNe regulation programs and their conditional entropy may be used to prioritize regulators for functional validation, and that the combination of the bottom-up clustering strategy with the conditional entropy-based assignment of regulators improves the handling of missing or hidden regulators.
[ { "created": "Fri, 4 May 2007 16:18:59 GMT", "version": "v1" } ]
2007-11-15
[ [ "Michoel", "Tom", "" ], [ "Maere", "Steven", "" ], [ "Bonnet", "Eric", "" ], [ "Joshi", "Anagha", "" ], [ "Saeys", "Yvan", "" ], [ "Bulcke", "Tim Van den", "" ], [ "Van Leemput", "Koenraad", "" ], [ "van Remortel", "Piet", "" ], [ "Kuiper", "Martin", "" ], [ "Marchal", "Kathleen", "" ], [ "Van de Peer", "Yves", "" ] ]
In recent years, several authors have used probabilistic graphical models to learn expression modules and their regulatory programs from gene expression data. Here, we demonstrate the use of the synthetic data generator SynTReN for the purpose of testing and comparing module network learning algorithms. We introduce a software package for learning module networks, called LeMoNe, which incorporates a novel strategy for learning regulatory programs. Novelties include the use of a bottom-up Bayesian hierarchical clustering to construct the regulatory programs, and the use of a conditional entropy measure to assign regulators to the regulation program nodes. Using SynTReN data, we test the performance of LeMoNe in a completely controlled situation and assess the effect of the methodological changes we made with respect to an existing software package, namely Genomica. Additionally, we assess the effect of various parameters, such as the size of the data set and the amount of noise, on the inference performance. Overall, application of Genomica and LeMoNe to simulated data sets gave comparable results. However, LeMoNe offers some advantages, one of them being that the learning process is considerably faster for larger data sets. Additionally, we show that the location of the regulators in the LeMoNe regulation programs and their conditional entropy may be used to prioritize regulators for functional validation, and that the combination of the bottom-up clustering strategy with the conditional entropy-based assignment of regulators improves the handling of missing or hidden regulators.
1605.05291
Jing Wu
Jing Wu, Benjamin R. Shuman, Bingni W. Brunton, Katherine M. Steele, Jared D. Olson, Rajesh P.N. Rao, Jeffrey G. Ojemann
Multistep Model for Predicting Upper-Limb 3D Isometric Force Application from Pre-Movement Electrocorticographic Features
4 pages, 3 figures, accepted to EMBC 2016 (38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society)
null
10.1109/EMBC.2016.7591010
null
q-bio.NC cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural correlates of movement planning onset and direction may be present in human electrocorticography in the signal dynamics of both motor and non-motor cortical regions. We use a three-stage model of jPCA reduced-rank hidden Markov model (jPCA-RR-HMM), regularized shrunken-centroid discriminant analysis (RDA), and LASSO regression to extract direction-sensitive planning information and movement onset in an upper-limb 3D isometric force task in a human subject. This model achieves a relatively high true positive force-onset prediction rate of 60% within 250 ms, and an above-chance 36% accuracy (17% chance) in predicting one of six planned 3D directions of isometric force using pre-movement signals. We also find direction-distinguishing information up to 400 ms before force onset in the pre-movement signals, captured by electrodes placed over the limb-ipsilateral dorsal premotor regions. This approach can contribute to more accurate decoding of higher-level movement goals, at earlier timescales, and inform sensor placement. Our results also contribute to further understanding of the spatiotemporal features of human motor planning.
[ { "created": "Tue, 17 May 2016 19:14:03 GMT", "version": "v1" } ]
2016-10-26
[ [ "Wu", "Jing", "" ], [ "Shuman", "Benjamin R.", "" ], [ "Brunton", "Bingni W.", "" ], [ "Steele", "Katherine M.", "" ], [ "Olson", "Jared D.", "" ], [ "Rao", "Rajesh P. N.", "" ], [ "Ojemann", "Jeffrey G.", "" ] ]
Neural correlates of movement planning onset and direction may be present in human electrocorticography in the signal dynamics of both motor and non-motor cortical regions. We use a three-stage model of jPCA reduced-rank hidden Markov model (jPCA-RR-HMM), regularized shrunken-centroid discriminant analysis (RDA), and LASSO regression to extract direction-sensitive planning information and movement onset in an upper-limb 3D isometric force task in a human subject. This model achieves a relatively high true positive force-onset prediction rate of 60% within 250 ms, and an above-chance 36% accuracy (17% chance) in predicting one of six planned 3D directions of isometric force using pre-movement signals. We also find direction-distinguishing information up to 400 ms before force onset in the pre-movement signals, captured by electrodes placed over the limb-ipsilateral dorsal premotor regions. This approach can contribute to more accurate decoding of higher-level movement goals, at earlier timescales, and inform sensor placement. Our results also contribute to further understanding of the spatiotemporal features of human motor planning.
1801.01651
Leonardo L. Gollo
Penelope Kale, Andrew Zalesky, Leonardo L. Gollo
Estimating the impact of structural directionality: How reliable are undirected connectomes?
29 pages, 6 figures, 9 supplementary figures, 4 supplementary tables
Network Neuroscience (2018)
10.1162/NETN_a_00040
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Directionality is a fundamental feature of network connections. Most structural brain networks are intrinsically directed because of the nature of chemical synapses, which comprise most neuronal connections. Due to limitations of non-invasive imaging techniques, the directionality of connections between structurally connected regions of the human brain cannot be confirmed. Hence, connections are represented as undirected, and it is still unknown how this lack of directionality affects brain network topology. Using six directed brain networks from different species and parcellations (cat, mouse, C. elegans, and three macaque networks), we estimate the inaccuracies in network measures (degree, betweenness, clustering coefficient, path length, global efficiency, participation index, and small worldness) associated with the removal of the directionality of connections. We employ three different methods to render directed brain networks undirected: (i) remove uni-directional connections, (ii) add reciprocal connections, and (iii) combine equal numbers of removed and added uni-directional connections. We quantify the extent of inaccuracy in network measures introduced through neglecting connection directionality for individual nodes and across the network. We find that the coarse division between core and peripheral nodes remains accurate for undirected networks. However, hub nodes differ considerably when directionality is neglected. Comparing the different methods to generate undirected networks from directed ones, we generally find that the addition of reciprocal connections (false positives) causes larger errors in graph-theoretic measures than the removal of the same number of directed connections (false negatives). These findings suggest that directionality plays an essential role in shaping brain networks and highlight some limitations of undirected connectomes.
[ { "created": "Fri, 5 Jan 2018 07:27:47 GMT", "version": "v1" } ]
2018-01-19
[ [ "Kale", "Penelope", "" ], [ "Zalesky", "Andrew", "" ], [ "Gollo", "Leonardo L.", "" ] ]
Directionality is a fundamental feature of network connections. Most structural brain networks are intrinsically directed because of the nature of chemical synapses, which comprise most neuronal connections. Due to limitations of non-invasive imaging techniques, the directionality of connections between structurally connected regions of the human brain cannot be confirmed. Hence, connections are represented as undirected, and it is still unknown how this lack of directionality affects brain network topology. Using six directed brain networks from different species and parcellations (cat, mouse, C. elegans, and three macaque networks), we estimate the inaccuracies in network measures (degree, betweenness, clustering coefficient, path length, global efficiency, participation index, and small worldness) associated with the removal of the directionality of connections. We employ three different methods to render directed brain networks undirected: (i) remove uni-directional connections, (ii) add reciprocal connections, and (iii) combine equal numbers of removed and added uni-directional connections. We quantify the extent of inaccuracy in network measures introduced through neglecting connection directionality for individual nodes and across the network. We find that the coarse division between core and peripheral nodes remains accurate for undirected networks. However, hub nodes differ considerably when directionality is neglected. Comparing the different methods to generate undirected networks from directed ones, we generally find that the addition of reciprocal connections (false positives) causes larger errors in graph-theoretic measures than the removal of the same number of directed connections (false negatives). These findings suggest that directionality plays an essential role in shaping brain networks and highlight some limitations of undirected connectomes.
1402.0632
Binay Panda
Saurabh Gupta, Sanjoy Chaudhury and Binay Panda
MUSIC: A Hybrid Computing Environment for Burrows-Wheeler Alignment for Massive Amount of Short Read Sequence Data
4 Pages, 1 Table, 4 Figures, Accepted in MECBME, 2014 for presentation, To be indexed in IEEExPlore
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-throughput DNA sequencers are becoming indispensable in our understanding of diseases at the molecular level, in marker-assisted selection in agriculture, and in microbial genetics research. These sequencing instruments produce enormous amounts of data (often terabytes of raw data in a month) that require efficient analysis, management, and interpretation. The commonly used sequencing instruments today produce billions of short reads (up to 150 bases) from each run. The first step in data analysis is alignment of these short reads to the reference genome of choice. Different open source algorithms are available for sequence alignment to the reference genome. These tools normally have a high computational overhead, both in terms of number of processors and memory. Here, we propose a hybrid-computing environment called MUSIC (Mapping USIng hybrid Computing) for one of the most popular open source sequence alignment algorithms, BWA, using accelerators that show significant improvement in speed over the serial code.
[ { "created": "Tue, 4 Feb 2014 06:32:42 GMT", "version": "v1" } ]
2014-02-05
[ [ "Gupta", "Saurabh", "" ], [ "Chaudhury", "Sanjoy", "" ], [ "Panda", "Binay", "" ] ]
High-throughput DNA sequencers are becoming indispensable in our understanding of diseases at the molecular level, in marker-assisted selection in agriculture, and in microbial genetics research. These sequencing instruments produce enormous amounts of data (often terabytes of raw data in a month) that require efficient analysis, management, and interpretation. The commonly used sequencing instruments today produce billions of short reads (up to 150 bases) from each run. The first step in data analysis is alignment of these short reads to the reference genome of choice. Different open source algorithms are available for sequence alignment to the reference genome. These tools normally have a high computational overhead, both in terms of number of processors and memory. Here, we propose a hybrid-computing environment called MUSIC (Mapping USIng hybrid Computing) for one of the most popular open source sequence alignment algorithms, BWA, using accelerators that show significant improvement in speed over the serial code.
1809.00895
Marta Diaz Ms
Marta Diaz-delCastillo, Soren H. Christiansen, Camilla K. Appel, Sarah Falk, David P. D. Woldbye and Anne-Marie Heegaard
Neuropeptide Y is up-regulated and induces antinociception in cancer-induced bone pain
23 pages, 4 figures
Neuroscience. 2018 Aug 1;384:111-119. Epub 2018 May 29. PMID: 29852245
10.1016/j.neuroscience.2018.05.025
null
q-bio.NC q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pain remains a major concern in patients suffering from metastatic cancer to the bone and more knowledge of the condition, as well as novel treatment avenues, are called for. Neuropeptide Y (NPY) is a highly conserved peptide that appears to play a central role in nociceptive signaling in inflammatory and neuropathic pain. However, little is known about the peptide in cancer-induced bone pain. Here, we evaluate the role of spinal NPY in the MRMT-1 rat model of cancer-induced bone pain. Our studies revealed an up-regulation of NPY-immunoreactivity in the dorsal horn of cancer-bearing rats 17 days after inoculation, which could be a compensatory antinociceptive response. Consistent with this interpretation, intrathecal administration of NPY to rats with cancer-induced bone pain caused a reduction in nociceptive behaviors that lasted up to 150 min. This effect was diminished by both Y1 (BIBO3304) and Y2 (BIIE0246) receptor antagonists, indicating that both receptors participate in mediating the antinociceptive effect of NPY. Y1 and Y2 receptor binding in the spinal cord was unchanged in the cancer state as compared to sham-operated rats, consistent with the notion that increased NPY results in a net antinociceptive effect in the MRMT-1 model. In conclusion, the data indicate that NPY is involved in the spinal nociceptive signaling of cancer-induced bone pain and could be a new therapeutic target for patients with this condition.
[ { "created": "Tue, 4 Sep 2018 11:28:34 GMT", "version": "v1" } ]
2018-09-10
[ [ "Diaz-delCastillo", "Marta", "" ], [ "Christiansen", "Soren H.", "" ], [ "Appel", "Camilla K.", "" ], [ "Falk", "Sarah", "" ], [ "Woldbye", "David P. D.", "" ], [ "Heegaard", "Anne-Marie", "" ] ]
Pain remains a major concern in patients suffering from metastatic cancer to the bone and more knowledge of the condition, as well as novel treatment avenues, are called for. Neuropeptide Y (NPY) is a highly conserved peptide that appears to play a central role in nociceptive signaling in inflammatory and neuropathic pain. However, little is known about the peptide in cancer-induced bone pain. Here, we evaluate the role of spinal NPY in the MRMT-1 rat model of cancer-induced bone pain. Our studies revealed an up-regulation of NPY-immunoreactivity in the dorsal horn of cancer-bearing rats 17 days after inoculation, which could be a compensatory antinociceptive response. Consistent with this interpretation, intrathecal administration of NPY to rats with cancer-induced bone pain caused a reduction in nociceptive behaviors that lasted up to 150 min. This effect was diminished by both Y1 (BIBO3304) and Y2 (BIIE0246) receptor antagonists, indicating that both receptors participate in mediating the antinociceptive effect of NPY. Y1 and Y2 receptor binding in the spinal cord was unchanged in the cancer state as compared to sham-operated rats, consistent with the notion that increased NPY results in a net antinociceptive effect in the MRMT-1 model. In conclusion, the data indicate that NPY is involved in the spinal nociceptive signaling of cancer-induced bone pain and could be a new therapeutic target for patients with this condition.
1210.8415
Markus Dahlem
Markus A. Dahlem and Jan Tusch
Predicted selective increase of cortical magnification due to cortical folding
22 pages, 6 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The cortical magnification matrix M is introduced, founded on a notion similar to that of the scalar cortical magnification factor M. Unlike M, this matrix is suitable to describe anisotropy in cortical magnification, which is of particular interest in the highly gyrified human cerebral cortex. The advantage of our tensor method over other surface-based 3D methods to explore cortical morphometry is that M expresses cortical quantities in the corresponding sensory space. It allows us to investigate the spatial relation between sensory function and anatomical structure. To this end, we consider the calcarine sulcus (CS) as an anatomical landmark for the primary visual cortex (V1). We found that a stereotypically formed 3D model of V1 compared to a flat model explains an excess of cortical tissue for the representation of visual information coming from the horizon of the visual field. This suggests that the intrinsic geometry of this sulcus is adapted to encephalize a particular function along the horizon. Since visual functions are assumed to be M-scaled, cortical folding can serve as an anatomical basis for increased functionality on the horizon similar to a retinal specialization known as visual streak, which is found in animals with lower encephalization. Thus, the gain of surface area by cortical folding links anatomical structure to cortical function in a previously unrecognized way, which may guide sulci development.
[ { "created": "Wed, 31 Oct 2012 17:47:21 GMT", "version": "v1" } ]
2012-11-01
[ [ "Dahlem", "Markus A.", "" ], [ "Tusch", "Jan", "" ] ]
The cortical magnification matrix M is introduced, founded on a notion similar to that of the scalar cortical magnification factor M. Unlike M, this matrix is suitable to describe anisotropy in cortical magnification, which is of particular interest in the highly gyrified human cerebral cortex. The advantage of our tensor method over other surface-based 3D methods to explore cortical morphometry is that M expresses cortical quantities in the corresponding sensory space. It allows us to investigate the spatial relation between sensory function and anatomical structure. To this end, we consider the calcarine sulcus (CS) as an anatomical landmark for the primary visual cortex (V1). We found that a stereotypically formed 3D model of V1 compared to a flat model explains an excess of cortical tissue for the representation of visual information coming from the horizon of the visual field. This suggests that the intrinsic geometry of this sulcus is adapted to encephalize a particular function along the horizon. Since visual functions are assumed to be M-scaled, cortical folding can serve as an anatomical basis for increased functionality on the horizon similar to a retinal specialization known as visual streak, which is found in animals with lower encephalization. Thus, the gain of surface area by cortical folding links anatomical structure to cortical function in a previously unrecognized way, which may guide sulci development.
1605.01592
Alberto Ferrari
Alberto Ferrari and Mario Comelli
A comparison of methods for the analysis of binomial proportion data in behavioral research
26 pages, 4 figures, 3 tables
null
null
null
q-bio.QM q-bio.NC stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In behavioral and psychiatric research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. This kind of clustered binary data are usually non-normally distributed, which can cause issues with parameter estimation and predictions if the usual general linear model is applied and sample size is small. Here we studied the performances of some of the available analytic methods applicable to the analysis of proportion data; namely linear regression, Poisson regression, beta-binomial regression and Generalized Linear Mixed Models (GLMMs). We report the conclusions from a simulation study evaluating power and Type I error rates of these models in scenarios akin to those met by behavioral researchers and differing in sample size, cluster size and fixed effects parameters; plus, we describe results from the application of these methods on data from two real behavioral experiments. Our results show that, while GLMMs and beta-binomial regression are powerful instruments for the analysis of clustered binary outcomes, linear approximation can still provide reliable hypothesis testing in this context. Poisson regression, on the other hand, can suffer heavily from model misspecification when used to model proportion data. We conclude providing some guidelines for the choice of appropriate analytical instruments, sample and cluster size depending on the conditions of the experiment.
[ { "created": "Thu, 5 May 2016 13:50:49 GMT", "version": "v1" }, { "created": "Fri, 6 May 2016 15:47:12 GMT", "version": "v2" } ]
2016-05-09
[ [ "Ferrari", "Alberto", "" ], [ "Comelli", "Mario", "" ] ]
In behavioral and psychiatric research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. This kind of clustered binary data are usually non-normally distributed, which can cause issues with parameter estimation and predictions if the usual general linear model is applied and sample size is small. Here we studied the performances of some of the available analytic methods applicable to the analysis of proportion data; namely linear regression, Poisson regression, beta-binomial regression and Generalized Linear Mixed Models (GLMMs). We report the conclusions from a simulation study evaluating power and Type I error rates of these models in scenarios akin to those met by behavioral researchers and differing in sample size, cluster size and fixed effects parameters; plus, we describe results from the application of these methods on data from two real behavioral experiments. Our results show that, while GLMMs and beta-binomial regression are powerful instruments for the analysis of clustered binary outcomes, linear approximation can still provide reliable hypothesis testing in this context. Poisson regression, on the other hand, can suffer heavily from model misspecification when used to model proportion data. We conclude providing some guidelines for the choice of appropriate analytical instruments, sample and cluster size depending on the conditions of the experiment.
1907.11280
Breno de Oliveira Ferraz
P.P. Avelino, B.F. de Oliveira, and R.S. Trintin
Predominance of the weakest species in Lotka-Volterra and May-Leonard implementations of the rock-paper-scissors model
7 pages and 6 figures
Phys. Rev. E 100, 042209 (2019)
10.1103/PhysRevE.100.042209
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We revisit the problem of the predominance of the 'weakest' species in the context of Lotka-Volterra and May-Leonard implementations of a spatial stochastic rock-paper-scissors model in which one of the species has its predation probability reduced by $0 < \mathcal{P}_w < 1$. We show that, despite the different population dynamics and spatial patterns, these two implementations lead to qualitatively similar results for the late time values of the relative abundances of the three species (as a function of $\mathcal{P}_w$), as long as the simulation lattices are sufficiently large for coexistence to prevail --- the 'weakest' species generally having an advantage over the others (especially over its predator). However, for smaller simulation lattices, we find that the relatively large oscillations at the initial stages of simulations with random initial conditions may result in a significant dependence of the probability of species survival on the lattice size and total simulation time.
[ { "created": "Thu, 25 Jul 2019 19:14:46 GMT", "version": "v1" } ]
2019-10-16
[ [ "Avelino", "P. P.", "" ], [ "de Oliveira", "B. F.", "" ], [ "Trintin", "R. S.", "" ] ]
We revisit the problem of the predominance of the 'weakest' species in the context of Lotka-Volterra and May-Leonard implementations of a spatial stochastic rock-paper-scissors model in which one of the species has its predation probability reduced by $0 < \mathcal{P}_w < 1$. We show that, despite the different population dynamics and spatial patterns, these two implementations lead to qualitatively similar results for the late time values of the relative abundances of the three species (as a function of $\mathcal{P}_w$), as long as the simulation lattices are sufficiently large for coexistence to prevail --- the 'weakest' species generally having an advantage over the others (especially over its predator). However, for smaller simulation lattices, we find that the relatively large oscillations at the initial stages of simulations with random initial conditions may result in a significant dependence of the probability of species survival on the lattice size and total simulation time.
1609.06335
Anthony Gitter
Anthony Gitter, Furong Huang, Ragupathyraj Valluvan, Ernest Fraenkel, Animashree Anandkumar
Unsupervised learning of transcriptional regulatory networks via latent tree graphical models
37 pages, 9 figures
null
null
null
q-bio.MN cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene expression is a readily-observed quantification of transcriptional activity and cellular state that enables the recovery of the relationships between regulators and their target genes. Reconstructing transcriptional regulatory networks from gene expression data is a problem that has attracted much attention, but previous work often makes the simplifying (but unrealistic) assumption that regulator activity is represented by mRNA levels. We use a latent tree graphical model to analyze gene expression without relying on transcription factor expression as a proxy for regulator activity. The latent tree model is a type of Markov random field that includes both observed gene variables and latent (hidden) variables, which factorize on a Markov tree. Through efficient unsupervised learning approaches, we determine which groups of genes are co-regulated by hidden regulators and the activity levels of those regulators. Post-processing annotates many of these discovered latent variables as specific transcription factors or groups of transcription factors. Other latent variables do not necessarily represent physical regulators but instead reveal hidden structure in the gene expression such as shared biological function. We apply the latent tree graphical model to a yeast stress response dataset. In addition to novel predictions, such as condition-specific binding of the transcription factor Msn4, our model recovers many known aspects of the yeast regulatory network. These include groups of co-regulated genes, condition-specific regulator activity, and combinatorial regulation among transcription factors. The latent tree graphical model is a general approach for analyzing gene expression data that requires no prior knowledge of which possible regulators exist, regulator activity, or where transcription factors physically bind.
[ { "created": "Tue, 20 Sep 2016 20:14:15 GMT", "version": "v1" } ]
2016-09-22
[ [ "Gitter", "Anthony", "" ], [ "Huang", "Furong", "" ], [ "Valluvan", "Ragupathyraj", "" ], [ "Fraenkel", "Ernest", "" ], [ "Anandkumar", "Animashree", "" ] ]
Gene expression is a readily-observed quantification of transcriptional activity and cellular state that enables the recovery of the relationships between regulators and their target genes. Reconstructing transcriptional regulatory networks from gene expression data is a problem that has attracted much attention, but previous work often makes the simplifying (but unrealistic) assumption that regulator activity is represented by mRNA levels. We use a latent tree graphical model to analyze gene expression without relying on transcription factor expression as a proxy for regulator activity. The latent tree model is a type of Markov random field that includes both observed gene variables and latent (hidden) variables, which factorize on a Markov tree. Through efficient unsupervised learning approaches, we determine which groups of genes are co-regulated by hidden regulators and the activity levels of those regulators. Post-processing annotates many of these discovered latent variables as specific transcription factors or groups of transcription factors. Other latent variables do not necessarily represent physical regulators but instead reveal hidden structure in the gene expression such as shared biological function. We apply the latent tree graphical model to a yeast stress response dataset. In addition to novel predictions, such as condition-specific binding of the transcription factor Msn4, our model recovers many known aspects of the yeast regulatory network. These include groups of co-regulated genes, condition-specific regulator activity, and combinatorial regulation among transcription factors. The latent tree graphical model is a general approach for analyzing gene expression data that requires no prior knowledge of which possible regulators exist, regulator activity, or where transcription factors physically bind.
2106.11698
Zheng Zhao
Zheng Zhao and Philip E. Bourne
Advance in Reversible Covalent Kinase Inhibitors
55 pages; 12 figures
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by-nc-nd/4.0/
Reversible covalent kinase inhibitors (RCKIs) are a class of novel kinase inhibitors attracting increasing attention because they simultaneously show the selectivity of covalent kinase inhibitors, yet avoid permanent protein-modification-induced adverse effects. Over the last decade, RCKIs have been reported to target different kinases, including atypical kinases. Currently, three RCKIs are undergoing clinical trials to treat specific diseases, for example, Pemphigus, an autoimmune disorder. In this perspective, first, RCKIs are systematically summarized, including characteristics of electrophilic groups, chemical scaffolds, nucleophilic residues, and binding modes. Second, we provide insights into privileged electrophiles, the distribution of nucleophiles and hence effective design strategies for RCKIs. Finally, we provide a brief perspective on future design strategies for RCKIs, including those that target proteins other than kinases.
[ { "created": "Tue, 22 Jun 2021 12:02:30 GMT", "version": "v1" }, { "created": "Wed, 22 Feb 2023 12:51:03 GMT", "version": "v2" } ]
2023-02-23
[ [ "Zhao", "Zheng", "" ], [ "Bourne", "Philip E.", "" ] ]
Reversible covalent kinase inhibitors (RCKIs) are a class of novel kinase inhibitors attracting increasing attention because they simultaneously show the selectivity of covalent kinase inhibitors, yet avoid permanent protein-modification-induced adverse effects. Over the last decade, RCKIs have been reported to target different kinases, including atypical kinases. Currently, three RCKIs are undergoing clinical trials to treat specific diseases, for example, Pemphigus, an autoimmune disorder. In this perspective, first, RCKIs are systematically summarized, including characteristics of electrophilic groups, chemical scaffolds, nucleophilic residues, and binding modes. Second, we provide insights into privileged electrophiles, the distribution of nucleophiles and hence effective design strategies for RCKIs. Finally, we provide a brief perspective on future design strategies for RCKIs, including those that target proteins other than kinases.
0709.0679
Joshua Shaevitz
Joshua W. Shaevitz, Daniel A. Fletcher
Curvature and torsion in growing actin networks
null
null
10.1088/1478-3975/5/2/026006
null
q-bio.BM physics.bio-ph q-bio.CB
null
Intracellular pathogens such as Listeria monocytogenes and Rickettsia rickettsii move within a host cell by polymerizing a comet-tail of actin fibers that ultimately pushes the cell forward. This dense network of cross-linked actin polymers typically exhibits a striking curvature that causes bacteria to move in gently looping paths. Theoretically, tail curvature has been linked to details of motility by considering force and torque balances from a finite number of polymerizing filaments. Here we track beads coated with a prokaryotic activator of actin polymerization in three dimensions to directly quantify the curvature and torsion of bead motility paths. We find that bead paths are more likely to have low rather than high curvature at any given time. Furthermore, path curvature changes very slowly in time, with an autocorrelation decay time of 200 seconds. Paths with a small radius of curvature, therefore, remain so for an extended period resulting in loops when confined to two dimensions. When allowed to explore a 3D space, path loops are less evident. Finally, we quantify the torsion in the bead paths and show that beads do not exhibit a significant left- or right-handed bias to their motion in 3D. These results suggest that paths of actin-propelled objects may be attributed to slow changes in curvature rather than a fixed torque.
[ { "created": "Wed, 5 Sep 2007 15:48:38 GMT", "version": "v1" } ]
2009-11-13
[ [ "Shaevitz", "Joshua W.", "" ], [ "Fletcher", "Daniel A.", "" ] ]
Intracellular pathogens such as Listeria monocytogenes and Rickettsia rickettsii move within a host cell by polymerizing a comet-tail of actin fibers that ultimately pushes the cell forward. This dense network of cross-linked actin polymers typically exhibits a striking curvature that causes bacteria to move in gently looping paths. Theoretically, tail curvature has been linked to details of motility by considering force and torque balances from a finite number of polymerizing filaments. Here we track beads coated with a prokaryotic activator of actin polymerization in three dimensions to directly quantify the curvature and torsion of bead motility paths. We find that bead paths are more likely to have low rather than high curvature at any given time. Furthermore, path curvature changes very slowly in time, with an autocorrelation decay time of 200 seconds. Paths with a small radius of curvature, therefore, remain so for an extended period resulting in loops when confined to two dimensions. When allowed to explore a 3D space, path loops are less evident. Finally, we quantify the torsion in the bead paths and show that beads do not exhibit a significant left- or right-handed bias to their motion in 3D. These results suggest that paths of actin-propelled objects may be attributed to slow changes in curvature rather than a fixed torque.
1012.2730
Christian Brouder
Vahid Salari and Christian Brouder
Comment on "Delayed luminescence of biological systems in terms of coherent states" [Phys. Lett. A 293 (2002) 93]
2 pages, no figure
Phys. Lett. A 375 (2011) 2531-2
10.1016/j.physleta.2011.05.017
null
q-bio.QM quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Popp and Yan [F. A. Popp, Y. Yan, Phys. Lett. A 293 (2002) 93] proposed a model for delayed luminescence based on a single time-dependent coherent state. We show that the general solution of their model corresponds to a luminescence that is a linear function of time. Therefore, their model is not compatible with any measured delayed luminescence. Moreover, the functions that they use to describe the oscillatory behaviour of delayed luminescence are not solutions of the coupling equations to be solved.
[ { "created": "Mon, 13 Dec 2010 13:55:34 GMT", "version": "v1" } ]
2011-06-22
[ [ "Salari", "Vahid", "" ], [ "Brouder", "Christian", "" ] ]
Popp and Yan [F. A. Popp, Y. Yan, Phys. Lett. A 293 (2002) 93] proposed a model for delayed luminescence based on a single time-dependent coherent state. We show that the general solution of their model corresponds to a luminescence that is a linear function of time. Therefore, their model is not compatible with any measured delayed luminescence. Moreover, the functions that they use to describe the oscillatory behaviour of delayed luminescence are not solutions of the coupling equations to be solved.
2301.04542
Maria Virginia Bolelli
Maria Virginia Bolelli, Giovanna Citti, Alessandro Sarti, Steven W. Zucker
Good continuation in 3D: the neurogeometry of stereo vision
null
null
null
null
q-bio.NC math.DG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Classical good continuation for image curves is based on $2D$ position and orientation. It is supported by the columnar organization of cortex, by psychophysical experiments, and by rich models of (differential) geometry. Here we extend good continuation to stereo. We introduce a neurogeometric model, in which the parametrizations involve both spatial and orientation disparities. Our model provides insight into the neurobiology, suggesting an implicit organization for neural interactions and a well-defined $3D$ association field. Our model sheds light on the computations underlying the correspondence problem, and illustrates how good continuation in the world generalizes good continuation in the plane.
[ { "created": "Wed, 11 Jan 2023 16:12:49 GMT", "version": "v1" } ]
2023-01-12
[ [ "Bolelli", "Maria Virginia", "" ], [ "Citti", "Giovanna", "" ], [ "Sarti", "Alessandro", "" ], [ "Zucker", "Steven W.", "" ] ]
Classical good continuation for image curves is based on $2D$ position and orientation. It is supported by the columnar organization of cortex, by psychophysical experiments, and by rich models of (differential) geometry. Here we extend good continuation to stereo. We introduce a neurogeometric model, in which the parametrizations involve both spatial and orientation disparities. Our model provides insight into the neurobiology, suggesting an implicit organization for neural interactions and a well-defined $3D$ association field. Our model sheds light on the computations underlying the correspondence problem, and illustrates how good continuation in the world generalizes good continuation in the plane.
1404.5827
Ulrich S. Schwarz
Heinrich C. R. Klein and Ulrich S. Schwarz (Heidelberg University)
Studying protein assembly with reversible Brownian dynamics of patchy particles
Revtex, 41 pages, 9 figures, includes some small corrections compared to first version
J. Chem. Phys. 140, 184112 (2014)
10.1063/1.4873708
null
q-bio.SC cond-mat.soft cond-mat.stat-mech q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Assembly of protein complexes like virus shells, the centriole, the nuclear pore complex or the actin cytoskeleton is strongly determined by their spatial structure. Moreover it is becoming increasingly clear that the reversible nature of protein assembly is also an essential element for their biological function. Here we introduce a computational approach for the Brownian dynamics of patchy particles with anisotropic assemblies and fully reversible reactions. Different particles stochastically associate and dissociate with microscopic reaction rates depending on their relative spatial positions. The translational and rotational diffusive properties of all protein complexes are evaluated on-the-fly. Because we focus on reversible assembly, we introduce a scheme which ensures detailed balance for patchy particles. We then show how the macroscopic rates follow from the microscopic ones. As an instructive example, we study the assembly of a pentameric ring structure, for which we find excellent agreement between simulation results and a macroscopic kinetic description without any adjustable parameters. This demonstrates that our approach correctly accounts for both the diffusive and reactive processes involved in protein assembly.
[ { "created": "Wed, 23 Apr 2014 14:10:32 GMT", "version": "v1" }, { "created": "Mon, 12 May 2014 15:51:22 GMT", "version": "v2" } ]
2014-05-13
[ [ "Klein", "Heinrich C. R.", "", "Heidelberg University" ], [ "Schwarz", "Ulrich S.", "", "Heidelberg University" ] ]
Assembly of protein complexes like virus shells, the centriole, the nuclear pore complex or the actin cytoskeleton is strongly determined by their spatial structure. Moreover it is becoming increasingly clear that the reversible nature of protein assembly is also an essential element for their biological function. Here we introduce a computational approach for the Brownian dynamics of patchy particles with anisotropic assemblies and fully reversible reactions. Different particles stochastically associate and dissociate with microscopic reaction rates depending on their relative spatial positions. The translational and rotational diffusive properties of all protein complexes are evaluated on-the-fly. Because we focus on reversible assembly, we introduce a scheme which ensures detailed balance for patchy particles. We then show how the macroscopic rates follow from the microscopic ones. As an instructive example, we study the assembly of a pentameric ring structure, for which we find excellent agreement between simulation results and a macroscopic kinetic description without any adjustable parameters. This demonstrates that our approach correctly accounts for both the diffusive and reactive processes involved in protein assembly.
1502.05331
Nicholas Putnam
Nicholas H. Putnam, Brendan O'Connell, Jonathan C. Stites, Brandon J. Rice, Andrew Fields, Paul D. Hartley, Charles W. Sugnet, David Haussler, Daniel S. Rokhsar, Richard E. Green
Chromosome-scale shotgun assembly using an in vitro method for long-range linkage
null
null
10.1101/gr.193474.115
null
q-bio.GN q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long-range and highly accurate de novo assembly from short-read data is one of the most pressing challenges in genomics. Recently, it has been shown that read pairs generated by proximity ligation of DNA in chromatin of living tissue can address this problem. These data dramatically increase the scaffold contiguity of assemblies and provide haplotype phasing information. Here, we describe a simpler approach ("Chicago") based on in vitro reconstituted chromatin. We generated two Chicago datasets with human DNA and used a new software pipeline ("HiRise") to construct a highly accurate de novo assembly and scaffolding of a human genome with scaffold N50 of 30 Mb. We also demonstrated the utility of Chicago for improving existing assemblies by re-assembling and scaffolding the genome of the American alligator. With a single library and one lane of Illumina HiSeq sequencing, we increased the scaffold N50 of the American alligator from 508 kb to 10 Mb. Our method uses established molecular biology procedures and can be used to analyze any genome, as it requires only about 5 micrograms of DNA as the starting material.
[ { "created": "Wed, 18 Feb 2015 18:29:50 GMT", "version": "v1" } ]
2016-02-11
[ [ "Putnam", "Nicholas H.", "" ], [ "O'Connell", "Brendan", "" ], [ "Stites", "Jonathan C.", "" ], [ "Rice", "Brandon J.", "" ], [ "Fields", "Andrew", "" ], [ "Hartley", "Paul D.", "" ], [ "Sugnet", "Charles W.", "" ], [ "Haussler", "David", "" ], [ "Rokhsar", "Daniel S.", "" ], [ "Green", "Richard E.", "" ] ]
Long-range and highly accurate de novo assembly from short-read data is one of the most pressing challenges in genomics. Recently, it has been shown that read pairs generated by proximity ligation of DNA in chromatin of living tissue can address this problem. These data dramatically increase the scaffold contiguity of assemblies and provide haplotype phasing information. Here, we describe a simpler approach ("Chicago") based on in vitro reconstituted chromatin. We generated two Chicago datasets with human DNA and used a new software pipeline ("HiRise") to construct a highly accurate de novo assembly and scaffolding of a human genome with scaffold N50 of 30 Mb. We also demonstrated the utility of Chicago for improving existing assemblies by re-assembling and scaffolding the genome of the American alligator. With a single library and one lane of Illumina HiSeq sequencing, we increased the scaffold N50 of the American alligator from 508 kb to 10 Mb. Our method uses established molecular biology procedures and can be used to analyze any genome, as it requires only about 5 micrograms of DNA as the starting material.
1410.5930
Marco Brigham
Marco Brigham and Alain Destexhe
Non-stationary filtered shot noise processes and applications to neuronal membranes
18 pages, 13 figures
Physical Review E 91: 062102, 2015
10.1103/PhysRevE.91.062102
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Filtered shot noise processes have proven to be very effective in modelling the evolution of systems exposed to stochastic shot noise sources, and have been applied to a wide variety of fields ranging from electronics through biology. In particular, they can model the membrane potential Vm of neurons driven by stochastic input, where these filtered processes are able to capture the non-stationary characteristics of Vm fluctuations in response to pre-synaptic input with variable rate. In this paper, we apply the general framework of Poisson Point Processes transformations to analyse these systems in the general case of variable input rate. We obtain exact analytic expressions, and very accurate approximations, for the joint cumulants of filtered shot noise processes with multiplicative noise. These general results are then applied to a model of neuronal membranes subject to conductance shot noise with continuously variable rate of pre-synaptic spikes. We propose very effective approximations for the time evolution of the Vm distribution and a simple method to estimate the pre-synaptic rate from a small number of Vm traces. This work opens the perspective of obtaining analytic access to important statistical properties of conductance-based neuronal models such as the first passage time.
[ { "created": "Wed, 22 Oct 2014 07:45:17 GMT", "version": "v1" }, { "created": "Fri, 20 Feb 2015 09:56:46 GMT", "version": "v2" }, { "created": "Sun, 13 Sep 2015 13:48:58 GMT", "version": "v3" } ]
2015-09-15
[ [ "Brigham", "Marco", "" ], [ "Destexhe", "Alain", "" ] ]
Filtered shot noise processes have proven to be very effective in modelling the evolution of systems exposed to stochastic shot noise sources, and have been applied to a wide variety of fields ranging from electronics through biology. In particular, they can model the membrane potential Vm of neurons driven by stochastic input, where these filtered processes are able to capture the non-stationary characteristics of Vm fluctuations in response to pre-synaptic input with variable rate. In this paper, we apply the general framework of Poisson Point Processes transformations to analyse these systems in the general case of variable input rate. We obtain exact analytic expressions, and very accurate approximations, for the joint cumulants of filtered shot noise processes with multiplicative noise. These general results are then applied to a model of neuronal membranes subject to conductance shot noise with continuously variable rate of pre-synaptic spikes. We propose very effective approximations for the time evolution of the Vm distribution and a simple method to estimate the pre-synaptic rate from a small number of Vm traces. This work opens the perspective of obtaining analytic access to important statistical properties of conductance-based neuronal models such as the first passage time.
2311.01320
Yuehua Liu
Menghan Zhang and Yuehua Liu
The molecular pathology of genioglossus in obstructive sleep apnea
null
null
null
null
q-bio.MN q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Obstructive sleep apnea (OSA) is a sleep respiratory disease characterized by snoring during sleep accompanied by apnea and daytime sleeplessness. It is a complex disease with a multifactorial etiology, and its pathology is incompletely understood. The genioglossus (GG) is the largest dilator of the upper airway, and its fatigue is strongly correlated with the onset of OSA. This brief review investigates the pathogenesis of OSA with a focus on the GG, considering different risk factors such as gender, obesity, and aging, as well as the molecular mechanisms of GG injury in OSA pathogenesis. We hope to identify molecular mechanisms in the GG that can be targeted in OSA treatment.
[ { "created": "Thu, 2 Nov 2023 15:33:26 GMT", "version": "v1" } ]
2023-11-03
[ [ "Zhang", "Menghan", "" ], [ "Liu", "Yuehua", "" ] ]
Obstructive sleep apnea (OSA) is a sleep respiratory disease characterized by snoring during sleep accompanied by apnea and daytime sleeplessness. It is a complex disease with a multifactorial etiology, and its pathology is incompletely understood. The genioglossus (GG) is the largest dilator of the upper airway, and its fatigue is strongly correlated with the onset of OSA. This brief review investigates the pathogenesis of OSA with a focus on the GG, considering different risk factors such as gender, obesity, and aging, as well as the molecular mechanisms of GG injury in OSA pathogenesis. We hope to identify molecular mechanisms in the GG that can be targeted in OSA treatment.
q-bio/0503032
Akira Kinjo
Akira R. Kinjo, Ken Nishikawa
Predicting Secondary Structures, Contact Numbers, and Residue-wise Contact Orders of Native Protein Structure from Amino Acid Sequence by Critical Random Networks
20 pages, 1 figure, 5 tables; minor revision; accepted for publication in BIOPHYSICS
BIOPHYSICS Vol. 1, pp. 67-74 (2005)
10.2142/biophysics.1.67
null
q-bio.BM
null
Prediction of one-dimensional protein structures such as secondary structures and contact numbers is useful for three-dimensional structure prediction and important for understanding the sequence-structure relationship. Here we present a new machine-learning method, critical random networks (CRNs), for predicting one-dimensional structures, and apply it, with position-specific scoring matrices, to the prediction of secondary structures (SS), contact numbers (CN), and residue-wise contact orders (RWCO). The present method achieves, on average, $Q_3$ accuracy of 77.8% for SS, and correlation coefficients of 0.726 and 0.601 for CN and RWCO, respectively. The accuracy of the SS prediction is comparable to other state-of-the-art methods, and that of the CN prediction is a significant improvement over previous methods. We give a detailed formulation of the critical random networks-based prediction scheme, and examine the context-dependence of prediction accuracies. In order to study the nonlinear and multi-body effects, we compare the CRNs-based method with a purely linear method based on position-specific scoring matrices. Although not superior to the CRNs-based method, the surprisingly good accuracy achieved by the linear method highlights the difficulty in extracting structural features of higher order from amino acid sequence beyond that provided by the position-specific scoring matrices.
[ { "created": "Tue, 22 Mar 2005 05:48:19 GMT", "version": "v1" }, { "created": "Sat, 23 Jul 2005 05:20:02 GMT", "version": "v2" }, { "created": "Thu, 20 Oct 2005 08:02:43 GMT", "version": "v3" } ]
2007-05-23
[ [ "Kinjo", "Akira R.", "" ], [ "Nishikawa", "Ken", "" ] ]
Prediction of one-dimensional protein structures such as secondary structures and contact numbers is useful for three-dimensional structure prediction and important for understanding the sequence-structure relationship. Here we present a new machine-learning method, critical random networks (CRNs), for predicting one-dimensional structures, and apply it, with position-specific scoring matrices, to the prediction of secondary structures (SS), contact numbers (CN), and residue-wise contact orders (RWCO). The present method achieves, on average, $Q_3$ accuracy of 77.8% for SS, and correlation coefficients of 0.726 and 0.601 for CN and RWCO, respectively. The accuracy of the SS prediction is comparable to other state-of-the-art methods, and that of the CN prediction is a significant improvement over previous methods. We give a detailed formulation of the critical random networks-based prediction scheme, and examine the context-dependence of prediction accuracies. In order to study the nonlinear and multi-body effects, we compare the CRNs-based method with a purely linear method based on position-specific scoring matrices. Although not superior to the CRNs-based method, the surprisingly good accuracy achieved by the linear method highlights the difficulty in extracting structural features of higher order from amino acid sequence beyond that provided by the position-specific scoring matrices.
2408.03436
Laleh Alisaraie
Luckman Qasim, Laleh Alisaraie
ProS2Vi: a Python Tool for Visualizing Proteins Secondary Structure
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-sa/4.0/
The Protein Secondary Structure Visualizer (ProS2Vi) is a novel Python-based visualization tool designed to enhance the analysis and accessibility of protein secondary structures calculated and identified by the Dictionary of Secondary Structure of Proteins (DSSP) algorithm. Leveraging robust Python libraries such as Biopython for data handling, Flask for the graphical user interface, and Jinja2 and wkhtmltopdf for visualization, ProS2Vi offers a modern and intuitive representation of the DSSP-assigned secondary structures for each residue of any protein's amino acid sequence. Significant features of ProS2Vi include customizable icon colors, a configurable number of residues per line, and the ability to export visualizations as scalable PDFs, enhancing both visual appeal and functional versatility through a user-friendly GUI. We have designed ProS2Vi specifically for secure, local operation, which significantly increases security when dealing with novel protein data.
[ { "created": "Tue, 6 Aug 2024 20:22:34 GMT", "version": "v1" } ]
2024-08-08
[ [ "Qasim", "Luckman", "" ], [ "Alisaraie", "Laleh", "" ] ]
The Protein Secondary Structure Visualizer (ProS2Vi) is a novel Python-based visualization tool designed to enhance the analysis and accessibility of protein secondary structures calculated and identified by the Dictionary of Secondary Structure of Proteins (DSSP) algorithm. Leveraging robust Python libraries such as Biopython for data handling, Flask for the graphical user interface, and Jinja2 and wkhtmltopdf for visualization, ProS2Vi offers a modern and intuitive representation of the DSSP-assigned secondary structures for each residue of any protein's amino acid sequence. Significant features of ProS2Vi include customizable icon colors, a configurable number of residues per line, and the ability to export visualizations as scalable PDFs, enhancing both visual appeal and functional versatility through a user-friendly GUI. We have designed ProS2Vi specifically for secure, local operation, which significantly increases security when dealing with novel protein data.
2202.07854
Hiro-Sato Niwa
Hiro-Sato Niwa
Broken symmetry of recruitment fluctuations in marine fishes: L\'evy-stable laws and beyond
15 pages, 3 figures. arXiv admin note: text overlap with arXiv:2202.06206
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recruitment is calculated by summing random offspring-numbers entering the population, where the number of summands (i.e. spawning population size) is also a random process. A priori, it is not clear that individual reproductive variability would have a significant impact on aggregate measures for monitoring populations. Usually these variations are averaged out in a large population, and the aggregate output is merely influenced by population-wide environmental disturbances such as climate and fisheries. However, such arguments break down if the distribution of the individual offspring numbers is heavy-tailed. In a world with power-law offspring-number distribution with exponent $1<\alpha<2$, the recruitment distribution has a putative power-law regime in the tail with the same $\alpha$. The question is to what extent individual reproductive variability can have a noticeable impact on the recruitment under environmentally driven population fluctuations. This question is answered by considering the L\'evy-stable fluctuations as embedded in a randomly varying environment. I report fluctuation scaling and asymmetric fluctuations in recruitment of commercially exploited fish stocks throughout the North Atlantic. The linear scaling of recruitment standard deviation with recruitment level implies that the individual reproductive variability is dominated by population fluctuations. The totally asymmetric (skewed to the right) character is a sign of idiosyncratic variation in reproductive success.
[ { "created": "Wed, 16 Feb 2022 04:26:10 GMT", "version": "v1" } ]
2022-02-17
[ [ "Niwa", "Hiro-Sato", "" ] ]
Recruitment is calculated by summing random offspring-numbers entering the population, where the number of summands (i.e. spawning population size) is also a random process. A priori, it is not clear that individual reproductive variability would have a significant impact on aggregate measures for monitoring populations. Usually these variations are averaged out in a large population, and the aggregate output is merely influenced by population-wide environmental disturbances such as climate and fisheries. However, such arguments break down if the distribution of the individual offspring numbers is heavy-tailed. In a world with power-law offspring-number distribution with exponent $1<\alpha<2$, the recruitment distribution has a putative power-law regime in the tail with the same $\alpha$. The question is to what extent individual reproductive variability can have a noticeable impact on the recruitment under environmentally driven population fluctuations. This question is answered by considering the L\'evy-stable fluctuations as embedded in a randomly varying environment. I report fluctuation scaling and asymmetric fluctuations in recruitment of commercially exploited fish stocks throughout the North Atlantic. The linear scaling of recruitment standard deviation with recruitment level implies that the individual reproductive variability is dominated by population fluctuations. The totally asymmetric (skewed to the right) character is a sign of idiosyncratic variation in reproductive success.
1205.2059
Richard A Neher
Richard A. Neher, Marija Vucelja, Marc M\'ezard, Boris I. Shraiman
Emergence of clones in sexual populations
revised version
null
10.1088/1742-5468/2013/01/P01008
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In sexual populations, recombination reshuffles genetic variation and produces novel combinations of existing alleles, while selection amplifies the fittest genotypes in the population. If recombination is more rapid than selection, populations consist of a diverse mixture of many genotypes, as is widely observed. In the opposite regime, which is realized for example in facultatively sexual populations that outcross in only a fraction of reproductive cycles, selection can amplify individual genotypes into large clones. Such clones emerge when the fitness advantage of some of the genotypes is large enough that they grow to a significant fraction of the population despite being broken down by recombination. The occurrence of this "clonal condensation" depends, in addition to the outcrossing rate, on the heritability of fitness. Clonal condensation leads to a strong genetic heterogeneity of the population which is not adequately described by traditional population genetics measures, such as Linkage Disequilibrium. Here we point out the similarity between clonal condensation and the freezing transition in the Random Energy Model of spin glasses. Guided by this analogy we explicitly calculate the probability, Y, that two individuals are genetically identical as a function of the key parameters of the model. While Y is the analog of the spin-glass order parameter, it is also closely related to the rate of coalescence in population genetics: two individuals that are part of the same clone have a recent common ancestor.
[ { "created": "Wed, 9 May 2012 18:25:55 GMT", "version": "v1" }, { "created": "Sat, 21 Jul 2012 14:26:48 GMT", "version": "v2" } ]
2015-06-05
[ [ "Neher", "Richard A.", "" ], [ "Vucelja", "Marija", "" ], [ "Mézard", "Marc", "" ], [ "Shraiman", "Boris I.", "" ] ]
In sexual populations, recombination reshuffles genetic variation and produces novel combinations of existing alleles, while selection amplifies the fittest genotypes in the population. If recombination is more rapid than selection, populations consist of a diverse mixture of many genotypes, as is widely observed. In the opposite regime, which is realized for example in facultatively sexual populations that outcross in only a fraction of reproductive cycles, selection can amplify individual genotypes into large clones. Such clones emerge when the fitness advantage of some of the genotypes is large enough that they grow to a significant fraction of the population despite being broken down by recombination. The occurrence of this "clonal condensation" depends, in addition to the outcrossing rate, on the heritability of fitness. Clonal condensation leads to a strong genetic heterogeneity of the population which is not adequately described by traditional population genetics measures, such as Linkage Disequilibrium. Here we point out the similarity between clonal condensation and the freezing transition in the Random Energy Model of spin glasses. Guided by this analogy we explicitly calculate the probability, Y, that two individuals are genetically identical as a function of the key parameters of the model. While Y is the analog of the spin-glass order parameter, it is also closely related to the rate of coalescence in population genetics: two individuals that are part of the same clone have a recent common ancestor.
0903.4027
Dustin Cartwright
Dustin A. Cartwright, Siobhan M. Brady, David A. Orlando, Bernd Sturmfels, Philip N. Benfey
Reconstructing Spatiotemporal Gene Expression Data from Partial Observations
19 pages, 4 figures
null
null
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Developmental transcriptional networks in plants and animals operate in both space and time. To understand these transcriptional networks it is essential to obtain whole-genome expression data at high spatiotemporal resolution. Substantial amounts of spatial and temporal microarray expression data previously have been obtained for the Arabidopsis root; however, these two dimensions of data have not been integrated thoroughly. Complicating this integration is the fact that these data are heterogeneous and incomplete, with observed expression levels representing complex spatial or temporal mixtures. Given these partial observations, we present a novel method for reconstructing integrated high resolution spatiotemporal data. Our method is based on a new iterative algorithm for finding approximate roots to systems of bilinear equations.
[ { "created": "Tue, 24 Mar 2009 07:28:41 GMT", "version": "v1" } ]
2009-03-25
[ [ "Cartwright", "Dustin A.", "" ], [ "Brady", "Siobhan M.", "" ], [ "Orlando", "David A.", "" ], [ "Sturmfels", "Bernd", "" ], [ "Benfey", "Philip N.", "" ] ]
Developmental transcriptional networks in plants and animals operate in both space and time. To understand these transcriptional networks it is essential to obtain whole-genome expression data at high spatiotemporal resolution. Substantial amounts of spatial and temporal microarray expression data have previously been obtained for the Arabidopsis root; however, these two dimensions of data have not been integrated thoroughly. Complicating this integration is the fact that these data are heterogeneous and incomplete, with observed expression levels representing complex spatial or temporal mixtures. Given these partial observations, we present a novel method for reconstructing integrated high-resolution spatiotemporal data. Our method is based on a new iterative algorithm for finding approximate roots to systems of bilinear equations.
2406.07715
Max Dabagia
Max Dabagia, Daniel Mitropolsky, Christos H. Papadimitriou, Santosh S. Vempala
Coin-Flipping In The Brain: Statistical Learning with Neuronal Assemblies
22 pages, 8 figures
null
null
null
q-bio.NC cs.NE
http://creativecommons.org/licenses/by-nc-sa/4.0/
How intelligence arises from the brain is a central problem in science. A crucial aspect of intelligence is dealing with uncertainty -- developing good predictions about one's environment, and converting these predictions into decisions. The brain itself seems to be noisy at many levels, from chemical processes which drive development and neuronal activity to trial variability of responses to stimuli. One hypothesis is that the noise inherent to the brain's mechanisms is used to sample from a model of the world and generate predictions. To test this hypothesis, we study the emergence of statistical learning in NEMO, a biologically plausible computational model of the brain based on stylized neurons and synapses, plasticity, and inhibition, and giving rise to assemblies -- a group of neurons whose coordinated firing is tantamount to recalling a location, concept, memory, or other primitive item of cognition. We show in theory and simulation that connections between assemblies record statistics, and ambient noise can be harnessed to make probabilistic choices between assemblies. This allows NEMO to create internal models such as Markov chains entirely from the presentation of sequences of stimuli. Our results provide a foundation for biologically plausible probabilistic computation, and add theoretical support to the hypothesis that noise is a useful component of the brain's mechanism for cognition.
[ { "created": "Tue, 11 Jun 2024 20:51:50 GMT", "version": "v1" } ]
2024-06-13
[ [ "Dabagia", "Max", "" ], [ "Mitropolsky", "Daniel", "" ], [ "Papadimitriou", "Christos H.", "" ], [ "Vempala", "Santosh S.", "" ] ]
How intelligence arises from the brain is a central problem in science. A crucial aspect of intelligence is dealing with uncertainty -- developing good predictions about one's environment, and converting these predictions into decisions. The brain itself seems to be noisy at many levels, from chemical processes which drive development and neuronal activity to trial variability of responses to stimuli. One hypothesis is that the noise inherent to the brain's mechanisms is used to sample from a model of the world and generate predictions. To test this hypothesis, we study the emergence of statistical learning in NEMO, a biologically plausible computational model of the brain based on stylized neurons and synapses, plasticity, and inhibition, and giving rise to assemblies -- a group of neurons whose coordinated firing is tantamount to recalling a location, concept, memory, or other primitive item of cognition. We show in theory and simulation that connections between assemblies record statistics, and ambient noise can be harnessed to make probabilistic choices between assemblies. This allows NEMO to create internal models such as Markov chains entirely from the presentation of sequences of stimuli. Our results provide a foundation for biologically plausible probabilistic computation, and add theoretical support to the hypothesis that noise is a useful component of the brain's mechanism for cognition.
q-bio/0604001
Alan McKane
A. J. McKane, J. D. Nagy, T. J. Newman, M. O. Stefanini
Amplified biochemical oscillations in cellular systems
35 pages, 6 figures
null
10.1007/s10955-006-9221-9
null
q-bio.CB cond-mat.stat-mech q-bio.BM
null
We describe a mechanism for pronounced biochemical oscillations, relevant to microscopic systems, such as the intracellular environment. This mechanism operates for reaction schemes which, when modeled using deterministic rate equations, fail to exhibit oscillations for any values of rate constants. The mechanism relies on amplification of the underlying stochasticity of reaction kinetics within a narrow window of frequencies. This amplification allows fluctuations to beat the central limit theorem, having a dominant effect even though the number of molecules in the system is relatively large. The mechanism is quantitatively studied within simple models of self-regulatory gene expression, and glycolytic oscillations.
[ { "created": "Sun, 2 Apr 2006 13:58:16 GMT", "version": "v1" } ]
2009-11-13
[ [ "McKane", "A. J.", "" ], [ "Nagy", "J. D.", "" ], [ "Newman", "T. J.", "" ], [ "Stefanini", "M. O.", "" ] ]
We describe a mechanism for pronounced biochemical oscillations, relevant to microscopic systems, such as the intracellular environment. This mechanism operates for reaction schemes which, when modeled using deterministic rate equations, fail to exhibit oscillations for any values of rate constants. The mechanism relies on amplification of the underlying stochasticity of reaction kinetics within a narrow window of frequencies. This amplification allows fluctuations to beat the central limit theorem, having a dominant effect even though the number of molecules in the system is relatively large. The mechanism is quantitatively studied within simple models of self-regulatory gene expression, and glycolytic oscillations.
2007.02185
John Rhodes
Samaneh Yourdkhani, Elizabeth S. Allman, John A. Rhodes
Parameter identifiability for a profile mixture model of protein evolution
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A Profile Mixture Model is a model of protein evolution, describing sequence data in which sites are assumed to follow many related substitution processes on a single evolutionary tree. The processes depend in part on different amino acid distributions, or profiles, varying over sites in aligned sequences. A fundamental question for any stochastic model, which must be answered positively to justify model-based inference, is whether the parameters are identifiable from the probability distribution they determine. Here we show that a Profile Mixture Model has identifiable parameters under circumstances in which it is likely to be used for empirical analyses. In particular, for a tree relating 9 or more taxa, both the tree topology and all numerical parameters are generically identifiable when the number of profiles is less than 74.
[ { "created": "Sat, 4 Jul 2020 21:09:41 GMT", "version": "v1" } ]
2020-07-07
[ [ "Yourdkhani", "Samaneh", "" ], [ "Allman", "Elizabeth S.", "" ], [ "Rhodes", "John A.", "" ] ]
A Profile Mixture Model is a model of protein evolution, describing sequence data in which sites are assumed to follow many related substitution processes on a single evolutionary tree. The processes depend in part on different amino acid distributions, or profiles, varying over sites in aligned sequences. A fundamental question for any stochastic model, which must be answered positively to justify model-based inference, is whether the parameters are identifiable from the probability distribution they determine. Here we show that a Profile Mixture Model has identifiable parameters under circumstances in which it is likely to be used for empirical analyses. In particular, for a tree relating 9 or more taxa, both the tree topology and all numerical parameters are generically identifiable when the number of profiles is less than 74.
2104.01512
Seyednami Niyakan
Seyednami Niyakan, Ehsan Hajiramezanali, Shahin Boluki, Siamak Zamani Dadaneh, Xiaoning Qian
SimCD: Simultaneous Clustering and Differential expression analysis for single-cell transcriptomic data
null
null
null
null
q-bio.GN stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single-Cell RNA sequencing (scRNA-seq) measurements have facilitated genome-scale transcriptomic profiling of individual cells, with the hope of deconvolving cellular dynamic changes in corresponding cell sub-populations to better understand molecular mechanisms of different developmental processes. Several scRNA-seq analysis methods have been proposed to first identify cell sub-populations by clustering and then separately perform differential expression analysis to understand gene expression changes. Their corresponding statistical models and inference algorithms are often designed disjointly. We develop a new method -- SimCD -- that explicitly models cell heterogeneity and dynamic differential changes in one unified hierarchical gamma-negative binomial (hGNB) model, allowing simultaneous cell clustering and differential expression analysis for scRNA-seq data. Our method naturally defines cell heterogeneity by dynamic expression changes, which is expected to help achieve better performance on the two tasks compared to the existing methods that perform them separately. In addition, SimCD better models dropout (zero inflation) in scRNA-seq data by both cell- and gene-level factors and obviates the need for sophisticated pre-processing steps such as normalization, thanks to the direct modeling of scRNA-seq count data by the rigorous hGNB model with an efficient Gibbs sampling inference algorithm. Extensive comparisons with the state-of-the-art methods on both simulated and real-world scRNA-seq count data demonstrate the capability of SimCD to discover cell clusters and capture dynamic expression changes. Furthermore, SimCD helps identify several known genes affected by food deprivation in hypothalamic neuron cell subtypes as well as some new potential markers, suggesting the capability of SimCD for biomarker discovery.
[ { "created": "Sun, 4 Apr 2021 01:06:18 GMT", "version": "v1" } ]
2021-04-06
[ [ "Niyakan", "Seyednami", "" ], [ "Hajiramezanali", "Ehsan", "" ], [ "Boluki", "Shahin", "" ], [ "Dadaneh", "Siamak Zamani", "" ], [ "Qian", "Xiaoning", "" ] ]
Single-Cell RNA sequencing (scRNA-seq) measurements have facilitated genome-scale transcriptomic profiling of individual cells, with the hope of deconvolving cellular dynamic changes in corresponding cell sub-populations to better understand molecular mechanisms of different developmental processes. Several scRNA-seq analysis methods have been proposed to first identify cell sub-populations by clustering and then separately perform differential expression analysis to understand gene expression changes. Their corresponding statistical models and inference algorithms are often designed disjointly. We develop a new method -- SimCD -- that explicitly models cell heterogeneity and dynamic differential changes in one unified hierarchical gamma-negative binomial (hGNB) model, allowing simultaneous cell clustering and differential expression analysis for scRNA-seq data. Our method naturally defines cell heterogeneity by dynamic expression changes, which is expected to help achieve better performance on the two tasks compared to the existing methods that perform them separately. In addition, SimCD better models dropout (zero inflation) in scRNA-seq data by both cell- and gene-level factors and obviates the need for sophisticated pre-processing steps such as normalization, thanks to the direct modeling of scRNA-seq count data by the rigorous hGNB model with an efficient Gibbs sampling inference algorithm. Extensive comparisons with the state-of-the-art methods on both simulated and real-world scRNA-seq count data demonstrate the capability of SimCD to discover cell clusters and capture dynamic expression changes. Furthermore, SimCD helps identify several known genes affected by food deprivation in hypothalamic neuron cell subtypes as well as some new potential markers, suggesting the capability of SimCD for biomarker discovery.
0803.0962
Adilson Enio Motter
Adilson E. Motter, Natali Gulbahce, Eivind Almaas, Albert-Laszlo Barabasi
Predicting synthetic rescues in metabolic networks
Supplementary Information is available at the Molecular Systems Biology website: http://www.nature.com/msb/journal/v4/n1/full/msb20081.html
Molecular Systems Biology 4, 168 (2008)
10.1038/msb.2008.1
null
q-bio.MN cond-mat.dis-nn q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An important goal of medical research is to develop methods to recover the loss of cellular function due to mutations and other defects. Many approaches based on gene therapy aim to repair the defective gene or to insert genes with compensatory function. Here, we propose an alternative, network-based strategy that aims to restore biological function by forcing the cell to either bypass the functions affected by the defective gene, or to compensate for the lost function. Focusing on the metabolism of single-cell organisms, we computationally study mutants that lack an essential enzyme, and thus are unable to grow or have a significantly reduced growth rate. We show that several of these mutants can be turned into viable organisms through additional gene deletions that restore their growth rate. In a rather counterintuitive fashion, this is achieved via additional damage to the metabolic network. Using flux balance-based approaches, we identify a number of synthetically viable gene pairs, in which the removal of one enzyme-encoding gene results in a nonviable phenotype, while the deletion of a second enzyme-encoding gene rescues the organism. The systematic network-based identification of compensatory rescue effects may open new avenues for genetic interventions.
[ { "created": "Thu, 6 Mar 2008 20:12:06 GMT", "version": "v1" } ]
2008-03-15
[ [ "Motter", "Adilson E.", "" ], [ "Gulbahce", "Natali", "" ], [ "Almaas", "Eivind", "" ], [ "Barabasi", "Albert-Laszlo", "" ] ]
An important goal of medical research is to develop methods to recover the loss of cellular function due to mutations and other defects. Many approaches based on gene therapy aim to repair the defective gene or to insert genes with compensatory function. Here, we propose an alternative, network-based strategy that aims to restore biological function by forcing the cell to either bypass the functions affected by the defective gene, or to compensate for the lost function. Focusing on the metabolism of single-cell organisms, we computationally study mutants that lack an essential enzyme, and thus are unable to grow or have a significantly reduced growth rate. We show that several of these mutants can be turned into viable organisms through additional gene deletions that restore their growth rate. In a rather counterintuitive fashion, this is achieved via additional damage to the metabolic network. Using flux balance-based approaches, we identify a number of synthetically viable gene pairs, in which the removal of one enzyme-encoding gene results in a nonviable phenotype, while the deletion of a second enzyme-encoding gene rescues the organism. The systematic network-based identification of compensatory rescue effects may open new avenues for genetic interventions.
0802.2271
Vasily Ogryzko V
Vasily Ogryzko
Quantum approach to adaptive mutations. Didactic introduction
29 pages, 16 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A didactic introduction, dating from 1999, to the ideas of the papers arXiv:q-bio/0701050 and arXiv:0704.0034
[ { "created": "Fri, 15 Feb 2008 19:25:34 GMT", "version": "v1" } ]
2008-02-18
[ [ "Ogryzko", "Vasily", "" ] ]
A didactic introduction, dating from 1999, to the ideas of the papers arXiv:q-bio/0701050 and arXiv:0704.0034
1105.3106
Ueli Rutishauser
Ueli Rutishauser, Rodney J. Douglas and Jean-Jacques Slotine
Collective stability of networks of winner-take-all circuits
7 Figures
Neural computation 23(3):735-773, 2011
10.1162/NECO_a_00091
null
q-bio.NC cond-mat.dis-nn cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The neocortex has a remarkably uniform neuronal organization, suggesting that common principles of processing are employed throughout its extent. In particular, the patterns of connectivity observed in the superficial layers of the visual cortex are consistent with the recurrent excitation and inhibitory feedback required for cooperative-competitive circuits such as the soft winner-take-all (WTA). WTA circuits offer interesting computational properties such as selective amplification, signal restoration, and decision making. But, these properties depend on the signal gain derived from positive feedback, and so there is a critical trade-off between providing feedback strong enough to support the sophisticated computations, while maintaining overall circuit stability. We consider the question of how to reason about stability in very large distributed networks of such circuits. We approach this problem by approximating the regular cortical architecture as many interconnected cooperative-competitive modules. We demonstrate that by properly understanding the behavior of this small computational module, one can reason over the stability and convergence of very large networks composed of these modules. We obtain parameter ranges in which the WTA circuit operates in a high-gain regime, is stable, and can be aggregated arbitrarily to form large stable networks. We use nonlinear Contraction Theory to establish conditions for stability in the fully nonlinear case, and verify these solutions using numerical simulations. The derived bounds allow modes of operation in which the WTA network is multi-stable and exhibits state-dependent persistent activities. Our approach is sufficiently general to reason systematically about the stability of any network, biological or technological, composed of networks of small modules that express competition through shared inhibition.
[ { "created": "Mon, 16 May 2011 14:37:15 GMT", "version": "v1" } ]
2011-05-17
[ [ "Rutishauser", "Ueli", "" ], [ "Douglas", "Rodney J.", "" ], [ "Slotine", "Jean-Jacques", "" ] ]
The neocortex has a remarkably uniform neuronal organization, suggesting that common principles of processing are employed throughout its extent. In particular, the patterns of connectivity observed in the superficial layers of the visual cortex are consistent with the recurrent excitation and inhibitory feedback required for cooperative-competitive circuits such as the soft winner-take-all (WTA). WTA circuits offer interesting computational properties such as selective amplification, signal restoration, and decision making. But, these properties depend on the signal gain derived from positive feedback, and so there is a critical trade-off between providing feedback strong enough to support the sophisticated computations, while maintaining overall circuit stability. We consider the question of how to reason about stability in very large distributed networks of such circuits. We approach this problem by approximating the regular cortical architecture as many interconnected cooperative-competitive modules. We demonstrate that by properly understanding the behavior of this small computational module, one can reason over the stability and convergence of very large networks composed of these modules. We obtain parameter ranges in which the WTA circuit operates in a high-gain regime, is stable, and can be aggregated arbitrarily to form large stable networks. We use nonlinear Contraction Theory to establish conditions for stability in the fully nonlinear case, and verify these solutions using numerical simulations. The derived bounds allow modes of operation in which the WTA network is multi-stable and exhibits state-dependent persistent activities. Our approach is sufficiently general to reason systematically about the stability of any network, biological or technological, composed of networks of small modules that express competition through shared inhibition.
q-bio/0702001
Manikandan Narayanan
Manikandan Narayanan, Richard M. Karp
Comparing Protein Interaction Networks via a Graph Match-and-Split Algorithm
15 pages, 4 figures, 6 tables. Supplemental text available at http://www.cs.berkeley.edu/~nmani/mas-supplement.pdf
null
null
null
q-bio.MN
null
We present a method that compares the protein interaction networks of two species to detect functionally similar (conserved) protein modules between them. The method is based on an algorithm we developed to identify matching subgraphs between two graphs. Unlike previous network comparison methods, our algorithm has provable guarantees on correctness and efficiency. Our algorithm framework also admits quite general connectivity and local matching criteria that define when two subgraphs match and constitute a conserved module. We apply our method to pairwise comparisons of the yeast protein network with the human, fruit fly and nematode worm protein networks, using a lenient criterion based on connectedness and matching edges, coupled with a betweenness clustering heuristic. We evaluate the detected conserved modules against reference yeast protein complexes using sensitivity and specificity measures. In these evaluations, our method performs competitively with and sometimes better than two previous network comparison methods. Further, under some conditions (proper homolog and species selection), our method performs better than a popular single-species clustering method. Beyond these evaluations, we discuss the biology of a couple of conserved modules detected by our method. We demonstrate the utility of network comparison for transferring annotations from yeast proteins to human ones, and validate the predicted annotations.
[ { "created": "Thu, 1 Feb 2007 09:38:57 GMT", "version": "v1" } ]
2007-05-23
[ [ "Narayanan", "Manikandan", "" ], [ "Karp", "Richard M.", "" ] ]
We present a method that compares the protein interaction networks of two species to detect functionally similar (conserved) protein modules between them. The method is based on an algorithm we developed to identify matching subgraphs between two graphs. Unlike previous network comparison methods, our algorithm has provable guarantees on correctness and efficiency. Our algorithm framework also admits quite general connectivity and local matching criteria that define when two subgraphs match and constitute a conserved module. We apply our method to pairwise comparisons of the yeast protein network with the human, fruit fly and nematode worm protein networks, using a lenient criterion based on connectedness and matching edges, coupled with a betweenness clustering heuristic. We evaluate the detected conserved modules against reference yeast protein complexes using sensitivity and specificity measures. In these evaluations, our method performs competitively with and sometimes better than two previous network comparison methods. Further, under some conditions (proper homolog and species selection), our method performs better than a popular single-species clustering method. Beyond these evaluations, we discuss the biology of a couple of conserved modules detected by our method. We demonstrate the utility of network comparison for transferring annotations from yeast proteins to human ones, and validate the predicted annotations.
2403.14046
Stephan Pohl
Stephan Pohl, Edgar Y. Walker, David L. Barack, Jennifer Lee, Rachel N. Denison, Ned Block, Florent Meyniel, Wei Ji Ma
Desiderata of evidence for representation in neuroscience
50 pages, 11 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
This paper develops a systematic framework for the evidence neuroscientists use to establish whether a neural response represents a feature. Researchers try to establish that the neural response is (1) sensitive and (2) specific to the feature, (3) invariant to other features, and (4) functional, which means that it is used downstream in the brain. We formalize these desiderata in information-theoretic terms. This formalism allows us to precisely state the desiderata while unifying the different analysis methods used in neuroscience under one framework. We discuss how common methods such as correlational analyses, decoding and encoding models, representational similarity analysis, and tests of statistical dependence are used to evaluate the desiderata. In doing so, we provide a common terminology to researchers that helps to clarify disagreements, to compare and integrate results across studies and research groups, and to identify when evidence might be missing and when evidence for some representational conclusion is strong. We illustrate the framework with several canonical examples, including the representation of orientation, numerosity, faces, and spatial location. We end by discussing how the framework can be extended to cover models of the neural code, multi-stage models, and other domains.
[ { "created": "Thu, 21 Mar 2024 00:07:02 GMT", "version": "v1" } ]
2024-03-22
[ [ "Pohl", "Stephan", "" ], [ "Walker", "Edgar Y.", "" ], [ "Barack", "David L.", "" ], [ "Lee", "Jennifer", "" ], [ "Denison", "Rachel N.", "" ], [ "Block", "Ned", "" ], [ "Meyniel", "Florent", "" ], [ "Ma", "Wei Ji", "" ] ]
This paper develops a systematic framework for the evidence neuroscientists use to establish whether a neural response represents a feature. Researchers try to establish that the neural response is (1) sensitive and (2) specific to the feature, (3) invariant to other features, and (4) functional, which means that it is used downstream in the brain. We formalize these desiderata in information-theoretic terms. This formalism allows us to precisely state the desiderata while unifying the different analysis methods used in neuroscience under one framework. We discuss how common methods such as correlational analyses, decoding and encoding models, representational similarity analysis, and tests of statistical dependence are used to evaluate the desiderata. In doing so, we provide a common terminology to researchers that helps to clarify disagreements, to compare and integrate results across studies and research groups, and to identify when evidence might be missing and when evidence for some representational conclusion is strong. We illustrate the framework with several canonical examples, including the representation of orientation, numerosity, faces, and spatial location. We end by discussing how the framework can be extended to cover models of the neural code, multi-stage models, and other domains.
1802.03800
Mehmet Tan
Mehmet Tan, Ozan F{\i}rat \"Ozg\"ul, Batuhan Bardak, I\c{s}{\i}ksu Ek\c{s}io\u{g}lu, Suna Sabuncuo\u{g}lu
Drug response prediction by ensemble learning and drug-induced gene expression signatures
Will appear in Genomics Journal
null
10.1016/j.ygeno.2018.07.002
null
q-bio.GN cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The chemotherapeutic response of cancer cells to a given compound is one of the most fundamental pieces of information required to design anti-cancer drugs. Recent advances in producing large drug screens against cancer cell lines provided an opportunity to apply machine learning methods for this purpose. In addition to cytotoxicity databases, a considerable amount of drug-induced gene expression data has also become publicly available. Following this, several methods that exploit omics data were proposed to predict drug activity on cancer cells. However, due to the complexity of cancer drug mechanisms, none of the existing methods are perfect. One possible direction, therefore, is to combine the strengths of both the methods and the databases for improved performance. We demonstrate that integrating a large number of predictions by the proposed method improves the performance for this task. The predictors in the ensemble differ in several aspects such as the method itself, the number of tasks the method considers (multi-task vs. single-task) and the subset of data considered (sub-sampling). We show that all these different aspects contribute to the success of the final ensemble. In addition, we attempt to use the drug screen data together with two novel signatures produced from the drug-induced gene expression profiles of cancer cell lines. Finally, we evaluate the method predictions by in vitro experiments in addition to the tests on data sets. The predictions of the methods, the signatures and the software are available from \url{http://mtan.etu.edu.tr/drug-response-prediction/}.
[ { "created": "Sun, 11 Feb 2018 19:34:10 GMT", "version": "v1" }, { "created": "Fri, 6 Apr 2018 13:25:44 GMT", "version": "v2" }, { "created": "Mon, 16 Jul 2018 08:36:58 GMT", "version": "v3" } ]
2018-07-17
[ [ "Tan", "Mehmet", "" ], [ "Özgül", "Ozan Fırat", "" ], [ "Bardak", "Batuhan", "" ], [ "Ekşioğlu", "Işıksu", "" ], [ "Sabuncuoğlu", "Suna", "" ] ]
The chemotherapeutic response of cancer cells to a given compound is one of the most fundamental pieces of information required to design anti-cancer drugs. Recent advances in producing large drug screens against cancer cell lines provided an opportunity to apply machine learning methods for this purpose. In addition to cytotoxicity databases, a considerable amount of drug-induced gene expression data has also become publicly available. Following this, several methods that exploit omics data were proposed to predict drug activity on cancer cells. However, due to the complexity of cancer drug mechanisms, none of the existing methods are perfect. One possible direction, therefore, is to combine the strengths of both the methods and the databases for improved performance. We demonstrate that integrating a large number of predictions by the proposed method improves the performance for this task. The predictors in the ensemble differ in several aspects such as the method itself, the number of tasks the method considers (multi-task vs. single-task) and the subset of data considered (sub-sampling). We show that all these different aspects contribute to the success of the final ensemble. In addition, we attempt to use the drug screen data together with two novel signatures produced from the drug-induced gene expression profiles of cancer cell lines. Finally, we evaluate the method predictions by in vitro experiments in addition to the tests on data sets. The predictions of the methods, the signatures and the software are available from \url{http://mtan.etu.edu.tr/drug-response-prediction/}.
0908.2022
David Saakian
David B. Saakian, Christof K. Biebricher, Chin-Kun Hu
The intermediate evolution phase in case of truncated selection
8 pages
Physical Review E 79, 041905 (2009)
10.1103/PhysRevE.79.041905
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using methods of statistical physics, we present rigorous theoretical calculations of Eigen's quasispecies theory with a truncated fitness landscape which dramatically limits the available sequence space of a reproducing quasispecies. Depending on the mutation rates, we observe three phases: a selective one, an intermediate one with some residual order, and a completely randomized phase. Our results are applicable to general fitness landscapes.
[ { "created": "Fri, 14 Aug 2009 08:02:46 GMT", "version": "v1" } ]
2015-05-13
[ [ "Saakian", "David B.", "" ], [ "Biebricher", "Christof K.", "" ], [ "Hu", "Chin-Kun", "" ] ]
Using methods of statistical physics, we present rigorous theoretical calculations of Eigen's quasispecies theory with a truncated fitness landscape, which dramatically limits the available sequence space of a reproducing quasispecies. Depending on the mutation rates, we observe three phases: a selective one, an intermediate one with some residual order, and a completely randomized phase. Our results are applicable to the general case of fitness landscapes.
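The selective-to-random transition described above can be illustrated numerically with a Crow-Kimura-type mutation-selection model on Hamming classes; this is a toy sketch with made-up parameters (`L_seq`, `A`, `k`), not the paper's analytical treatment, but it uses the same kind of truncated landscape: fitness A for sequences within Hamming distance k of the master, zero beyond.

```python
import numpy as np

L_seq, A, k = 100, 10.0, 5   # toy values: genome length, peak fitness, truncation width

def surplus(mu):
    """Order parameter of a Crow-Kimura-type quasispecies model with a
    truncated landscape. Returns ~1 when the population is localized near
    the master sequence (selective phase) and ~0 when mutation randomizes it."""
    d = np.arange(L_seq + 1)                             # Hamming-distance classes
    M = np.diag(np.where(d <= k, A, 0.0) - mu * L_seq)   # replication minus total mutation
    M += np.diag(mu * (L_seq - d[:-1]), k=-1)            # class d -> d+1 (outward mutation)
    M += np.diag(mu * d[1:], k=1)                        # class d -> d-1 (back mutation)
    vals, vecs = np.linalg.eig(M)
    # Perron eigenvector of the largest eigenvalue = steady-state quasispecies
    p = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    p /= p.sum()
    return float(p @ (1 - 2 * d / L_seq))                # mean "surplus" magnetization
```

Scanning `mu` from small to large values moves the surplus from near 1 (ordered) through partial order to near 0 (randomized), mirroring the phase structure the abstract reports.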
0804.2055
Francisco J. Cao
M. Bier, F. J. Cao
How occasional backstepping can speed up a processive motor protein
LaTeX, 5 pages, 3 figures
null
10.1016/j.bpj.2008.12.620
null
q-bio.SC cond-mat.stat-mech physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fueled by the hydrolysis of ATP, the motor protein kinesin literally walks on two legs along the biopolymer microtubule. The number of accidental backsteps that kinesin takes appears to be much larger than what one would expect given the amount of free energy that ATP hydrolysis makes available. This is puzzling, as more than a billion years of natural selection should have optimized the motor protein for its speed and efficiency. But more backstepping allows for the production of more entropy. Such entropy production will make free energy available. With this additional free energy, the catalytic cycle of the kinesin can be sped up. We show how measured backstep percentages represent an optimum at which maximal net forward speed is achieved.
[ { "created": "Sun, 13 Apr 2008 10:46:12 GMT", "version": "v1" } ]
2009-11-13
[ [ "Bier", "M.", "" ], [ "Cao", "F. J.", "" ] ]
Fueled by the hydrolysis of ATP, the motor protein kinesin literally walks on two legs along the biopolymer microtubule. The number of accidental backsteps that kinesin takes appears to be much larger than what one would expect given the amount of free energy that ATP hydrolysis makes available. This is puzzling, as more than a billion years of natural selection should have optimized the motor protein for its speed and efficiency. But more backstepping allows for the production of more entropy. Such entropy production will make free energy available. With this additional free energy, the catalytic cycle of the kinesin can be sped up. We show how measured backstep percentages represent an optimum at which maximal net forward speed is achieved.
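The trade-off the abstract describes, where some backstepping produces entropy that speeds the catalytic cycle, can be caricatured in a toy model. This is NOT the authors' actual model: the coupling `alpha` between entropy production and cycle rate is invented here purely to show how an interior optimum in the backstep fraction can arise.

```python
import numpy as np

def net_speed(q, k0=1.0, alpha=2.0):
    """Toy net forward speed for backstep probability q.
    S(q) is the entropy (in k_B) generated per step by the random
    forward/backward choice; assume the cycle rate grows as exp(alpha*S),
    while the mean displacement per step shrinks as (1 - 2q)."""
    S = -q * np.log(q) - (1 - q) * np.log(1 - q)
    return k0 * np.exp(alpha * S) * (1 - 2 * q)

q = np.linspace(1e-4, 0.4999, 5000)
q_star = q[np.argmax(net_speed(q))]   # optimal backstep fraction in this toy
```

Because the entropy gain rises steeply from q = 0 while the displacement penalty is only linear, the maximum net speed sits at a small but strictly nonzero backstep fraction, qualitatively matching the paper's conclusion that the measured backstep percentage is an optimum.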
1402.1845
Michael B\"orsch
Samuel D. Bockenhauer, Thomas M. Duncan, W. E. Moerner, Michael Boersch
The regulatory switch of F1-ATPase studied by single-molecule FRET in the ABEL Trap
14 pages, 5 figures
null
10.1117/12.2042688
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
F1-ATPase is the soluble portion of the membrane-embedded enzyme FoF1-ATP synthase that catalyzes the production of adenosine triphosphate in eukaryotic and eubacterial cells. In reverse, the F1 part can also hydrolyze ATP quickly at three catalytic binding sites. Therefore, catalysis of 'non-productive' ATP hydrolysis by F1 (or FoF1) must be minimized in the cell. In bacteria, the epsilon subunit is thought to control and block ATP hydrolysis by mechanically inserting its C-terminus into the rotary motor region of F1. We investigate this proposed mechanism by labeling F1 specifically with two fluorophores to monitor the C-terminus of the epsilon subunit by F\"orster resonance energy transfer. Single F1 molecules are trapped in solution by an Anti-Brownian electrokinetic trap which keeps the FRET-labeled F1 in place for extended observation times of several hundred milliseconds, limited by photobleaching. FRET changes in single F1 and FRET histograms for different biochemical conditions are compared to evaluate the proposed regulatory mechanism.
[ { "created": "Sat, 8 Feb 2014 12:17:08 GMT", "version": "v1" } ]
2015-06-18
[ [ "Bockenhauer", "Samuel D.", "" ], [ "Duncan", "Thomas M.", "" ], [ "Moerner", "W. E.", "" ], [ "Boersch", "Michael", "" ] ]
F1-ATPase is the soluble portion of the membrane-embedded enzyme FoF1-ATP synthase that catalyzes the production of adenosine triphosphate in eukaryotic and eubacterial cells. In reverse, the F1 part can also hydrolyze ATP quickly at three catalytic binding sites. Therefore, catalysis of 'non-productive' ATP hydrolysis by F1 (or FoF1) must be minimized in the cell. In bacteria, the epsilon subunit is thought to control and block ATP hydrolysis by mechanically inserting its C-terminus into the rotary motor region of F1. We investigate this proposed mechanism by labeling F1 specifically with two fluorophores to monitor the C-terminus of the epsilon subunit by F\"orster resonance energy transfer. Single F1 molecules are trapped in solution by an Anti-Brownian electrokinetic trap which keeps the FRET-labeled F1 in place for extended observation times of several hundred milliseconds, limited by photobleaching. FRET changes in single F1 and FRET histograms for different biochemical conditions are compared to evaluate the proposed regulatory mechanism.
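The physical readout behind the experiment above is the standard Förster relation: transfer efficiency falls off with the sixth power of the donor-acceptor distance r relative to the Förster radius R0. The R0 value below is a generic placeholder, not the one for the dye pair used in the paper.

```python
def fret_efficiency(r_nm, r0_nm=5.0):
    """Foerster resonance energy transfer efficiency E = 1 / (1 + (r/R0)^6).
    r_nm: donor-acceptor distance; r0_nm: Foerster radius of the dye pair
    (generic placeholder value here)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)
```

At r = R0 exactly half the donor energy is transferred, and the steep sixth-power dependence is what lets a conformational change of the epsilon C-terminus, which alters r by a nanometer or so, show up as a clear shift in the single-molecule FRET histograms.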
1404.6790
Mikhail Ivanchenko Dr.
O.V. Bolkhovskaya, D.Yu. Zorin, M.V. Ivanchenko
Assessing T cell clonal size distribution: a non-parametric approach
13 pages, 3 figures, 2 tables
PLoS ONE 9(10): e108658
10.1371/journal.pone.0108658
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clonal structure of the human peripheral T-cell repertoire is shaped by a number of homeostatic mechanisms, including antigen presentation and cytokine and cell regulation. Its accurate tuning leads to a remarkable ability to combat pathogens in all their variety, while systemic failures may lead to severe consequences such as autoimmune diseases. Here we develop and make use of a non-parametric statistical approach to assess T-cell clonal size distributions from recent next-generation sequencing data. For 41 healthy individuals and a patient with ankylosing spondylitis who underwent treatment, we invariably find power-law scaling over several decades and, for the first time, calculate quantitatively meaningful values of the decay exponent. The exponent is much the same among healthy donors, significantly different for the autoimmune patient before therapy, and converges towards the typical value afterwards. We discuss implications of the findings for theoretical understanding and mathematical modeling of adaptive immunity.
[ { "created": "Sun, 27 Apr 2014 17:07:17 GMT", "version": "v1" }, { "created": "Thu, 21 Aug 2014 18:05:15 GMT", "version": "v2" } ]
2014-10-07
[ [ "Bolkhovskaya", "O. V.", "" ], [ "Zorin", "D. Yu.", "" ], [ "Ivanchenko", "M. V.", "" ] ]
Clonal structure of the human peripheral T-cell repertoire is shaped by a number of homeostatic mechanisms, including antigen presentation and cytokine and cell regulation. Its accurate tuning leads to a remarkable ability to combat pathogens in all their variety, while systemic failures may lead to severe consequences such as autoimmune diseases. Here we develop and make use of a non-parametric statistical approach to assess T-cell clonal size distributions from recent next-generation sequencing data. For 41 healthy individuals and a patient with ankylosing spondylitis who underwent treatment, we invariably find power-law scaling over several decades and, for the first time, calculate quantitatively meaningful values of the decay exponent. The exponent is much the same among healthy donors, significantly different for the autoimmune patient before therapy, and converges towards the typical value afterwards. We discuss implications of the findings for theoretical understanding and mathematical modeling of adaptive immunity.
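One standard non-parametric ingredient for estimating a power-law decay exponent from clone-size data is the maximum-likelihood (Hill) estimator, alpha_hat = 1 + n / sum(log(x_i / x_min)). The sketch below applies it to synthetic clone sizes drawn by inverse-CDF sampling; the paper's own procedure may differ in detail.

```python
import numpy as np

def powerlaw_mle(sizes, x_min):
    """Continuous maximum-likelihood estimate of the power-law exponent
    for all observations at or above the cutoff x_min."""
    x = np.asarray(sizes, dtype=float)
    x = x[x >= x_min]
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

# Synthetic check: draw power-law samples with a known exponent and recover it.
rng = np.random.default_rng(0)
alpha_true, x_min, n = 2.5, 1.0, 10_000
u = rng.random(n)
sizes = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))  # inverse-CDF sampling
alpha_hat = powerlaw_mle(sizes, x_min)
```

Unlike fitting a straight line to a log-log histogram, the MLE needs no binning, which matters for heavy-tailed clone-size data where the tail contains few counts per bin.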