Dataset schema (column names with value-length statistics, in record order):

- id: string, 9 to 13 characters
- submitter: string, 4 to 48 characters
- authors: string, 4 to 9.62k characters
- title: string, 4 to 343 characters
- comments: string, 2 to 480 characters
- journal-ref: string, 9 to 309 characters
- doi: string, 12 to 138 characters
- report-no: string, 277 distinct values
- categories: string, 8 to 87 characters
- license: string, 9 distinct values
- orig_abstract: string, 27 to 3.76k characters
- versions: list, 1 to 15 entries
- update_date: string, 10 characters
- authors_parsed: list, 1 to 147 entries
- abstract: string, 24 to 3.75k characters
id: 1203.6465
submitter: Viktor Wixler
authors: Sergey V. Chesnokov, Lina G. Chesnokov and Viktor Wixler
title: Phenomenon of irreducible genetic markers for TATAAA motifs in human chromosome 1
comments: 17 pages including 1 Figure and 6 Tables
journal-ref: null
doi: null
report-no: null
categories: q-bio.GN
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract / abstract (identical text): It is well known that the general transcription factors (GTF) specifically recognize correct TATA boxes, distinguishing them from many others. Employing the principles of determinacy analysis (mathematical theory of rules) we analyzed a fragment of human chromosome 1 DNA sequence and identified specific genetic markers (IG-markers = Irreducible Genetic markers) in the nearest proximity to TATAAA motifs. The IG-markers enable determining the exact location of any TATAAA motif within the investigated DNA fragment. Based on our data we hypothesize that the GTF recognize the «true» transcriptional start TATA box by means of IG-markers. The math method described here is universal and can be used to find IG-markers that will provide, like a global navigation satellite system, for the specific location of any distinct sequence motif within larger DNA sequence content.
versions: [ { "created": "Thu, 29 Mar 2012 08:21:44 GMT", "version": "v1" } ]
update_date: 2012-03-30
authors_parsed: [ [ "Chesnokov", "Sergey V.", "" ], [ "Chesnokov", "Lina G.", "" ], [ "Wixler", "Viktor", "" ] ]
id: 1609.04649
submitter: Joana Grah
authors: Joana Sarah Grah, Jennifer Alison Harrington, Siang Boon Koh, Jeremy Andrew Pike, Alexander Schreiner, Martin Burger, Carola-Bibiane Schönlieb, Stefanie Reichelt
title: Mathematical Imaging Methods for Mitosis Analysis in Live-Cell Phase Contrast Microscopy
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.QM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract / abstract (identical text): In this paper we propose a workflow to detect and track mitotic cells in time-lapse microscopy image sequences. In order to avoid the requirement for cell lines expressing fluorescent markers and the associated phototoxicity, phase contrast microscopy is often preferred over fluorescence microscopy in live-cell imaging. However, common specific image characteristics complicate image processing and impede use of standard methods. Nevertheless, automated analysis is desirable due to manual analysis being subjective, biased and extremely time-consuming for large data sets. Here, we present the following workflow based on mathematical imaging methods. In the first step, mitosis detection is performed by means of the circular Hough transform. The obtained circular contour subsequently serves as an initialisation for the tracking algorithm based on variational methods. It is sub-divided into two parts: in order to determine the beginning of the whole mitosis cycle, a backwards tracking procedure is performed. After that, the cell is tracked forwards in time until the end of mitosis. As a result, the average of mitosis duration and ratios of different cell fates (cell death, no division, division into two or more daughter cells) can be measured and statistics on cell morphologies can be obtained. All of the tools are featured in the user-friendly MATLAB® Graphical User Interface MitosisAnalyser.
versions: [ { "created": "Wed, 14 Sep 2016 16:38:04 GMT", "version": "v1" }, { "created": "Thu, 22 Sep 2016 10:10:55 GMT", "version": "v2" }, { "created": "Fri, 10 Feb 2017 11:50:01 GMT", "version": "v3" } ]
update_date: 2017-02-13
authors_parsed: [ [ "Grah", "Joana Sarah", "" ], [ "Harrington", "Jennifer Alison", "" ], [ "Koh", "Siang Boon", "" ], [ "Pike", "Jeremy Andrew", "" ], [ "Schreiner", "Alexander", "" ], [ "Burger", "Martin", "" ], [ "Schönlieb", "Carola-Bibiane", "" ], [ "Reichelt", "Stefanie", "" ] ]
id: 1801.09589
submitter: Chendi Wang
authors: Chendi Wang, Rafeef Abugharbieh
title: Coactivated Clique Based Multisource Overlapping Brain Subnetwork Extraction
comments: 18 pages, 5 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC cs.SI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract / abstract (identical text): Subnetwork extraction using community detection methods is commonly used to study the brain's modular structure. Recent studies indicated that certain brain regions are known to interact with multiple subnetworks. However, most existing methods are mainly for non-overlapping subnetwork extraction. In this paper, we present an approach for overlapping brain subnetwork extraction using cliques, which we defined as co-activated node groups performing multiple tasks. We proposed a multisource subnetwork extraction approach based on the co-activated clique, which (1) uses task co-activation and task connectivity strength information for clique identification, (2) automatically detects cliques of different sizes having more neuroscientific justifications, and (3) shares the subnetwork membership, derived from a fusion of rest and task data, among the nodes within a clique for overlapping subnetwork extraction. On real data, compared to the commonly used overlapping community detection techniques, we showed that our approach improved subnetwork extraction in terms of group-level and subject-wise reproducibility. We also showed that our multisource approach identified subnetwork overlaps within brain regions that matched well with hubs defined using functional and anatomical information, which enables us to study the interactions between the subnetworks and how hubs play their role in information flow across different subnetworks. We further demonstrated that the assignments of interacting/individual nodes using our approach correspond with the posterior probability derived independently from our multimodal random walker based approach.
versions: [ { "created": "Fri, 26 Jan 2018 18:56:23 GMT", "version": "v1" } ]
update_date: 2018-01-30
authors_parsed: [ [ "Wang", "Chendi", "" ], [ "Abugharbieh", "Rafeef", "" ] ]
id: 2009.08378
submitter: Timo C. Wunderlich
authors: Timo C. Wunderlich, Christian Pehle
title: Event-Based Backpropagation can compute Exact Gradients for Spiking Neural Networks
comments: null
journal-ref: null
doi: 10.1038/s41598-021-91786-z
report-no: null
categories: q-bio.NC cs.NE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract / abstract (identical text): Spiking neural networks combine analog computation with event-based communication using discrete spikes. While the impressive advances of deep learning are enabled by training non-spiking artificial neural networks using the backpropagation algorithm, applying this algorithm to spiking networks was previously hindered by the existence of discrete spike events and discontinuities. For the first time, this work derives the backpropagation algorithm for a continuous-time spiking neural network and a general loss function by applying the adjoint method together with the proper partial derivative jumps, allowing for backpropagation through discrete spike events without approximations. This algorithm, EventProp, backpropagates errors at spike times in order to compute the exact gradient in an event-based, temporally and spatially sparse fashion. We use gradients computed via EventProp to train networks on the Yin-Yang and MNIST datasets using either a spike time or voltage based loss function and report competitive performance. Our work supports the rigorous study of gradient-based learning algorithms in spiking neural networks and provides insights toward their implementation in novel brain-inspired hardware.
versions: [ { "created": "Thu, 17 Sep 2020 15:45:00 GMT", "version": "v1" }, { "created": "Mon, 21 Sep 2020 15:59:39 GMT", "version": "v2" }, { "created": "Mon, 31 May 2021 18:00:07 GMT", "version": "v3" } ]
update_date: 2021-06-22
authors_parsed: [ [ "Wunderlich", "Timo C.", "" ], [ "Pehle", "Christian", "" ] ]
id: 2304.02198
submitter: Bowen Jing
authors: Bowen Jing, Ezra Erives, Peter Pao-Huang, Gabriele Corso, Bonnie Berger, Tommi Jaakkola
title: EigenFold: Generative Protein Structure Prediction with Diffusion Models
comments: ICLR MLDD workshop 2023
journal-ref: null
doi: null
report-no: null
categories: q-bio.BM cs.LG physics.bio-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract / abstract (identical text): Protein structure prediction has reached revolutionary levels of accuracy on single structures, yet distributional modeling paradigms are needed to capture the conformational ensembles and flexibility that underlie biological function. Towards this goal, we develop EigenFold, a diffusion generative modeling framework for sampling a distribution of structures from a given protein sequence. We define a diffusion process that models the structure as a system of harmonic oscillators and which naturally induces a cascading-resolution generative process along the eigenmodes of the system. On recent CAMEO targets, EigenFold achieves a median TMScore of 0.84, while providing a more comprehensive picture of model uncertainty via the ensemble of sampled structures relative to existing methods. We then assess EigenFold's ability to model and predict conformational heterogeneity for fold-switching proteins and ligand-induced conformational change. Code is available at https://github.com/bjing2016/EigenFold.
versions: [ { "created": "Wed, 5 Apr 2023 02:46:13 GMT", "version": "v1" } ]
update_date: 2023-04-06
authors_parsed: [ [ "Jing", "Bowen", "" ], [ "Erives", "Ezra", "" ], [ "Pao-Huang", "Peter", "" ], [ "Corso", "Gabriele", "" ], [ "Berger", "Bonnie", "" ], [ "Jaakkola", "Tommi", "" ] ]
id: 2208.04275
submitter: Maxwell J. D. Ramstead
authors: Maxwell J. D Ramstead
title: One person's modus ponens...: Comment on "The Markov blanket trick: On the scope of the free energy principle and active inference" by Raja and colleagues (2021)
comments: null
journal-ref: null
doi: 10.1016/j.plrev.2022.11.001
report-no: null
categories: q-bio.NC physics.bio-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract / abstract (identical text): In this comment on "The Markov blanket trick: On the scope of the free energy principle and active inference" by Raja and colleagues (2021) in Physics of Life Reviews, I argue that the argument presented by the authors is valid; however, I claim that the argument contains a flawed premise, which undermines their conclusions. In addition, I argue that work on the FEP that has appeared since the target paper was published underwrites a cogent response to the issues that are raised by Raja and colleagues.
versions: [ { "created": "Mon, 8 Aug 2022 17:14:47 GMT", "version": "v1" } ]
update_date: 2022-11-30
authors_parsed: [ [ "Ramstead", "Maxwell J. D", "" ] ]
id: 1902.02463
submitter: Mike Steel Prof.
authors: Kristina Wicke and Mike Steel
title: Combinatorial properties of phylogenetic diversity indices
comments: 31 pages, 7 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE math.CO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract / abstract (identical text): Phylogenetic diversity indices provide a formal way to apportion 'evolutionary heritage' across species. Two natural diversity indices are Fair Proportion (FP) and Equal Splits (ES). FP is also called 'evolutionary distinctiveness' and, for rooted trees, is identical to the Shapley Value (SV), which arises from cooperative game theory. In this paper, we investigate the extent to which FP and ES can differ, characterise tree shapes on which the indices are identical, and study the equivalence of FP and SV and its implications in more detail. We also define and investigate analogues of these indices on unrooted trees (where SV was originally defined), including an index that is closely related to the Pauplin representation of phylogenetic diversity.
versions: [ { "created": "Thu, 7 Feb 2019 03:45:34 GMT", "version": "v1" }, { "created": "Tue, 16 Jul 2019 08:56:21 GMT", "version": "v2" }, { "created": "Wed, 2 Oct 2019 20:39:58 GMT", "version": "v3" } ]
update_date: 2019-10-04
authors_parsed: [ [ "Wicke", "Kristina", "" ], [ "Steel", "Mike", "" ] ]
id: 1106.3035
submitter: Chuan Xue
authors: Chuan Xue and Elena O. Budrene and Hans G. Othmer
title: Radial and spiral stream formation in Proteus mirabilis
comments: null
journal-ref: null
doi: 10.1371/journal.pcbi.1002332
report-no: null
categories: q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract / abstract (identical text): The enteric bacterium Proteus mirabilis, which is a pathogen that forms biofilms in vivo, can swarm over hard surfaces and form concentric ring patterns in colonies. Colony formation involves two distinct cell types: swarmer cells that dominate near the surface and the leading edge, and swimmer cells that prefer a less viscous medium, but the mechanisms underlying pattern formation are not understood. New experimental investigations reported here show that swimmer cells in the center of the colony stream inward toward the inoculation site and in the process form many complex patterns, including radial and spiral streams, in addition to concentric rings. These new observations suggest that swimmers are motile and that indirect interactions between them are essential in the pattern formation. To explain these observations we develop a hybrid cell-based model that incorporates a chemotactic response of swimmers to a chemical they produce. The model predicts that formation of radial streams can be explained as the modulation of the local attractant concentration by the cells, and that the chirality of the spiral streams can be predicted by incorporating a swimming bias of the cells near the surface of the substrate. The spatial patterns generated from the model are in qualitative agreement with the experimental observations.
versions: [ { "created": "Wed, 15 Jun 2011 17:41:32 GMT", "version": "v1" }, { "created": "Fri, 17 Jun 2011 21:45:55 GMT", "version": "v2" } ]
update_date: 2015-05-28
authors_parsed: [ [ "Xue", "Chuan", "" ], [ "Budrene", "Elena O.", "" ], [ "Othmer", "Hans G.", "" ] ]
id: 1510.08729
submitter: Gabriel Silva
authors: Gabriel A. Silva
title: The prevalence of small world networks explained by modeling the competing dynamics of local signaling events in geometric networks
comments: Updated version of the paper
journal-ref: null
doi: null
report-no: null
categories: q-bio.MN math.DS physics.soc-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract / abstract (identical text): Networks are ubiquitous throughout science and engineering. A number of methods, including some from our own group, have explored how one goes about computing or predicting the dynamics of networks given information about internal models of individual nodes and network connectivity, possibly with additional information provided by statistical or descriptive metrics that characterize the network. But what can be inferred about network dynamics when there is no knowledge or information about the internal model or dynamics of participating nodes? Here, we explore how connected subsets of nodes competitively interact in order to activate a common downstream node they connect into. We achieve this by assuming a simple set of rules borrowed from neurophysiology. The model we develop reflects a local process from which global network dynamics emerges. We call this model a competitive refractory dynamics model. It is derived from a consideration of spatial and temporal summation in biological neurons, whereby summating post synaptic potentials (PSPs) along the dendritic tree contribute towards the membrane potential at the initial segment reaching a threshold potential. We first show how the 'winning node' or set of 'winning' nodes that achieve activation of a downstream node is computable by the model. We then derive a formal definition of optimized network signaling within our framework. We define a ratio between the signaling latencies on the edges of the network and the internal time it takes individual nodes to process incoming signals. We show that an optimal ratio is one where the speed of information propagation between connected nodes does not exceed the internal dynamic time scale of the nodes. We then show how we can use these results to arrive at a unique interpretation for the prevalence of the small world network topology in natural and engineered systems.
versions: [ { "created": "Fri, 9 Oct 2015 06:50:08 GMT", "version": "v1" }, { "created": "Thu, 3 Mar 2016 18:11:27 GMT", "version": "v2" }, { "created": "Thu, 12 May 2016 23:20:21 GMT", "version": "v3" }, { "created": "Fri, 12 May 2017 03:22:27 GMT", "version": "v4" }, { "created": "Fri, 3 Nov 2017 03:09:12 GMT", "version": "v5" } ]
update_date: 2017-11-06
authors_parsed: [ [ "Silva", "Gabriel A.", "" ] ]
id: 0708.3599
submitter: Siebe van Albada
authors: Siebe B. van Albada and Pieter Rein ten Wolde
title: Enzyme localization can drastically affect signal amplification in signal transduction pathways
comments: PLoS Comp Biol, in press. 32 pages including 6 figures and supporting information
journal-ref: null
doi: 10.1371/journal.pcbi.0030195.eor
report-no: null
categories: q-bio.MN
license: null
orig_abstract / abstract (identical text): Push-pull networks are ubiquitous in signal transduction pathways in both prokaryotic and eukaryotic cells. They allow cells to strongly amplify signals via the mechanism of zero-order ultrasensitivity. In a push-pull network, two antagonistic enzymes control the activity of a protein by covalent modification. These enzymes are often uniformly distributed in the cytoplasm. They can, however, also be colocalized in space, for instance, near the pole of the cell. Moreover, it is increasingly recognized that these enzymes can also be spatially separated, leading to gradients of the active form of the messenger protein. Here, we investigate the consequences of the spatial distributions of the enzymes for the amplification properties of push-pull networks. Our calculations reveal that enzyme localization by itself can have a dramatic effect on the gain. The gain is maximized when the two enzymes are either uniformly distributed or colocalized in one region in the cell. Depending on the diffusion constants, however, the sharpness of the response can be strongly reduced when the enzymes are spatially separated. We discuss how our predictions could be tested experimentally.
versions: [ { "created": "Mon, 27 Aug 2007 14:40:17 GMT", "version": "v1" } ]
update_date: 2007-08-28
authors_parsed: [ [ "van Albada", "Siebe B.", "" ], [ "Wolde", "Pieter Rein ten", "" ] ]
id: 1507.00368
submitter: Piotr Słowiński
authors: Piotr Słowiński, Chao Zhai, Francesco Alderisio, Robin Salesse, Mathieu Gueugnon, Ludovic Marin, Benoit G. Bardy, Mario di Bernardo, and Krasimira Tsaneva-Atanasova
title: Dynamic similarity promotes interpersonal coordination in joint-action
comments: null
journal-ref: J. R. Soc. Interface 2016, 13, 20151093
doi: 10.1098/rsif.2015.1093
report-no: null
categories: q-bio.NC q-bio.QM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract / abstract (identical text): Human movement has been studied for decades and dynamic laws of motion that are common to all humans have been derived. Yet, every individual moves differently from everyone else (faster/slower, harder/smoother etc). We propose here an index of such variability, namely an individual motor signature (IMS) able to capture the subtle differences in the way each of us moves. We show that the IMS of a person is time-invariant and that it significantly differs from those of other individuals. This allows us to quantify the dynamic similarity, a measure of rapport between dynamics of different individuals' movements, and demonstrate that it facilitates coordination during interaction. We use our measure to confirm a key prediction of the theory of similarity that coordination between two individuals performing a joint-action task is higher if their motions share similar dynamic features. Furthermore, we use a virtual avatar driven by an interactive cognitive architecture based on feedback control theory to explore the effects of different kinematic features of the avatar motion on the coordination with human players.
versions: [ { "created": "Wed, 1 Jul 2015 20:41:07 GMT", "version": "v1" }, { "created": "Tue, 22 Dec 2015 08:16:07 GMT", "version": "v2" } ]
update_date: 2016-03-24
authors_parsed: [ [ "Słowiński", "Piotr", "" ], [ "Zhai", "Chao", "" ], [ "Alderisio", "Francesco", "" ], [ "Salesse", "Robin", "" ], [ "Gueugnon", "Mathieu", "" ], [ "Marin", "Ludovic", "" ], [ "Bardy", "Benoit G.", "" ], [ "di Bernardo", "Mario", "" ], [ "Tsaneva-Atanasova", "Krasimira", "" ] ]
2303.12651
Joshua Kaste
Joshua A.M. Kaste and Yair Shachar-Hill
Model Validation and Selection in Metabolic Flux Analysis and Flux Balance Analysis
23 pages, 2 figures, 1 table
null
null
null
q-bio.MN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
13C-Metabolic Flux Analysis (13C-MFA) and Flux Balance Analysis (FBA) are widely used to investigate the operation of biochemical networks in both biological and biotechnological research. Both of these methods use metabolic reaction network models of metabolism operating at steady state, so that reaction rates (fluxes) and the levels of metabolic intermediates are constrained to be invariant. They provide estimated (MFA) or predicted (FBA) values of the fluxes through the network in vivo, which cannot be measured directly. A number of approaches have been taken to test the reliability of estimates and predictions from constraint-based methods and to decide on and/or discriminate between alternative model architectures. Despite advances in other areas of the statistical evaluation of metabolic models, validation and model selection methods have been underappreciated and underexplored. We review the history and state of the art in constraint-based metabolic model validation and model selection. Applications and limitations of the X2-test of goodness-of-fit, the most widely used quantitative validation and selection approach in 13C-MFA, are discussed, and complementary and alternative forms of validation and selection are proposed. A combined model validation and selection framework for 13C-MFA incorporating metabolite pool size information that leverages new developments in the field is presented and advocated for. Finally, we discuss how the adoption of robust validation and selection procedures can enhance confidence in constraint-based modeling as a whole and ultimately facilitate more widespread use of FBA in biotechnology in particular.
[ { "created": "Wed, 22 Mar 2023 15:32:01 GMT", "version": "v1" } ]
2023-03-23
[ [ "Kaste", "Joshua A. M.", "" ], [ "Shachar-Hill", "Yair", "" ] ]
13C-Metabolic Flux Analysis (13C-MFA) and Flux Balance Analysis (FBA) are widely used to investigate the operation of biochemical networks in both biological and biotechnological research. Both of these methods use metabolic reaction network models of metabolism operating at steady state, so that reaction rates (fluxes) and the levels of metabolic intermediates are constrained to be invariant. They provide estimated (MFA) or predicted (FBA) values of the fluxes through the network in vivo, which cannot be measured directly. A number of approaches have been taken to test the reliability of estimates and predictions from constraint-based methods and to decide on and/or discriminate between alternative model architectures. Despite advances in other areas of the statistical evaluation of metabolic models, validation and model selection methods have been underappreciated and underexplored. We review the history and state of the art in constraint-based metabolic model validation and model selection. Applications and limitations of the X2-test of goodness-of-fit, the most widely used quantitative validation and selection approach in 13C-MFA, are discussed, and complementary and alternative forms of validation and selection are proposed. A combined model validation and selection framework for 13C-MFA incorporating metabolite pool size information that leverages new developments in the field is presented and advocated for. Finally, we discuss how the adoption of robust validation and selection procedures can enhance confidence in constraint-based modeling as a whole and ultimately facilitate more widespread use of FBA in biotechnology in particular.
1503.08527
Alexey Shipunov
Brandon Chrisman, Allison Rabe, Ranelle Ivens, Sarah Lopez, and Alexey Shipunov
The ecological impact of flooding: a study of tree damage
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/publicdomain/
The objective of this research was to identify factors affecting tree damage in the historical Minot flood of 2011. We hypothesized that tree height, identity, origin, and maximum water height affect the severity of damage sustained by a tree in a flood event. All these factors were significant but highly interactive. The results from this research can influence planting practices in valleys and other flood-prone areas to mitigate future damage.
[ { "created": "Mon, 30 Mar 2015 03:21:01 GMT", "version": "v1" } ]
2015-03-31
[ [ "Chrisman", "Brandon", "" ], [ "Rabe", "Allison", "" ], [ "Ivens", "Ranelle", "" ], [ "Lopez", "Sarah", "" ], [ "Shipunov", "Alexey", "" ] ]
The objective of this research was to identify factors affecting tree damage in the historical Minot flood of 2011. We hypothesized that tree height, identity, origin, and maximum water height affect the severity of damage sustained by a tree in a flood event. All these factors were significant but highly interactive. The results from this research can influence planting practices in valleys and other flood-prone areas to mitigate future damage.
1702.00360
Thomas Ouldridge
Thomas E. Ouldridge
The importance of thermodynamics for molecular systems, and the importance of molecular systems for thermodynamics
To appear in Nat. Comput. Special issue for DNA22
null
null
null
q-bio.MN cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Improved understanding of molecular systems has only emphasised the sophistication of networks within the cell. Simultaneously, the advance of nucleic acid nanotechnology, a platform within which reactions can be exquisitely controlled, has made the development of artificial architectures and devices possible. Vital to this progress has been a solid foundation in the thermodynamics of molecular systems. In this pedagogical review and perspective, I will discuss how thermodynamics determines both the overall potential of molecular networks, and the minute details of design. I will then argue that, in turn, the need to understand molecular systems is helping to drive the development of theories of thermodynamics at the microscopic scale.
[ { "created": "Wed, 1 Feb 2017 17:14:50 GMT", "version": "v1" }, { "created": "Mon, 2 Oct 2017 17:02:05 GMT", "version": "v2" } ]
2017-10-03
[ [ "Ouldridge", "Thomas E.", "" ] ]
Improved understanding of molecular systems has only emphasised the sophistication of networks within the cell. Simultaneously, the advance of nucleic acid nanotechnology, a platform within which reactions can be exquisitely controlled, has made the development of artificial architectures and devices possible. Vital to this progress has been a solid foundation in the thermodynamics of molecular systems. In this pedagogical review and perspective, I will discuss how thermodynamics determines both the overall potential of molecular networks, and the minute details of design. I will then argue that, in turn, the need to understand molecular systems is helping to drive the development of theories of thermodynamics at the microscopic scale.
2402.00207
Lucas Machado Moschen
Lucas Machado Moschen, Mar\'ia Soledad Aronna
Optimal vaccination strategies on networks and in metropolitan areas
29 pages, 23 figures
null
null
null
q-bio.PE math.OC q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
This study presents a mathematical model for optimal vaccination strategies in interconnected metropolitan areas, considering commuting patterns. It is a compartmental model with a vaccination rate for each city, acting as a control function. The commuting patterns are incorporated through a weighted adjacency matrix and a parameter that selects day and night periods. The optimal control problem is formulated to minimize a functional cost that balances the number of hospitalizations and vaccines, including restrictions of a weekly availability cap and an application capacity of vaccines per unit of time. The key findings of this work are bounds for the basic reproduction number, particularly in the case of a metropolitan area, and the study of the optimal control problem. Theoretical analysis and numerical simulations provide insights into disease dynamics and the effectiveness of control measures. The research highlights the importance of prioritizing vaccination in the capital to better control the disease spread, as we depicted in our numerical simulations. This model serves as a tool to improve resource allocation in epidemic control across metropolitan regions.
[ { "created": "Wed, 31 Jan 2024 22:09:22 GMT", "version": "v1" }, { "created": "Fri, 26 Apr 2024 15:03:50 GMT", "version": "v2" } ]
2024-04-29
[ [ "Moschen", "Lucas Machado", "" ], [ "Aronna", "María Soledad", "" ] ]
This study presents a mathematical model for optimal vaccination strategies in interconnected metropolitan areas, considering commuting patterns. It is a compartmental model with a vaccination rate for each city, acting as a control function. The commuting patterns are incorporated through a weighted adjacency matrix and a parameter that selects day and night periods. The optimal control problem is formulated to minimize a functional cost that balances the number of hospitalizations and vaccines, including restrictions of a weekly availability cap and an application capacity of vaccines per unit of time. The key findings of this work are bounds for the basic reproduction number, particularly in the case of a metropolitan area, and the study of the optimal control problem. Theoretical analysis and numerical simulations provide insights into disease dynamics and the effectiveness of control measures. The research highlights the importance of prioritizing vaccination in the capital to better control the disease spread, as we depicted in our numerical simulations. This model serves as a tool to improve resource allocation in epidemic control across metropolitan regions.
1002.1023
Michael B\"orsch
Michael Boersch
Targeting cytochrome C oxidase in mitochondria with Pt(II)-porphyrins for Photodynamic Therapy
11 pages, 5 figures
null
10.1117/12.841284
null
q-bio.BM physics.bio-ph q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mitochondria are the powerhouse of living cells, where the synthesis of the chemical "energy currency" adenosine triphosphate (ATP) occurs. Oxidative phosphorylation by a series of membrane protein complexes I to IV, that is, the electron transport chain, is the source of the electrochemical potential difference or proton motive force (PMF) of protons across the inner mitochondrial membrane. The PMF is required for ATP production by complex V of the electron transport chain, i.e. by FoF1-ATP synthase. Destroying cytochrome C oxidase (COX; complex IV) in Photodynamic Therapy (PDT) is achieved by the cationic photosensitizer Pt(II)-TMPyP. Electron microscopy revealed the disruption of the mitochondrial cristae as a primary step of PDT. Time-resolved phosphorescence measurements identified COX as the binding site for Pt(II)-TMPyP in living HeLa cells. As this photosensitizer competed with cytochrome C in binding to COX, destruction of COX might not only disturb ATP synthesis but could also expedite the release of cytochrome C to the cytosol, inducing apoptosis.
[ { "created": "Thu, 4 Feb 2010 15:43:01 GMT", "version": "v1" } ]
2015-05-18
[ [ "Boersch", "Michael", "" ] ]
Mitochondria are the powerhouse of living cells, where the synthesis of the chemical "energy currency" adenosine triphosphate (ATP) occurs. Oxidative phosphorylation by a series of membrane protein complexes I to IV, that is, the electron transport chain, is the source of the electrochemical potential difference or proton motive force (PMF) of protons across the inner mitochondrial membrane. The PMF is required for ATP production by complex V of the electron transport chain, i.e. by FoF1-ATP synthase. Destroying cytochrome C oxidase (COX; complex IV) in Photodynamic Therapy (PDT) is achieved by the cationic photosensitizer Pt(II)-TMPyP. Electron microscopy revealed the disruption of the mitochondrial cristae as a primary step of PDT. Time-resolved phosphorescence measurements identified COX as the binding site for Pt(II)-TMPyP in living HeLa cells. As this photosensitizer competed with cytochrome C in binding to COX, destruction of COX might not only disturb ATP synthesis but could also expedite the release of cytochrome C to the cytosol, inducing apoptosis.
1801.01823
Ulisse Ferrari
Ulisse Ferrari, Stephane Deny, Matthew Chalk, Gasper Tkacik, Olivier Marre, Thierry Mora
Separating intrinsic interactions from extrinsic correlations in a network of sensory neurons
null
Phys. Rev. E 98, 042410 (2018)
10.1103/PhysRevE.98.042410
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Correlations in sensory neural networks have both extrinsic and intrinsic origins. Extrinsic or stimulus correlations arise from shared inputs to the network, and thus depend strongly on the stimulus ensemble. Intrinsic or noise correlations reflect biophysical mechanisms of interactions between neurons, which are expected to be robust to changes of the stimulus ensemble. Despite the importance of this distinction for understanding how sensory networks encode information collectively, no method exists to reliably separate intrinsic interactions from extrinsic correlations in neural activity data, limiting our ability to build predictive models of the network response. In this paper we introduce a general strategy to infer {population models of interacting neurons that collectively encode stimulus information}. The key to disentangling intrinsic from extrinsic correlations is to infer the {couplings between neurons} separately from the encoding model, and to combine the two using corrections calculated in a mean-field approximation. We demonstrate the effectiveness of this approach on retinal recordings. The same coupling network is inferred from responses to radically different stimulus ensembles, showing that these couplings indeed reflect stimulus-independent interactions between neurons. The inferred model predicts accurately the collective response of retinal ganglion cell populations as a function of the stimulus.
[ { "created": "Fri, 5 Jan 2018 16:36:56 GMT", "version": "v1" }, { "created": "Thu, 22 Feb 2018 15:54:44 GMT", "version": "v2" } ]
2018-11-05
[ [ "Ferrari", "Ulisse", "" ], [ "Deny", "Stephane", "" ], [ "Chalk", "Matthew", "" ], [ "Tkacik", "Gasper", "" ], [ "Marre", "Olivier", "" ], [ "Mora", "Thierry", "" ] ]
Correlations in sensory neural networks have both extrinsic and intrinsic origins. Extrinsic or stimulus correlations arise from shared inputs to the network, and thus depend strongly on the stimulus ensemble. Intrinsic or noise correlations reflect biophysical mechanisms of interactions between neurons, which are expected to be robust to changes of the stimulus ensemble. Despite the importance of this distinction for understanding how sensory networks encode information collectively, no method exists to reliably separate intrinsic interactions from extrinsic correlations in neural activity data, limiting our ability to build predictive models of the network response. In this paper we introduce a general strategy to infer {population models of interacting neurons that collectively encode stimulus information}. The key to disentangling intrinsic from extrinsic correlations is to infer the {couplings between neurons} separately from the encoding model, and to combine the two using corrections calculated in a mean-field approximation. We demonstrate the effectiveness of this approach on retinal recordings. The same coupling network is inferred from responses to radically different stimulus ensembles, showing that these couplings indeed reflect stimulus-independent interactions between neurons. The inferred model predicts accurately the collective response of retinal ganglion cell populations as a function of the stimulus.
2211.05658
Mo Wang
Mo Wang, Kexin Lou, Zeming Liu, Pengfei Wei, Quanying Liu
Multi-objective optimization via evolutionary algorithm (MOVEA) for high-definition transcranial electrical stimulation of the human brain
null
NeuroImage, Volume 280, 2023
10.1016/j.neuroimage.2023.120348
null
q-bio.QM cs.NE q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Designing a transcranial electrical stimulation (TES) strategy requires considering multiple objectives, such as intensity in the target area, focality, stimulation depth, and avoidance zone, which are often mutually exclusive. A computational framework for optimizing different strategies and comparing trade-offs between these objectives is currently lacking. In this paper, we propose a general framework called multi-objective optimization via evolutionary algorithms (MOVEA) to address the non-convex optimization problem in designing TES strategies without predefined direction. MOVEA enables simultaneous optimization of multiple targets through Pareto optimization, generating a Pareto front after a single run without manual weight adjustment and allowing easy expansion to more targets. This Pareto front consists of optimal solutions that meet various requirements while respecting trade-off relationships between conflicting objectives such as intensity and focality. MOVEA is versatile and suitable for both transcranial alternating current stimulation (tACS) and transcranial temporal interference stimulation (tTIS) based on high definition (HD) and two-pair systems. We performed a comprehensive comparison between tACS and tTIS in terms of intensity, focality, and steerability for targets at different depths. MOVEA facilitates the optimization of TES based on specific objectives and constraints, advancing tTIS and tACS-based neuromodulation in understanding the causal relationship between brain regions and cognitive functions and in treating diseases. The code for MOVEA is available at https://github.com/ncclabsustech/MOVEA.
[ { "created": "Thu, 10 Nov 2022 15:42:06 GMT", "version": "v1" }, { "created": "Mon, 3 Apr 2023 19:55:07 GMT", "version": "v2" } ]
2023-09-13
[ [ "Wang", "Mo", "" ], [ "Lou", "Kexin", "" ], [ "Liu", "Zeming", "" ], [ "Wei", "Pengfei", "" ], [ "Liu", "Quanying", "" ] ]
Designing a transcranial electrical stimulation (TES) strategy requires considering multiple objectives, such as intensity in the target area, focality, stimulation depth, and avoidance zone, which are often mutually exclusive. A computational framework for optimizing different strategies and comparing trade-offs between these objectives is currently lacking. In this paper, we propose a general framework called multi-objective optimization via evolutionary algorithms (MOVEA) to address the non-convex optimization problem in designing TES strategies without predefined direction. MOVEA enables simultaneous optimization of multiple targets through Pareto optimization, generating a Pareto front after a single run without manual weight adjustment and allowing easy expansion to more targets. This Pareto front consists of optimal solutions that meet various requirements while respecting trade-off relationships between conflicting objectives such as intensity and focality. MOVEA is versatile and suitable for both transcranial alternating current stimulation (tACS) and transcranial temporal interference stimulation (tTIS) based on high definition (HD) and two-pair systems. We performed a comprehensive comparison between tACS and tTIS in terms of intensity, focality, and steerability for targets at different depths. MOVEA facilitates the optimization of TES based on specific objectives and constraints, advancing tTIS and tACS-based neuromodulation in understanding the causal relationship between brain regions and cognitive functions and in treating diseases. The code for MOVEA is available at https://github.com/ncclabsustech/MOVEA.
q-bio/0703048
Azi Lipshtat
Azi Lipshtat
An "All Possible Steps" Approach to the Accelerated Use of Gillespie's Algorithm
Accepted for publication at the Journal of Chemical Physics. 19 pages, including 2 Tables and 4 Figures
null
10.1063/1.2730507
null
q-bio.QM physics.comp-ph
null
Many physical and biological processes are stochastic in nature. Computational models and simulations of such processes are a mathematical and computational challenge. The basic stochastic simulation algorithm was published by D. Gillespie about three decades ago [D.T. Gillespie, J. Phys. Chem. {\bf 81}, 2340, (1977)]. Since then, intensive work has been done to make the algorithm more efficient in terms of running time. All accelerated versions of the algorithm are aimed at minimizing the running time required to produce a stochastic trajectory in state space. In these simulations, a necessary condition for reliable statistics is averaging over a large number of simulations. In this study I present a new accelerating approach which does not alter the stochastic algorithm, but reduces the number of required runs. By analysis of collected data I demonstrate high precision levels with fewer simulations. Moreover, the suggested approach provides a good estimation of statistical error, which may serve as a tool for determining the number of required runs.
[ { "created": "Thu, 22 Mar 2007 12:57:11 GMT", "version": "v1" } ]
2009-11-13
[ [ "Lipshtat", "Azi", "" ] ]
Many physical and biological processes are stochastic in nature. Computational models and simulations of such processes are a mathematical and computational challenge. The basic stochastic simulation algorithm was published by D. Gillespie about three decades ago [D.T. Gillespie, J. Phys. Chem. {\bf 81}, 2340, (1977)]. Since then, intensive work has been done to make the algorithm more efficient in terms of running time. All accelerated versions of the algorithm are aimed at minimizing the running time required to produce a stochastic trajectory in state space. In these simulations, a necessary condition for reliable statistics is averaging over a large number of simulations. In this study I present a new accelerating approach which does not alter the stochastic algorithm, but reduces the number of required runs. By analysis of collected data I demonstrate high precision levels with fewer simulations. Moreover, the suggested approach provides a good estimation of statistical error, which may serve as a tool for determining the number of required runs.
2307.04052
Tinglin Huang
Tinglin Huang, Ziniu Hu, Rex Ying
Learning to Group Auxiliary Datasets for Molecule
Accepted at NeurIPS 2023, Camera Ready Version
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The limited availability of annotations in small molecule datasets presents a challenge to machine learning models. To address this, one common strategy is to collaborate with additional auxiliary datasets. However, having more data does not always guarantee improvements. Negative transfer can occur when the knowledge in the target dataset differs or contradicts that of the auxiliary molecule datasets. In light of this, identifying the auxiliary molecule datasets that can benefit the target dataset when jointly trained remains a critical and unresolved problem. Through an empirical analysis, we observe that combining graph structure similarity and task similarity can serve as a more reliable indicator for identifying high-affinity auxiliary datasets. Motivated by this insight, we propose MolGroup, which separates the dataset affinity into task and structure affinity to predict the potential benefits of each auxiliary molecule dataset. MolGroup achieves this by utilizing a routing mechanism optimized through a bi-level optimization framework. Empowered by the meta gradient, the routing mechanism is optimized toward maximizing the target dataset's performance and quantifies the affinity as the gating score. As a result, MolGroup is capable of predicting the optimal combination of auxiliary datasets for each target dataset. Our extensive experiments demonstrate the efficiency and effectiveness of MolGroup, showing an average improvement of 4.41%/3.47% for GIN/Graphormer trained with the group of molecule datasets selected by MolGroup on 11 target molecule datasets.
[ { "created": "Sat, 8 Jul 2023 22:02:22 GMT", "version": "v1" }, { "created": "Wed, 8 Nov 2023 23:03:35 GMT", "version": "v2" } ]
2023-11-10
[ [ "Huang", "Tinglin", "" ], [ "Hu", "Ziniu", "" ], [ "Ying", "Rex", "" ] ]
The limited availability of annotations in small molecule datasets presents a challenge to machine learning models. To address this, one common strategy is to collaborate with additional auxiliary datasets. However, having more data does not always guarantee improvements. Negative transfer can occur when the knowledge in the target dataset differs or contradicts that of the auxiliary molecule datasets. In light of this, identifying the auxiliary molecule datasets that can benefit the target dataset when jointly trained remains a critical and unresolved problem. Through an empirical analysis, we observe that combining graph structure similarity and task similarity can serve as a more reliable indicator for identifying high-affinity auxiliary datasets. Motivated by this insight, we propose MolGroup, which separates the dataset affinity into task and structure affinity to predict the potential benefits of each auxiliary molecule dataset. MolGroup achieves this by utilizing a routing mechanism optimized through a bi-level optimization framework. Empowered by the meta gradient, the routing mechanism is optimized toward maximizing the target dataset's performance and quantifies the affinity as the gating score. As a result, MolGroup is capable of predicting the optimal combination of auxiliary datasets for each target dataset. Our extensive experiments demonstrate the efficiency and effectiveness of MolGroup, showing an average improvement of 4.41%/3.47% for GIN/Graphormer trained with the group of molecule datasets selected by MolGroup on 11 target molecule datasets.
1309.0936
Namiko Mitarai
Namiko Mitarai and Steen Pedersen
Control of ribosome traffic by position-dependent choice of synonymous codons
12 pages, 6 Figures. This is an author-created, un-copyedited version of an article accepted for publication in Physical Biology. IOP Publishing Ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it
Phys. Biol. 10 (2013) 056011
10.1088/1478-3975/10/5/056011
null
q-bio.SC q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Messenger RNA encodes a sequence of amino acids by using codons. For most amino acids there are multiple synonymous codons that can encode the amino acid. The translation speed can vary from one codon to another, thus there is room for changing the ribosome speed while keeping the amino acid sequence and hence the resulting protein. Recently, it has been noticed that the choice of the synonymous codon, via the resulting distribution of slow- and fast-translated codons, affects not only the average speed of one ribosome translating the messenger RNA (mRNA) but might also have an effect on nearby ribosomes by affecting the appearance of "traffic jams" where multiple ribosomes collide and form queues. To test this "context effect" further, we here investigate the effect of the sequence of synonymous codons on the ribosome traffic by using a ribosome traffic model with codon-dependent rates, estimated from experiments. We compare the ribosome traffic on wild-type sequences and sequences where the synonymous codons were swapped randomly. By simulating translation of 87 genes, we demonstrate that the wild-type sequences, especially those with a high bias in codon usage, tend to have the ability to reduce ribosome collisions, hence optimizing the cellular investment in the translation apparatus. The magnitude of such reduction of the translation time might have a significant impact on the cellular growth rate and thereby have importance for the survival of the species.
[ { "created": "Wed, 4 Sep 2013 08:05:06 GMT", "version": "v1" } ]
2013-10-10
[ [ "Mitarai", "Namiko", "" ], [ "Pedersen", "Steen", "" ] ]
Messenger RNA encodes a sequence of amino acids by using codons. For most amino acids there are multiple synonymous codons that can encode the amino acid. The translation speed can vary from one codon to another, thus there is room for changing the ribosome speed while keeping the amino acid sequence and hence the resulting protein. Recently, it has been noticed that the choice of the synonymous codon, via the resulting distribution of slow- and fast-translated codons, affects not only the average speed of one ribosome translating the messenger RNA (mRNA) but might also have an effect on nearby ribosomes by affecting the appearance of "traffic jams" where multiple ribosomes collide and form queues. To test this "context effect" further, we here investigate the effect of the sequence of synonymous codons on the ribosome traffic by using a ribosome traffic model with codon-dependent rates, estimated from experiments. We compare the ribosome traffic on wild-type sequences and sequences where the synonymous codons were swapped randomly. By simulating translation of 87 genes, we demonstrate that the wild-type sequences, especially those with a high bias in codon usage, tend to have the ability to reduce ribosome collisions, hence optimizing the cellular investment in the translation apparatus. The magnitude of such reduction of the translation time might have a significant impact on the cellular growth rate and thereby have importance for the survival of the species.
2206.12997
Corey Keller
Juha Gogulski, Jessica M. Ross, Austin Talbot, Christopher Cline, Francesco L Donati, Saachi Munot, Naryeong Kim, Ciara Gibbs, Nikita Bastin, Jessica Yang, Christopher B. Minasi, Manjima Sarkar, Jade Truong, Corey J Keller
Personalized rTMS for Depression: A Review
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Personalized treatments are gaining momentum across all fields of medicine. Precision medicine can be applied to neuromodulatory techniques, where focused brain stimulation treatments such as repetitive transcranial magnetic stimulation (rTMS) are used to modulate brain circuits and alleviate clinical symptoms. rTMS is well-tolerated and clinically effective for treatment-resistant depression (TRD) and other neuropsychiatric disorders. However, despite its wide stimulation parameter space (location, angle, pattern, frequency, and intensity can be adjusted), rTMS is currently applied in a one-size-fits-all manner, potentially contributing to its suboptimal clinical response (~50%). In this review, we examine components of rTMS that can be optimized to account for inter-individual variability in neural function and anatomy. We discuss current treatment options for TRD, the neural mechanisms thought to underlie treatment, differences in FDA-cleared devices, targeting strategies, stimulation parameter selection, and adaptive closed-loop rTMS to improve treatment outcomes. We suggest that better understanding of the wide and modifiable parameter space of rTMS will greatly improve clinical outcome.
[ { "created": "Mon, 27 Jun 2022 00:04:07 GMT", "version": "v1" } ]
2022-07-20
[ [ "Gogulski", "Juha", "" ], [ "Ross", "Jessica M.", "" ], [ "Talbot", "Austin", "" ], [ "Cline", "Christopher", "" ], [ "Donati", "Francesco L", "" ], [ "Munot", "Saachi", "" ], [ "Kim", "Naryeong", "" ], [ "Gibbs", "Ciara", "" ], [ "Bastin", "Nikita", "" ], [ "Yang", "Jessica", "" ], [ "Minasi", "Christopher B.", "" ], [ "Sarkar", "Manjima", "" ], [ "Truong", "Jade", "" ], [ "Keller", "Corey J", "" ] ]
Personalized treatments are gaining momentum across all fields of medicine. Precision medicine can be applied to neuromodulatory techniques, where focused brain stimulation treatments such as repetitive transcranial magnetic stimulation (rTMS) are used to modulate brain circuits and alleviate clinical symptoms. rTMS is well-tolerated and clinically effective for treatment-resistant depression (TRD) and other neuropsychiatric disorders. However, despite its wide stimulation parameter space (location, angle, pattern, frequency, and intensity can be adjusted), rTMS is currently applied in a one-size-fits-all manner, potentially contributing to its suboptimal clinical response (~50%). In this review, we examine components of rTMS that can be optimized to account for inter-individual variability in neural function and anatomy. We discuss current treatment options for TRD, the neural mechanisms thought to underlie treatment, differences in FDA-cleared devices, targeting strategies, stimulation parameter selection, and adaptive closed-loop rTMS to improve treatment outcomes. We suggest that better understanding of the wide and modifiable parameter space of rTMS will greatly improve clinical outcome.
2105.06036
Americo Cunha Jr
Eber Dantas, Michel Tosin, Americo Cunha Jr
Calibration of a SEIR-SEI epidemic model to describe the Zika virus outbreak in Brazil
null
Applied Mathematics and Computation, vol. 338, pp. 249-259, 2018
10.1016/j.amc.2018.06.024
null
q-bio.PE cs.NA math.DS math.NA math.OC stat.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple instances of Zika virus epidemic have been reported around the world in the last two decades, turning the related illness into an international concern. In this context the use of mathematical models for epidemics is of great importance, since they are useful tools to study the underlying outbreak numbers and allow one to test the effectiveness of different strategies used to combat the associated diseases. This work deals with the development and calibration of an epidemic model to describe the 2016 outbreak of Zika virus in Brazil. A system of 8 differential equations with 8 parameters is employed to model the evolution of the infection through two populations. Nominal values for the model parameters are estimated from the literature. An inverse problem is formulated and solved by comparing the system response to real data from the outbreak. The calibrated results present realistic parameters and return reasonable descriptions, with the curve shape similar to the outbreak evolution and peak value close to the highest number of infected people during 2016. Considerations about the lack of data for some initial conditions are also made through an analysis of the response behavior as their values change.
[ { "created": "Thu, 13 May 2021 01:51:20 GMT", "version": "v1" } ]
2021-05-14
[ [ "Dantas", "Eber", "" ], [ "Tosin", "Michel", "" ], [ "Cunha", "Americo", "Jr" ] ]
Multiple instances of Zika virus epidemic have been reported around the world in the last two decades, turning the related illness into an international concern. In this context the use of mathematical models for epidemics is of great importance, since they are useful tools to study the underlying outbreak numbers and allow one to test the effectiveness of different strategies used to combat the associated diseases. This work deals with the development and calibration of an epidemic model to describe the 2016 outbreak of Zika virus in Brazil. A system of 8 differential equations with 8 parameters is employed to model the evolution of the infection through two populations. Nominal values for the model parameters are estimated from the literature. An inverse problem is formulated and solved by comparing the system response to real data from the outbreak. The calibrated results present realistic parameters and return reasonable descriptions, with the curve shape similar to the outbreak evolution and peak value close to the highest number of infected people during 2016. Considerations about the lack of data for some initial conditions are also made through an analysis of the response behavior as their values change.
q-bio/0501018
Peter F. Arndt
Peter F. Arndt and Terence Hwa
Identification and Measurement of Neighbor Dependent Nucleotide Substitution Processes
15 pages, 3 figures
null
null
null
q-bio.GN
null
The presence of neighbor dependencies generates a specific pattern of dinucleotide frequencies in all organisms. In particular, the CpG-methylation-deamination process is the predominant substitution process in vertebrates and needs to be incorporated into a more realistic model for nucleotide substitutions. Based on a general framework of nucleotide substitutions we develop a method that is able to identify the most relevant neighbor dependent substitution processes, measure their strength, and judge their importance for inclusion in the modeling. Starting from a model for neighbor independent nucleotide substitution we successively add neighbor dependent substitution processes in the order of their ability to increase the likelihood of the model describing given data. The analysis of neighbor dependent nucleotide substitutions in human, zebrafish and fruit fly is presented. A web server to perform the presented analysis is publicly available.
[ { "created": "Thu, 13 Jan 2005 12:22:47 GMT", "version": "v1" } ]
2007-05-23
[ [ "Arndt", "Peter F.", "" ], [ "Hwa", "Terence", "" ] ]
The presence of neighbor dependencies generates a specific pattern of dinucleotide frequencies in all organisms. In particular, the CpG-methylation-deamination process is the predominant substitution process in vertebrates and needs to be incorporated into a more realistic model for nucleotide substitutions. Based on a general framework of nucleotide substitutions we develop a method that is able to identify the most relevant neighbor dependent substitution processes, measure their strength, and judge their importance for inclusion in the modeling. Starting from a model for neighbor independent nucleotide substitution we successively add neighbor dependent substitution processes in the order of their ability to increase the likelihood of the model describing given data. The analysis of neighbor dependent nucleotide substitutions in human, zebrafish and fruit fly is presented. A web server to perform the presented analysis is publicly available.
1902.10168
Leonardo Pellegrina
Leonardo Pellegrina, Cinzia Pizzi, Fabio Vandin
Fast Approximation of Frequent $k$-mers and Applications to Metagenomics
Accepted for RECOMB 2019
null
null
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating the abundances of all $k$-mers in a set of biological sequences is a fundamental and challenging problem with many applications in biological analysis. While several methods have been designed for the exact or approximate solution of this problem, they all require to process the entire dataset, which can be extremely expensive for high-throughput sequencing datasets. While in some applications it is crucial to estimate all $k$-mers and their abundances, in other situations reporting only frequent $k$-mers, that appear with relatively high frequency in a dataset, may suffice. This is the case, for example, in the computation of $k$-mers' abundance-based distances among datasets of reads, commonly used in metagenomic analyses. In this work, we develop, analyze, and test a sampling-based approach, called SAKEIMA, to approximate the frequent $k$-mers and their frequencies in a high-throughput sequencing dataset while providing rigorous guarantees on the quality of the approximation. SAKEIMA employs an advanced sampling scheme and we show how the characterization of the VC dimension, a core concept from statistical learning theory, of a properly defined set of functions leads to practical bounds on the sample size required for a rigorous approximation. Our experimental evaluation shows that SAKEIMA allows one to rigorously approximate frequent $k$-mers by processing only a fraction of a dataset and that the frequencies estimated by SAKEIMA lead to accurate estimates of $k$-mer based distances between high-throughput sequencing datasets. Overall, SAKEIMA is an efficient and rigorous tool to estimate $k$-mer abundances providing significant speed-ups in the analysis of large sequencing datasets.
[ { "created": "Tue, 26 Feb 2019 19:09:05 GMT", "version": "v1" } ]
2019-02-28
[ [ "Pellegrina", "Leonardo", "" ], [ "Pizzi", "Cinzia", "" ], [ "Vandin", "Fabio", "" ] ]
Estimating the abundances of all $k$-mers in a set of biological sequences is a fundamental and challenging problem with many applications in biological analysis. While several methods have been designed for the exact or approximate solution of this problem, they all require to process the entire dataset, which can be extremely expensive for high-throughput sequencing datasets. While in some applications it is crucial to estimate all $k$-mers and their abundances, in other situations reporting only frequent $k$-mers, that appear with relatively high frequency in a dataset, may suffice. This is the case, for example, in the computation of $k$-mers' abundance-based distances among datasets of reads, commonly used in metagenomic analyses. In this work, we develop, analyze, and test a sampling-based approach, called SAKEIMA, to approximate the frequent $k$-mers and their frequencies in a high-throughput sequencing dataset while providing rigorous guarantees on the quality of the approximation. SAKEIMA employs an advanced sampling scheme and we show how the characterization of the VC dimension, a core concept from statistical learning theory, of a properly defined set of functions leads to practical bounds on the sample size required for a rigorous approximation. Our experimental evaluation shows that SAKEIMA allows one to rigorously approximate frequent $k$-mers by processing only a fraction of a dataset and that the frequencies estimated by SAKEIMA lead to accurate estimates of $k$-mer based distances between high-throughput sequencing datasets. Overall, SAKEIMA is an efficient and rigorous tool to estimate $k$-mer abundances providing significant speed-ups in the analysis of large sequencing datasets.
2309.13565
Joydeb Gomasta Mr.
Hasina Sultana, Sharmila Rani Mallick, Jahidul Hassan, Joydeb Gomasta, Md. Humayun Kabir, Md. Sakibul Alam Sakib, Mahmuda Hossen, Muhammad Mustakim Billah, Emrul Kayesh
Nutritional composition and bioactive compounds of mini watermelon genotypes in Bangladesh
22 pages, 6 tables, 3 figures
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Given the present rising trends in changing lifestyle and consumption patterns, watermelon production has shifted from big to small-sized fruits having desirable quality attributes. Hence, analyses of fruit quality traits of mini watermelon are crucial to develop improved cultivars with enhanced nutritional compositions, consumer-preferred traits and extended storage life. In this context, fruit morphological and nutritional attributes of five mini watermelon genotypes namely BARI watermelon 1 (W1), BARI watermelon 2 (W2), L-32468 (W3), L-32236 (W4) and L-32394 (W5) were evaluated to appraise promising genotypes with better fruit quality. The evaluated genotypes expressed different levels of diversity for fruit physical qualitative traits including differences in shape, rind and flesh color and texture. The study also revealed significant variability among the genotypes regarding all observed fruit morphological and nutritional aspects as well as bioactive compounds. Among the studied genotypes, W1 stood out with the highest TSS as well as rind vitamin C and total phenolic content accompanied by higher fruit weight and thick rind. On the other hand, the W3 genotype featured a higher amount of beta carotene, total phenolic and flavonoid content in its flesh along with rind enriched with beta carotene and minerals. However, a comparatively higher amount of sugar and total flavonoid content was recorded in the rind of the W5 genotype. Therefore, W1 and W3 could be exploited for table purposes and used in breeding programs to develop mini watermelon cultivars with more attractive fruits in terms of quality acceptance and nutritional value in Bangladesh. Furthermore, rind of BARI watermelon 1 and L-32394 could be considered as a potential cheap source of bioactive compounds to be used for dietary and industrial purposes, which would decrease the solid waste in the environment.
[ { "created": "Sun, 24 Sep 2023 06:43:00 GMT", "version": "v1" } ]
2023-09-26
[ [ "Sultana", "Hasina", "" ], [ "Mallick", "Sharmila Rani", "" ], [ "Hassan", "Jahidul", "" ], [ "Gomasta", "Joydeb", "" ], [ "Kabir", "Md. Humayun", "" ], [ "Sakib", "Md. Sakibul Alam", "" ], [ "Hossen", "Mahmuda", "" ], [ "Billah", "Muhammad Mustakim", "" ], [ "Kayesh", "Emrul", "" ] ]
Given the present rising trends in changing lifestyle and consumption patterns, watermelon production has shifted from big to small-sized fruits having desirable quality attributes. Hence, analyses of fruit quality traits of mini watermelon are crucial to develop improved cultivars with enhanced nutritional compositions, consumer-preferred traits and extended storage life. In this context, fruit morphological and nutritional attributes of five mini watermelon genotypes namely BARI watermelon 1 (W1), BARI watermelon 2 (W2), L-32468 (W3), L-32236 (W4) and L-32394 (W5) were evaluated to appraise promising genotypes with better fruit quality. The evaluated genotypes expressed different levels of diversity for fruit physical qualitative traits including differences in shape, rind and flesh color and texture. The study also revealed significant variability among the genotypes regarding all observed fruit morphological and nutritional aspects as well as bioactive compounds. Among the studied genotypes, W1 stood out with the highest TSS as well as rind vitamin C and total phenolic content accompanied by higher fruit weight and thick rind. On the other hand, the W3 genotype featured a higher amount of beta carotene, total phenolic and flavonoid content in its flesh along with rind enriched with beta carotene and minerals. However, a comparatively higher amount of sugar and total flavonoid content was recorded in the rind of the W5 genotype. Therefore, W1 and W3 could be exploited for table purposes and used in breeding programs to develop mini watermelon cultivars with more attractive fruits in terms of quality acceptance and nutritional value in Bangladesh. Furthermore, rind of BARI watermelon 1 and L-32394 could be considered as a potential cheap source of bioactive compounds to be used for dietary and industrial purposes, which would decrease the solid waste in the environment.
1403.7104
Brian Williams Dr
Brian Williams
Elimination of HIV in South Africa through expanded access to antiretroviral therapy: Cautions, caveats and the importance of parsimony
Two pages. One figure embedded in text
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a recent article Hontelez and colleagues investigate the prospects for elimination of HIV in South Africa through expanded access to antiretroviral therapy (ART) using STDSIM, a micro-simulation model. One of the first published models to suggest that expanded access to ART could lead to the elimination of HIV, referred to by the authors as the Granich Model, was developed and implemented by the present author. The notion that expanded access to ART could lead to the end of the AIDS epidemic gave rise to considerable interest and debate and remains contentious. In considering this notion Hontelez et al. start by stripping down STDSIM to a simple model that is equivalent to the model developed by the present author but is a stochastic event-driven model. Hontelez and colleagues then reintroduce levels of complexity to explore ways in which the model structure affects the results. In contrast to our earlier conclusions Hontelez and colleagues conclude that universal voluntary counselling and testing with immediate ART at 90% coverage should result in the elimination of HIV but would take three times longer than predicted by the model developed by the present author. Hontelez et al. suggest that the current scale-up of ART at CD4 cell counts less than 350 cells/microL will lead to elimination of HIV in 30 years. I disagree with both claims and believe that their more complex models rely on unwarranted and unsubstantiated assumptions.
[ { "created": "Wed, 26 Mar 2014 04:45:57 GMT", "version": "v1" } ]
2014-03-28
[ [ "Williams", "Brian", "" ] ]
In a recent article Hontelez and colleagues investigate the prospects for elimination of HIV in South Africa through expanded access to antiretroviral therapy (ART) using STDSIM, a micro-simulation model. One of the first published models to suggest that expanded access to ART could lead to the elimination of HIV, referred to by the authors as the Granich Model, was developed and implemented by the present author. The notion that expanded access to ART could lead to the end of the AIDS epidemic gave rise to considerable interest and debate and remains contentious. In considering this notion Hontelez et al. start by stripping down STDSIM to a simple model that is equivalent to the model developed by the present author but is a stochastic event-driven model. Hontelez and colleagues then reintroduce levels of complexity to explore ways in which the model structure affects the results. In contrast to our earlier conclusions Hontelez and colleagues conclude that universal voluntary counselling and testing with immediate ART at 90% coverage should result in the elimination of HIV but would take three times longer than predicted by the model developed by the present author. Hontelez et al. suggest that the current scale-up of ART at CD4 cell counts less than 350 cells/microL will lead to elimination of HIV in 30 years. I disagree with both claims and believe that their more complex models rely on unwarranted and unsubstantiated assumptions.
1812.00105
Ehtibar Dzhafarov
Victor H. Cervantes and Ehtibar N. Dzhafarov
True Contextuality in a Psychophysical Experiment
version 2 is a minor revision
null
null
null
q-bio.NC math.PR quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent crowdsourcing experiments have shown that true contextuality of the kind found in quantum mechanics can also be present in human behavior. In these experiments simple human choices were aggregated over large numbers of respondents, with each respondent dealing with a single context (set of questions asked). In this paper we present experimental evidence of contextuality in individual human behavior, in a psychophysical experiment with repeated presentations of visual stimuli in randomly varying contexts (arrangements of stimuli). The analysis is based on the Contextuality-by-Default (CbD) theory whose relevant aspects are reviewed in the paper. CbD allows one to detect contextuality in the presence of direct influences, i.e., when responses to the same stimuli have different distributions in different contexts. The experiment presented is also the first one in which contextuality is demonstrated for responses that are not dichotomous, with five options to choose among. CbD requires that random variables representing such responses be dichotomized before they are subjected to contextuality analysis. A theorem says that a system consisting of all possible dichotomizations of responses has to be contextual if these responses violate a certain condition, called nominal dominance. In our experiment nominal dominance was violated in all data sets, with very high statistical reliability established by bootstrapping.
[ { "created": "Sat, 1 Dec 2018 00:28:10 GMT", "version": "v1" }, { "created": "Fri, 22 Feb 2019 22:15:31 GMT", "version": "v2" } ]
2019-02-26
[ [ "Cervantes", "Victor H.", "" ], [ "Dzhafarov", "Ehtibar N.", "" ] ]
Recent crowdsourcing experiments have shown that true contextuality of the kind found in quantum mechanics can also be present in human behavior. In these experiments simple human choices were aggregated over large numbers of respondents, with each respondent dealing with a single context (set of questions asked). In this paper we present experimental evidence of contextuality in individual human behavior, in a psychophysical experiment with repeated presentations of visual stimuli in randomly varying contexts (arrangements of stimuli). The analysis is based on the Contextuality-by-Default (CbD) theory whose relevant aspects are reviewed in the paper. CbD allows one to detect contextuality in the presence of direct influences, i.e., when responses to the same stimuli have different distributions in different contexts. The experiment presented is also the first one in which contextuality is demonstrated for responses that are not dichotomous, with five options to choose among. CbD requires that random variables representing such responses be dichotomized before they are subjected to contextuality analysis. A theorem says that a system consisting of all possible dichotomizations of responses has to be contextual if these responses violate a certain condition, called nominal dominance. In our experiment nominal dominance was violated in all data sets, with very high statistical reliability established by bootstrapping.
1206.5904
Dipjyoti Das
Dipjyoti Das and Dibyendu Das and Ashok Prasad
Giant number fluctuations in microbial ecologies
18 pages, 5 figures
Journal Theoretical Biology, Vol. 308, pp. 96-104 (2012)
10.1016/j.jtbi.2012.05.030
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Statistical fluctuations in population sizes of microbes may be quite large depending on the nature of their underlying stochastic dynamics. For example, the variance of the population size of a microbe undergoing a pure birth process with unlimited resources is proportional to the square of its mean. We refer to such large fluctuations, with the variance growing as the square of the mean, as Giant Number Fluctuations (GNF). Luria and Delbruck showed that spontaneous mutation processes in microbial populations exhibit GNF. We explore whether GNF can arise in other microbial ecologies. We study certain simple ecological models evolving via stochastic processes: (i) bi-directional mutation, (ii) lysis-lysogeny of bacteria by bacteriophage, and (iii) horizontal gene transfer (HGT). For the case of the bi-directional mutation process, we show analytically exactly that the GNF relationship holds at large times. For the ecological model of bacteria undergoing lysis or lysogeny under viral infection, we show that if the viral population can be experimentally manipulated to stay quasi-stationary, the process of lysogeny maps essentially to a one-way mutation process and hence the GNF property of the lysogens follows. Finally, we show that even the process of HGT may map to the mutation process at large times, and thereby exhibits GNF.
[ { "created": "Tue, 26 Jun 2012 07:46:12 GMT", "version": "v1" } ]
2012-06-27
[ [ "Das", "Dipjyoti", "" ], [ "Das", "Dibyendu", "" ], [ "Prasad", "Ashok", "" ] ]
Statistical fluctuations in population sizes of microbes may be quite large depending on the nature of their underlying stochastic dynamics. For example, the variance of the population size of a microbe undergoing a pure birth process with unlimited resources is proportional to the square of its mean. We refer to such large fluctuations, with the variance growing as the square of the mean, as Giant Number Fluctuations (GNF). Luria and Delbruck showed that spontaneous mutation processes in microbial populations exhibit GNF. We explore whether GNF can arise in other microbial ecologies. We study certain simple ecological models evolving via stochastic processes: (i) bi-directional mutation, (ii) lysis-lysogeny of bacteria by bacteriophage, and (iii) horizontal gene transfer (HGT). For the case of the bi-directional mutation process, we show analytically exactly that the GNF relationship holds at large times. For the ecological model of bacteria undergoing lysis or lysogeny under viral infection, we show that if the viral population can be experimentally manipulated to stay quasi-stationary, the process of lysogeny maps essentially to a one-way mutation process and hence the GNF property of the lysogens follows. Finally, we show that even the process of HGT may map to the mutation process at large times, and thereby exhibits GNF.
1404.5010
Benedict Paten
Benedict Paten, Adam Novak and David Haussler
Mapping to a Reference Genome Structure
25 pages
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To support comparative genomics, population genetics, and medical genetics, we propose that a reference genome should come with a scheme for mapping each base in any DNA string to a position in that reference genome. We refer to a collection of one or more reference genomes and a scheme for mapping to their positions as a reference structure. Here we describe the desirable properties of reference structures and give examples. To account for natural genetic variation, we consider the more general case in which a reference genome is represented by a graph rather than a set of phased chromosomes; the latter is treated as a special case.
[ { "created": "Sun, 20 Apr 2014 04:48:24 GMT", "version": "v1" } ]
2014-04-22
[ [ "Paten", "Benedict", "" ], [ "Novak", "Adam", "" ], [ "Haussler", "David", "" ] ]
To support comparative genomics, population genetics, and medical genetics, we propose that a reference genome should come with a scheme for mapping each base in any DNA string to a position in that reference genome. We refer to a collection of one or more reference genomes and a scheme for mapping to their positions as a reference structure. Here we describe the desirable properties of reference structures and give examples. To account for natural genetic variation, we consider the more general case in which a reference genome is represented by a graph rather than a set of phased chromosomes; the latter is treated as a special case.
2009.04519
Laura Schaposnik
Vishaal Ram, Laura P. Schaposnik, Nikos Konstantinou, Eliz Volkan, Marietta Papadatou-Pastou, Banu Manav, Domicele Jonauskaite, Christine Mohr
Extrapolating continuous color emotions through deep learning
To appear in Physical Review R. (8 pages, 13 figures)
Physical Review RESEARCH 2, 033350 (2020)
10.1103/PhysRevResearch.2.033350
null
q-bio.QM cs.LG physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By means of an experimental dataset, we use deep learning to implement an RGB extrapolation of emotions associated with color, and conduct a mathematical study of the results obtained through this neural network. In particular, we see that males typically associate a given emotion with darker colors while females associate it with brighter colors. A similar trend was observed with older people and associations with lighter colors. Moreover, through our classification matrix, we identify which colors have weak associations with emotions and which colors are typically confused with other colors.
[ { "created": "Wed, 5 Aug 2020 02:08:29 GMT", "version": "v1" } ]
2022-10-18
[ [ "Ram", "Vishaal", "" ], [ "Schaposnik", "Laura P.", "" ], [ "Konstantinou", "Nikos", "" ], [ "Volkan", "Eliz", "" ], [ "Papadatou-Pastou", "Marietta", "" ], [ "Manav", "Banu", "" ], [ "Jonauskaite", "Domicele", "" ], [ "Mohr", "Christine", "" ] ]
By means of an experimental dataset, we use deep learning to implement an RGB extrapolation of emotions associated with color, and conduct a mathematical study of the results obtained through this neural network. In particular, we see that males typically associate a given emotion with darker colors while females associate it with brighter colors. A similar trend was observed with older people and associations with lighter colors. Moreover, through our classification matrix, we identify which colors have weak associations with emotions and which colors are typically confused with other colors.
1808.08662
Luay Nakhleh
R.A.L. Elworth, H.A. Ogilvie, J. Zhu, L. Nakhleh
Advances in Computational Methods for Phylogenetic Networks in the Presence of Hybridization
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Phylogenetic networks extend phylogenetic trees to allow for modeling reticulate evolutionary processes such as hybridization. They take the shape of a rooted, directed, acyclic graph, and when parameterized with evolutionary parameters, such as divergence times and population sizes, they form a generative process of molecular sequence evolution. Early work on computational methods for phylogenetic network inference focused exclusively on reticulations and sought networks with the fewest number of reticulations to fit the data. As processes such as incomplete lineage sorting (ILS) could be at play concurrently with hybridization, work in the last decade has shifted to computational approaches for phylogenetic network inference in the presence of ILS. In such a short period, significant advances have been made on developing and implementing such computational approaches. In particular, parsimony, likelihood, and Bayesian methods have been devised for estimating phylogenetic networks and associated parameters using estimated gene trees as data. Use of those inference methods has been augmented with statistical tests for specific hypotheses of hybridization, like the D-statistic. Most recently, Bayesian approaches for inferring phylogenetic networks directly from sequence data were developed and implemented. In this chapter, we survey such advances and discuss model assumptions as well as methods' strengths and limitations. We also discuss parallel efforts in the population genetics community aimed at inferring similar structures. Finally, we highlight major directions for future research in this area.
[ { "created": "Mon, 27 Aug 2018 01:46:06 GMT", "version": "v1" } ]
2018-08-28
[ [ "Elworth", "R. A. L.", "" ], [ "Ogilvie", "H. A.", "" ], [ "Zhu", "J.", "" ], [ "Nakhleh", "L.", "" ] ]
Phylogenetic networks extend phylogenetic trees to allow for modeling reticulate evolutionary processes such as hybridization. They take the shape of a rooted, directed, acyclic graph, and when parameterized with evolutionary parameters, such as divergence times and population sizes, they form a generative process of molecular sequence evolution. Early work on computational methods for phylogenetic network inference focused exclusively on reticulations and sought networks with the fewest number of reticulations to fit the data. As processes such as incomplete lineage sorting (ILS) could be at play concurrently with hybridization, work in the last decade has shifted to computational approaches for phylogenetic network inference in the presence of ILS. In such a short period, significant advances have been made on developing and implementing such computational approaches. In particular, parsimony, likelihood, and Bayesian methods have been devised for estimating phylogenetic networks and associated parameters using estimated gene trees as data. Use of those inference methods has been augmented with statistical tests for specific hypotheses of hybridization, like the D-statistic. Most recently, Bayesian approaches for inferring phylogenetic networks directly from sequence data were developed and implemented. In this chapter, we survey such advances and discuss model assumptions as well as methods' strengths and limitations. We also discuss parallel efforts in the population genetics community aimed at inferring similar structures. Finally, we highlight major directions for future research in this area.
1401.2897
Andrei Khrennikov Yu
Masanari Asano, Takahisa Hashimoto, Andrei Khrennikov, Masanori Ohya, Yoshiharu Tanaka
Violation of contextual generalization of the Leggett-Garg inequality for recognition of ambiguous figures
Presented at the conference Quantum Interactions 14, University of Leicester, July 2014; submitted to Physica Scripta, IOP; new version contains discussions on Bell and contextuality, marginal selectivity, Kolmogorovization of contextual data
Phys. Scr. T 163 (2014) 014006
10.1088/0031-8949/2014/T163/014006
null
q-bio.NC math.PR math.ST quant-ph stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We interpret the Leggett-Garg (LG) inequality as a kind of contextual probabilistic inequality in which one combines data collected in experiments performed for three different contexts. In the original version of the inequality these contexts have a temporal nature and they are given by three pairs of instances of time, $(t_1, t_2), (t_2, t_3), (t_3, t_4),$ where $t_1 < t_2 < t_3 < t_4.$ We generalize LG conditions of macroscopic realism and noninvasive measurability in the general contextual framework. Our formulation is done in purely probabilistic terms: existence of the context independent joint probability distribution $P$ and the possibility to reconstruct the experimentally found marginal (two dimensional) probability distributions from $P.$ We derive an analog of the LG inequality, the "contextual LG inequality", and use it as a test of "quantum-likeness" of statistical data collected in a series of experiments on recognition of ambiguous figures. In our experimental study the figure under recognition is the Schroeder stair which is shown with rotations for different angles. Contexts are encoded by dynamics of rotations: clockwise, anticlockwise, and random. Our data demonstrated violation of the contextual LG inequality for some combinations of the aforementioned contexts. Since in quantum theory and experiments with quantum physical systems this inequality is violated, e.g., in the form of the original LG inequality, our result can be interpreted as a sign that quantum(-like) models can provide a more adequate description of the data generated in the process of recognition of ambiguous figures.
[ { "created": "Fri, 10 Jan 2014 09:33:13 GMT", "version": "v1" }, { "created": "Wed, 15 Jan 2014 18:16:15 GMT", "version": "v2" }, { "created": "Fri, 2 May 2014 14:01:00 GMT", "version": "v3" } ]
2015-06-18
[ [ "Asano", "Masanari", "" ], [ "Hashimoto", "Takahisa", "" ], [ "Khrennikov", "Andrei", "" ], [ "Ohya", "Masanori", "" ], [ "Tanaka", "Yoshiharu", "" ] ]
We interpret the Leggett-Garg (LG) inequality as a kind of contextual probabilistic inequality in which one combines data collected in experiments performed for three different contexts. In the original version of the inequality these contexts have a temporal nature and are given by three pairs of instants of time, $(t_1, t_2), (t_2, t_3), (t_3, t_4),$ where $t_1 < t_2 < t_3 < t_4.$ We generalize the LG conditions of macroscopic realism and noninvasive measurability in a general contextual framework. Our formulation is done in purely probabilistic terms: the existence of a context-independent joint probability distribution $P$ and the possibility to reconstruct the experimentally found marginal (two-dimensional) probability distributions from $P.$ We derive an analog of the LG inequality, the "contextual LG inequality", and use it as a test of the "quantum-likeness" of statistical data collected in a series of experiments on the recognition of ambiguous figures. In our experimental study the figure under recognition is the Schroeder stair, which is shown with rotations at different angles. Contexts are encoded by the dynamics of the rotations: clockwise, anticlockwise, and random. Our data demonstrated violation of the contextual LG inequality for some combinations of the aforementioned contexts. Since this inequality is violated in quantum theory and in experiments with quantum physical systems, e.g., in the form of the original LG inequality, our result can be interpreted as a sign that quantum(-like) models can provide a more adequate description of the data generated in the process of recognition of ambiguous figures.
1204.2822
Elena Shchekinova Y
E. Shchekinova, M. G. J. L\"oder, M. Boersma, K. H. Wiltshire
The Effect of Differentiation of Prey Community on Stable Coexistence in a Three-Species Food-Web Model
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Food webs with intraguild predation (IGP) are widespread in natural habitats. Their adaptation and resilience behaviour is central to understanding the restructuring of ecological communities. In spite of the importance of IGP food webs, their behaviour has not been fully explored even for the simplest 3-species systems. One fundamental question is how an increase in the diversity of the lowest trophic level impacts the persistence of higher trophic levels in IGP relationships. We analyze a 3-species food web model with a heterogeneous resource and IGP. The model consists of two predators coupled directly via an IGP relation and indirectly via competition for the resource. The resource is subdivided into distinct subpopulations. Individuals in the subpopulations are grazed at different rates by the predators. We consider two models: an IGP module with immobilization by the top predator and an IGP module with species turnover. We examine the effect of increasing enrichment and varying the immobilization (resource transfer) rate on the stable coexistence of predators and resources. We explore how the predictions from the basic 3-species model are altered when the IGP module is extended to multiple resource subpopulations. We investigate which parameters support a robust coexistence in the IGP system. For the case of multiple subpopulations of the resource we present a numerical comparison of the percentage of food webs with stable coexistence for different dimensionalities of the resource community. At low immobilization (transfer) rates our model predicts a stable 3-species coexistence only for intermediate enrichment, while at high rates a large set of stable equilibrium configurations is found for high enrichment as well.
[ { "created": "Thu, 12 Apr 2012 08:24:21 GMT", "version": "v1" } ]
2012-04-16
[ [ "Shchekinova", "E.", "" ], [ "Löder", "M. G. J.", "" ], [ "Boersma", "M.", "" ], [ "Wiltshire", "K. H.", "" ] ]
Food webs with intraguild predation (IGP) are widespread in natural habitats. Their adaptation and resilience behaviour is central to understanding the restructuring of ecological communities. In spite of the importance of IGP food webs, their behaviour has not been fully explored even for the simplest 3-species systems. One fundamental question is how an increase in the diversity of the lowest trophic level impacts the persistence of higher trophic levels in IGP relationships. We analyze a 3-species food web model with a heterogeneous resource and IGP. The model consists of two predators coupled directly via an IGP relation and indirectly via competition for the resource. The resource is subdivided into distinct subpopulations. Individuals in the subpopulations are grazed at different rates by the predators. We consider two models: an IGP module with immobilization by the top predator and an IGP module with species turnover. We examine the effect of increasing enrichment and varying the immobilization (resource transfer) rate on the stable coexistence of predators and resources. We explore how the predictions from the basic 3-species model are altered when the IGP module is extended to multiple resource subpopulations. We investigate which parameters support a robust coexistence in the IGP system. For the case of multiple subpopulations of the resource we present a numerical comparison of the percentage of food webs with stable coexistence for different dimensionalities of the resource community. At low immobilization (transfer) rates our model predicts a stable 3-species coexistence only for intermediate enrichment, while at high rates a large set of stable equilibrium configurations is found for high enrichment as well.
1011.0322
Mauro Mobilia
Michael Assaf, Mauro Mobilia
Fixation of a Deleterious Allele under Mutation Pressure and Finite Selection Intensity
26 pages, 5 figures. Accepted by the Journal of Theoretical Biology
J. Theor. Biol. 275, 93-103 (2011)
10.1016/j.jtbi.2011.01.025
null
q-bio.PE cond-mat.stat-mech nlin.AO q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The mean fixation time of a deleterious mutant allele is studied beyond the diffusion approximation. As in Kimura's classical work [M. Kimura, Proc. Natl. Acad. Sci. U.S.A. Vol.77, 522 (1980)], which was motivated by the problem of fixation in the presence of amorphic or hypermorphic mutations, we consider a diallelic model at a single locus comprising a wild-type A and a mutant allele A' produced irreversibly from A at a small uniform rate v. The relative fitnesses of the mutant homozygotes A'A', mutant heterozygotes A'A and wild-type homozygotes AA are 1-s, 1-h and 1, respectively, where it is assumed that v << s. Here, we adopt an approach based on a direct treatment of the underlying Markov chain (birth-death process) obeyed by the allele frequency (whose dynamics is prescribed by the Moran model), which allows one to accurately account for the effects of large fluctuations. After a general description of the theory, we focus on the case of a deleterious mutant allele (i.e. s>0) and discuss three situations: when the mutant is (i) completely dominant (s=h); (ii) completely recessive (h=0); and (iii) semi-dominant (h=s/2). Our theoretical predictions for the mean fixation time and the quasi-stationary distribution of the mutant population in the coexistence state are shown to be in excellent agreement with numerical simulations. Furthermore, when s is finite, we demonstrate that our results are superior to those of the diffusion theory, which is shown to be an accurate approximation only when N_e s^2 << 1, where N_e is the effective population size.
[ { "created": "Mon, 1 Nov 2010 14:02:54 GMT", "version": "v1" }, { "created": "Wed, 19 Jan 2011 11:52:53 GMT", "version": "v2" } ]
2011-02-22
[ [ "Assaf", "Michael", "" ], [ "Mobilia", "Mauro", "" ] ]
The mean fixation time of a deleterious mutant allele is studied beyond the diffusion approximation. As in Kimura's classical work [M. Kimura, Proc. Natl. Acad. Sci. U.S.A. Vol.77, 522 (1980)], which was motivated by the problem of fixation in the presence of amorphic or hypermorphic mutations, we consider a diallelic model at a single locus comprising a wild-type A and a mutant allele A' produced irreversibly from A at a small uniform rate v. The relative fitnesses of the mutant homozygotes A'A', mutant heterozygotes A'A and wild-type homozygotes AA are 1-s, 1-h and 1, respectively, where it is assumed that v << s. Here, we adopt an approach based on a direct treatment of the underlying Markov chain (birth-death process) obeyed by the allele frequency (whose dynamics is prescribed by the Moran model), which allows one to accurately account for the effects of large fluctuations. After a general description of the theory, we focus on the case of a deleterious mutant allele (i.e. s>0) and discuss three situations: when the mutant is (i) completely dominant (s=h); (ii) completely recessive (h=0); and (iii) semi-dominant (h=s/2). Our theoretical predictions for the mean fixation time and the quasi-stationary distribution of the mutant population in the coexistence state are shown to be in excellent agreement with numerical simulations. Furthermore, when s is finite, we demonstrate that our results are superior to those of the diffusion theory, which is shown to be an accurate approximation only when N_e s^2 << 1, where N_e is the effective population size.
1408.6694
Erik Volz
Erik M. Volz and Simon DW Frost
Sampling through time and phylodynamic inference with coalescent and birth-death models
Submitted
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many population genetic models have been developed for the purpose of inferring population size and growth rates from random samples of genetic data. We examine two popular approaches to this problem, the coalescent and the birth-death-sampling model, in the context of estimating population size and birth rates in a population growing exponentially according to the birth-death branching process. For sequences sampled at a single time, we found the coalescent and the birth-death-sampling model gave virtually indistinguishable results in terms of the growth rates and fraction of the population sampled, even when sampling from a small population. For sequences sampled at multiple time points, we find that the birth-death model estimators are subject to large bias if the sampling process is misspecified. Since birth-death-sampling models incorporate a model of the sampling process, we show how much of the statistical power of birth-death-sampling models arises from the sequence of sample times and not from the genealogical tree. This motivates the development of a new coalescent estimator, which is augmented with a model of the known sampling process and is potentially more precise than the coalescent that does not use sample time information.
[ { "created": "Thu, 28 Aug 2014 12:13:37 GMT", "version": "v1" } ]
2014-08-29
[ [ "Volz", "Erik M.", "" ], [ "Frost", "Simon DW", "" ] ]
Many population genetic models have been developed for the purpose of inferring population size and growth rates from random samples of genetic data. We examine two popular approaches to this problem, the coalescent and the birth-death-sampling model, in the context of estimating population size and birth rates in a population growing exponentially according to the birth-death branching process. For sequences sampled at a single time, we found the coalescent and the birth-death-sampling model gave virtually indistinguishable results in terms of the growth rates and fraction of the population sampled, even when sampling from a small population. For sequences sampled at multiple time points, we find that the birth-death model estimators are subject to large bias if the sampling process is misspecified. Since birth-death-sampling models incorporate a model of the sampling process, we show how much of the statistical power of birth-death-sampling models arises from the sequence of sample times and not from the genealogical tree. This motivates the development of a new coalescent estimator, which is augmented with a model of the known sampling process and is potentially more precise than the coalescent that does not use sample time information.
2006.04684
Ali Salari
Ali Salari, Gregory Kiar, Lindsay Lewis, Alan C. Evans, Tristan Glatard
File-based localization of numerical perturbations in data analysis pipelines
10 pages, 6 figures, 2 tables
null
null
null
q-bio.QM eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data analysis pipelines are known to be impacted by computational conditions, presumably due to the creation and propagation of numerical errors. While this process could play a major role in the current reproducibility crisis, the precise causes of such instabilities and the path along which they propagate in pipelines are unclear. We present Spot, a tool to identify which processes in a pipeline create numerical differences when executed in different computational conditions. Spot leverages system-call interception through ReproZip to reconstruct and compare provenance graphs without pipeline instrumentation. By applying Spot to the structural pre-processing pipelines of the Human Connectome Project, we found that linear and non-linear registration are the cause of most numerical instabilities in these pipelines, which confirms previous findings.
[ { "created": "Wed, 3 Jun 2020 19:11:40 GMT", "version": "v1" }, { "created": "Tue, 29 Sep 2020 01:00:09 GMT", "version": "v2" } ]
2020-09-30
[ [ "Salari", "Ali", "" ], [ "Kiar", "Gregory", "" ], [ "Lewis", "Lindsay", "" ], [ "Evans", "Alan C.", "" ], [ "Glatard", "Tristan", "" ] ]
Data analysis pipelines are known to be impacted by computational conditions, presumably due to the creation and propagation of numerical errors. While this process could play a major role in the current reproducibility crisis, the precise causes of such instabilities and the path along which they propagate in pipelines are unclear. We present Spot, a tool to identify which processes in a pipeline create numerical differences when executed in different computational conditions. Spot leverages system-call interception through ReproZip to reconstruct and compare provenance graphs without pipeline instrumentation. By applying Spot to the structural pre-processing pipelines of the Human Connectome Project, we found that linear and non-linear registration are the cause of most numerical instabilities in these pipelines, which confirms previous findings.
q-bio/0505054
Sagi Snir
Benny Chor, Michael D. Hendy and Sagi Snir
Maximum Likelihood Jukes-Cantor Triplets: Analytic Solutions
null
null
null
null
q-bio.PE
null
Complex systems of polynomial equations have to be set up and solved algebraically in order to obtain analytic solutions for maximum likelihood on phylogenetic trees. This has restricted the types of systems previously resolved to the simplest models - three and four taxa under a molecular clock, with just two state characters. In this work we give, for the first time, analytic solutions for a family of trees with four state characters, like normal DNA or RNA. The model of substitution we use is the Jukes-Cantor model, and the trees are on three taxa under a molecular clock, namely rooted triplets. We employ a number of approaches and tools to solve this system: spectral methods (Hadamard conjugation), a new representation of variables (the path-set spectrum), and algebraic geometry tools (the resultant of two polynomials). All these, combined with heavy use of computer algebra packages (Maple), let us derive the desired solution.
[ { "created": "Fri, 27 May 2005 18:09:56 GMT", "version": "v1" } ]
2007-05-23
[ [ "Chor", "Benny", "" ], [ "Hendy", "Michael D.", "" ], [ "Snir", "Sagi", "" ] ]
Complex systems of polynomial equations have to be set up and solved algebraically in order to obtain analytic solutions for maximum likelihood on phylogenetic trees. This has restricted the types of systems previously resolved to the simplest models - three and four taxa under a molecular clock, with just two state characters. In this work we give, for the first time, analytic solutions for a family of trees with four state characters, like normal DNA or RNA. The model of substitution we use is the Jukes-Cantor model, and the trees are on three taxa under a molecular clock, namely rooted triplets. We employ a number of approaches and tools to solve this system: spectral methods (Hadamard conjugation), a new representation of variables (the path-set spectrum), and algebraic geometry tools (the resultant of two polynomials). All these, combined with heavy use of computer algebra packages (Maple), let us derive the desired solution.
1604.08921
Artur Fassoni
Artur C. Fassoni and Hyun M. Yang
An ecological resilience perspective on cancer: insights from a toy model
null
null
null
null
q-bio.PE math.DS q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose an ecological resilience point of view on cancer. This view is based on the analysis of a simple ODE model for the interactions between cancer and normal cells. The model presents two regimes for tumor growth. In the first, cancer arises for three reasons: a partial corruption of the functions that prevent the growth of mutated cells, an aggressive phenotype of tumor cells, and exposure to external carcinogenic factors. In this case, treatments may be effective if they drive the system to the basin of attraction of the cancer cure state. In the second regime, cancer arises because the repair system is intrinsically corrupted. In this case, a complete cure is not possible since the cancer cure state is no longer stable, but tumor recurrence may be delayed if treatment is prolonged. We review three indicators of the resilience of a stable equilibrium, related to the size and shape of its basin of attraction: latitude, precariousness and resistance. A novel method to calculate these indicators is proposed. This method is simpler and more efficient than those currently used, and may be easily applied to other population dynamics models. We apply this method to the model and investigate how these indicators behave under parameter changes. Finally, we present some simulations to illustrate how the resilience analysis can be applied to validated models in order to obtain indicators for personalized cancer treatments. Keywords: Tumor growth; Chemotherapy; Basins of Attraction; Regime shifts; Critical transitions
[ { "created": "Fri, 29 Apr 2016 17:43:41 GMT", "version": "v1" }, { "created": "Wed, 31 Aug 2016 19:37:47 GMT", "version": "v2" } ]
2016-09-01
[ [ "Fassoni", "Artur C.", "" ], [ "Yang", "Hyun M.", "" ] ]
In this paper we propose an ecological resilience point of view on cancer. This view is based on the analysis of a simple ODE model for the interactions between cancer and normal cells. The model presents two regimes for tumor growth. In the first, cancer arises for three reasons: a partial corruption of the functions that prevent the growth of mutated cells, an aggressive phenotype of tumor cells, and exposure to external carcinogenic factors. In this case, treatments may be effective if they drive the system to the basin of attraction of the cancer cure state. In the second regime, cancer arises because the repair system is intrinsically corrupted. In this case, a complete cure is not possible since the cancer cure state is no longer stable, but tumor recurrence may be delayed if treatment is prolonged. We review three indicators of the resilience of a stable equilibrium, related to the size and shape of its basin of attraction: latitude, precariousness and resistance. A novel method to calculate these indicators is proposed. This method is simpler and more efficient than those currently used, and may be easily applied to other population dynamics models. We apply this method to the model and investigate how these indicators behave under parameter changes. Finally, we present some simulations to illustrate how the resilience analysis can be applied to validated models in order to obtain indicators for personalized cancer treatments. Keywords: Tumor growth; Chemotherapy; Basins of Attraction; Regime shifts; Critical transitions
q-bio/0501025
Fabio De Blasio
Birgitte Freiesleben De Blasio, Fabio Vittorio De Blasio
Dynamics of competing species in a model of adaptive radiations and macroevolution
null
null
null
null
q-bio.PE
null
We present a simple model of adaptive radiations in evolution based on species competition. Competition is found to promote species divergence and branching, and to dampen the net species production. In the model simulations, high taxonomic diversification and branching take place during the beginning of the radiation. The results show striking similarities with empirical data and highlight the mechanism of competition as an important driving factor for accelerated evolutionary transformation.
[ { "created": "Tue, 18 Jan 2005 19:15:19 GMT", "version": "v1" } ]
2007-05-23
[ [ "De Blasio", "Birgitte Freiesleben", "" ], [ "De Blasio", "Fabio Vittorio", "" ] ]
We present a simple model of adaptive radiations in evolution based on species competition. Competition is found to promote species divergence and branching, and to dampen the net species production. In the model simulations, high taxonomic diversification and branching take place during the beginning of the radiation. The results show striking similarities with empirical data and highlight the mechanism of competition as an important driving factor for accelerated evolutionary transformation.
2206.09398
Alexis Thual
Alexis Thual, Huy Tran, Tatiana Zemskova, Nicolas Courty, R\'emi Flamary, Stanislas Dehaene, Bertrand Thirion
Aligning individual brains with Fused Unbalanced Gromov-Wasserstein
null
Advances in Neural Information Processing Systems, 35 (2022) 21792-21804
null
null
q-bio.NC stat.ML
http://creativecommons.org/licenses/by/4.0/
Individual brains vary in both anatomy and functional organization, even within a given species. Inter-individual variability is a major impediment when trying to draw generalizable conclusions from neuroimaging data collected on groups of subjects. Current co-registration procedures rely on limited data, and thus lead to very coarse inter-subject alignments. In this work, we present a novel method for inter-subject alignment based on Optimal Transport, denoted as Fused Unbalanced Gromov Wasserstein (FUGW). The method aligns cortical surfaces based on the similarity of their functional signatures in response to a variety of stimulation settings, while penalizing large deformations of individual topographic organization. We demonstrate that FUGW is well-suited for whole-brain landmark-free alignment. The unbalanced feature allows us to deal with the fact that functional areas vary in size across subjects. Our results show that FUGW alignment significantly increases between-subject correlation of activity for independent functional data, and leads to more precise mapping at the group level.
[ { "created": "Sun, 19 Jun 2022 13:06:11 GMT", "version": "v1" }, { "created": "Tue, 22 Nov 2022 16:38:23 GMT", "version": "v2" }, { "created": "Tue, 22 Aug 2023 23:02:57 GMT", "version": "v3" } ]
2023-09-28
[ [ "Thual", "Alexis", "" ], [ "Tran", "Huy", "" ], [ "Zemskova", "Tatiana", "" ], [ "Courty", "Nicolas", "" ], [ "Flamary", "Rémi", "" ], [ "Dehaene", "Stanislas", "" ], [ "Thirion", "Bertrand", "" ] ]
Individual brains vary in both anatomy and functional organization, even within a given species. Inter-individual variability is a major impediment when trying to draw generalizable conclusions from neuroimaging data collected on groups of subjects. Current co-registration procedures rely on limited data, and thus lead to very coarse inter-subject alignments. In this work, we present a novel method for inter-subject alignment based on Optimal Transport, denoted as Fused Unbalanced Gromov Wasserstein (FUGW). The method aligns cortical surfaces based on the similarity of their functional signatures in response to a variety of stimulation settings, while penalizing large deformations of individual topographic organization. We demonstrate that FUGW is well-suited for whole-brain landmark-free alignment. The unbalanced feature allows us to deal with the fact that functional areas vary in size across subjects. Our results show that FUGW alignment significantly increases between-subject correlation of activity for independent functional data, and leads to more precise mapping at the group level.
1509.02450
Olivier Rivoire
S\'ebastien Boyer, Dipanwita Biswas, Ananda Kumar Soshee, Natale Scaramozzino, Cl\'ement Nizak, Olivier Rivoire
Hierarchy and extremes in selections from pools of randomized proteins
null
null
10.1073/pnas.1517813113
null
q-bio.PE q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Variation and selection are the core principles of Darwinian evolution, yet quantitatively relating the diversity of a population to its capacity to respond to selection is challenging. Here, we examine this problem at a molecular level in the context of populations of partially randomized proteins selected for binding to well-defined targets. We built several minimal protein libraries, screened them in vitro by phage display and analyzed their response to selection by high-throughput sequencing. A statistical analysis of the results reveals two main findings: first, libraries with the same sequence diversity but built around different "frameworks" typically have vastly different responses; second, the distribution of responses within a library follows a simple scaling law. We show how an elementary probabilistic model based on extreme value theory rationalizes these findings. Our results have implications for designing synthetic protein libraries, for estimating the density of functional biomolecules in sequence space, for characterizing diversity in natural populations, and for experimentally investigating the concept of evolvability, or potential for future evolution.
[ { "created": "Tue, 8 Sep 2015 17:14:46 GMT", "version": "v1" } ]
2016-04-27
[ [ "Boyer", "Sébastien", "" ], [ "Biswas", "Dipanwita", "" ], [ "Soshee", "Ananda Kumar", "" ], [ "Scaramozzino", "Natale", "" ], [ "Nizak", "Clément", "" ], [ "Rivoire", "Olivier", "" ] ]
Variation and selection are the core principles of Darwinian evolution, yet quantitatively relating the diversity of a population to its capacity to respond to selection is challenging. Here, we examine this problem at a molecular level in the context of populations of partially randomized proteins selected for binding to well-defined targets. We built several minimal protein libraries, screened them in vitro by phage display and analyzed their response to selection by high-throughput sequencing. A statistical analysis of the results reveals two main findings: first, libraries with the same sequence diversity but built around different "frameworks" typically have vastly different responses; second, the distribution of responses within a library follows a simple scaling law. We show how an elementary probabilistic model based on extreme value theory rationalizes these findings. Our results have implications for designing synthetic protein libraries, for estimating the density of functional biomolecules in sequence space, for characterizing diversity in natural populations, and for experimentally investigating the concept of evolvability, or potential for future evolution.
2008.04172
Sara Hamis
Sara J Hamis, Fiona R Macfarlane
A single-cell mathematical model of SARS-CoV-2 induced pyroptosis and the effects of anti-inflammatory intervention
35 pages including the appendix, 9 figures in the main manuscript, Supporting Information (PDF) included
null
null
null
q-bio.SC
http://creativecommons.org/licenses/by/4.0/
Pyroptosis is an inflammatory mode of cell death that can contribute to the cytokine storm associated with severe cases of coronavirus disease 2019 (COVID-19). The formation of the NLRP3 inflammasome is central to pyroptosis, which may be induced by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Inflammasome formation, and by extension pyroptosis, may be inhibited by certain anti-inflammatory drugs. In this study, we present a single-cell mathematical model that captures the formation of the NLRP3 inflammasome, pyroptotic cell death, and responses to anti-inflammatory interventions that hinder the formation of the NLRP3 inflammasome. The model is formulated in terms of a system of ordinary differential equations (ODEs) that describe the dynamics of the biological components involved in pyroptosis. Our results demonstrate that an anti-inflammatory drug can delay the formation of the NLRP3 inflammasome, and thus may alter the mode of cell death from inflammatory (pyroptosis) to non-inflammatory (e.g., apoptosis). The single-cell model is being implemented in a SARS-CoV-2 Tissue Simulator, in collaboration with a multidisciplinary coalition investigating the within-host dynamics of COVID-19. In this paper, we provide an overview of the SARS-CoV-2 Tissue Simulator and highlight the effects of pyroptosis at the cellular level.
[ { "created": "Mon, 10 Aug 2020 14:51:34 GMT", "version": "v1" }, { "created": "Wed, 2 Dec 2020 22:13:25 GMT", "version": "v2" }, { "created": "Tue, 23 Mar 2021 15:37:33 GMT", "version": "v3" }, { "created": "Mon, 29 Mar 2021 14:26:51 GMT", "version": "v4" } ]
2021-03-30
[ [ "Hamis", "Sara J", "" ], [ "Macfarlane", "Fiona R", "" ] ]
Pyroptosis is an inflammatory mode of cell death that can contribute to the cytokine storm associated with severe cases of coronavirus disease 2019 (COVID-19). The formation of the NLRP3 inflammasome is central to pyroptosis, which may be induced by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Inflammasome formation, and by extension pyroptosis, may be inhibited by certain anti-inflammatory drugs. In this study, we present a single-cell mathematical model that captures the formation of the NLRP3 inflammasome, pyroptotic cell death, and responses to anti-inflammatory interventions that hinder the formation of the NLRP3 inflammasome. The model is formulated in terms of a system of ordinary differential equations (ODEs) that describe the dynamics of the biological components involved in pyroptosis. Our results demonstrate that an anti-inflammatory drug can delay the formation of the NLRP3 inflammasome, and thus may alter the mode of cell death from inflammatory (pyroptosis) to non-inflammatory (e.g., apoptosis). The single-cell model is being implemented in a SARS-CoV-2 Tissue Simulator, in collaboration with a multidisciplinary coalition investigating the within-host dynamics of COVID-19. In this paper, we provide an overview of the SARS-CoV-2 Tissue Simulator and highlight the effects of pyroptosis at the cellular level.
1305.5369
Daniel Remondini
G. Menichetti, G. Bianconi, E. Giampieri, G. Castellani, D. Remondini
Network Entropy measures applied to different systemic perturbations of cell basal state
NOTE: includes supplementary material
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We characterize different cell states, related to cancer and ageing phenotypes, by a measure of entropy of network ensembles, integrating gene expression values and protein interaction networks. The entropy measure estimates the parameter space available to the network ensemble, which can be interpreted as the level of plasticity of the system for high entropy values (the ability to change its internal parameters, e.g. in response to environmental stimuli), or as a fine tuning of the parameters (that restricts the range of possible parameter values) in the opposite case. This approach can be applied at different scales, from the whole cell to single biological functions, by defining appropriate subnetworks based on a priori biological knowledge, thus allowing a deeper understanding of the cell processes involved. In our analysis we used specific network features (degree sequence, subnetwork structure and distance between gene profiles) to obtain information at different biological scales, providing a novel point of view for the integration of experimental transcriptomic data and a priori biological knowledge; the entropy measure can also highlight other aspects of the biological systems studied, depending on the constraints introduced in the model (e.g. community structures).
[ { "created": "Thu, 23 May 2013 10:29:30 GMT", "version": "v1" } ]
2013-05-24
[ [ "Menichetti", "G.", "" ], [ "Bianconi", "G.", "" ], [ "Giampieri", "E.", "" ], [ "Castellani", "G.", "" ], [ "Remondini", "D.", "" ] ]
We characterize different cell states, related to cancer and ageing phenotypes, by a measure of entropy of network ensembles, integrating gene expression values and protein interaction networks. The entropy measure estimates the parameter space available to the network ensemble, that can be interpreted as the level of plasticity of the system for high entropy values (the ability to change its internal parameters, e.g. in response to environmental stimuli), or as a fine tuning of the parameters (that restricts the range of possible parameter values) in the opposite case. This approach can be applied at different scales, from whole cell to single biological functions, by defining appropriate subnetworks based on a priori biological knowledge, thus allowing a deeper understanding of the cell processes involved. In our analysis we used specific network features (degree sequence, subnetwork structure and distance between gene profiles) to obtain information at different biological scales, providing a novel point of view for the integration of experimental transcriptomic data and a priori biological knowledge, but the entropy measure can also highlight other aspects of the biological systems studied depending on the constraints introduced in the model (e.g. community structures).
1109.1108
Marco Chierici
Marco Chierici and Giuseppe Jurman and Marco Roncador and Cesare Furlanello
Single-base mismatch profiles for NGS samples
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Within the preprocessing pipeline of a Next Generation Sequencing sample, its set of Single-Base Mismatches is one of the first outcomes, together with the number of correctly aligned reads. The union of these two sets provides a 4x4 matrix (called Single Base Indicator, SBI in what follows) representing a blueprint of the sample and its preprocessing ingredients such as the sequencer, the alignment software, and the pipeline parameters. In this note we show that, under the same technological conditions, there is a strong relation between the SBI and the biological nature of the sample. To reach this goal we need to introduce a similarity measure between SBIs: we also show how two measures commonly used in machine learning can be of help in this context.
[ { "created": "Tue, 6 Sep 2011 08:25:14 GMT", "version": "v1" } ]
2011-09-07
[ [ "Chierici", "Marco", "" ], [ "Jurman", "Giuseppe", "" ], [ "Roncador", "Marco", "" ], [ "Furlanello", "Cesare", "" ] ]
Within the preprocessing pipeline of a Next Generation Sequencing sample, its set of Single-Base Mismatches is one of the first outcomes, together with the number of correctly aligned reads. The union of these two sets provides a 4x4 matrix (called Single Base Indicator, SBI in what follows) representing a blueprint of the sample and its preprocessing ingredients such as the sequencer, the alignment software, and the pipeline parameters. In this note we show that, under the same technological conditions, there is a strong relation between the SBI and the biological nature of the sample. To reach this goal we need to introduce a similarity measure between SBIs: we also show how two measures commonly used in machine learning can be of help in this context.
1611.06834
Laurent Perrinet
Cesar Ravello (CINV), Maria-Jose Escobar, Adrian Palacios (CINV), Laurent Perrinet (INT)
Differential response of the retinal neural code with respect to the sparseness of natural images
arXiv admin note: substantial text overlap with arXiv:1702.02485
null
10.5281/zenodo.5823016
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Natural images follow statistics inherited by the structure of our physical (visual) environment. In particular, a prominent facet of this structure is that images can be described by a relatively sparse number of features. To investigate the role of this sparseness in the efficiency of the neural code, we designed a new class of random textured stimuli with a controlled sparseness value inspired by measurements of natural images. Then, we tested the impact of this sparseness parameter on the firing pattern observed in a population of retinal ganglion cells recorded ex vivo in the retina of a rodent, the Octodon degus. These recordings showed in particular that the reliability of spike timings varies with respect to the sparseness, with a global trend similar to that of the distribution of sparseness statistics observed in natural images. These results suggest that the code represented in the spike pattern of ganglion cells may adapt to this aspect of the statistics of natural images.
[ { "created": "Mon, 21 Nov 2016 15:28:16 GMT", "version": "v1" }, { "created": "Wed, 5 Jan 2022 19:53:44 GMT", "version": "v2" } ]
2022-01-07
[ [ "Ravello", "Cesar", "", "CINV" ], [ "Escobar", "Maria-Jose", "", "CINV" ], [ "Palacios", "Adrian", "", "CINV" ], [ "Perrinet", "Laurent", "", "INT" ] ]
Natural images follow statistics inherited by the structure of our physical (visual) environment. In particular, a prominent facet of this structure is that images can be described by a relatively sparse number of features. To investigate the role of this sparseness in the efficiency of the neural code, we designed a new class of random textured stimuli with a controlled sparseness value inspired by measurements of natural images. Then, we tested the impact of this sparseness parameter on the firing pattern observed in a population of retinal ganglion cells recorded ex vivo in the retina of a rodent, the Octodon degus. These recordings showed in particular that the reliability of spike timings varies with respect to the sparseness, with a global trend similar to that of the distribution of sparseness statistics observed in natural images. These results suggest that the code represented in the spike pattern of ganglion cells may adapt to this aspect of the statistics of natural images.
q-bio/0401041
Thorsten Poeschel
Thorsten Poeschel, Cornelius Froemmel, Christoph Gille
Online tool for the discrimination of equi-distributions
12 pages, 8 figures
BMC Bioinformatics, Vol. 4, 580 (2003)
null
null
q-bio.GN cond-mat.stat-mech
null
For many applications one wishes to decide whether a certain set of numbers originates from an equiprobability distribution or whether they are unequally distributed. Distributions of relative frequencies may deviate significantly from the corresponding probability distributions due to finite sample effects. Hence, it is not trivial to discriminate between an equiprobability distribution and non-equally distributed probabilities when knowing only frequencies. Based on analytical results we provide a software tool which allows one to decide whether data correspond to an equiprobability distribution. The tool is available at http://bioinf.charite.de/equifreq/. Its application is demonstrated for the distribution of point mutations in coding genes.
[ { "created": "Wed, 28 Jan 2004 14:06:57 GMT", "version": "v1" } ]
2007-05-23
[ [ "Poeschel", "Thorsten", "" ], [ "Froemmel", "Cornelius", "" ], [ "Gille", "Christoph", "" ] ]
For many applications one wishes to decide whether a certain set of numbers originates from an equiprobability distribution or whether they are unequally distributed. Distributions of relative frequencies may deviate significantly from the corresponding probability distributions due to finite sample effects. Hence, it is not trivial to discriminate between an equiprobability distribution and non-equally distributed probabilities when knowing only frequencies. Based on analytical results we provide a software tool which allows one to decide whether data correspond to an equiprobability distribution. The tool is available at http://bioinf.charite.de/equifreq/. Its application is demonstrated for the distribution of point mutations in coding genes.
2004.03806
Dweipayan Goswami Dr.
Priyashi Rao, Arpit Shukla, Paritosh Parmar, Dweipayan Goswami
Proposing a fungal metabolite-Flaviolin as a potential inhibitor of 3CLpro of novel coronavirus SARS-CoV2 using docking and molecular dynamics
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Here, after performing docking and molecular dynamics of various small molecules derived as secondary metabolites from fungi, we propose Flaviolin to act as a potent inhibitor of the 3-chymotrypsin-like (3C-like) protease (3CLpro) of the novel coronavirus SARS-CoV-2, responsible for the pandemic caused by coronavirus disease 2019 (COVID-19).
[ { "created": "Wed, 8 Apr 2020 04:37:03 GMT", "version": "v1" } ]
2020-04-09
[ [ "Rao", "Priyashi", "" ], [ "Shukla", "Arpit", "" ], [ "Parmar", "Paritosh", "" ], [ "Goswami", "Dweipayan", "" ] ]
Here, after performing docking and molecular dynamics of various small molecules derived as secondary metabolites from fungi, we propose Flaviolin to act as a potent inhibitor of the 3-chymotrypsin-like (3C-like) protease (3CLpro) of the novel coronavirus SARS-CoV-2, responsible for the pandemic caused by coronavirus disease 2019 (COVID-19).
0908.0339
Robert Hilborn
Robert C. Hilborn and Jessie D. Erwin
Stochastic Coherence in an Oscillatory Gene Circuit Model
null
J. Theor. Biology 253 (2008) 349
10.1016/j.jtbi.2008.03.012
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that noise-induced oscillations in a gene circuit model display stochastic coherence, that is, a maximum in the regularity of the oscillations as a function of noise amplitude. The effect is manifest as a system-size effect in a purely stochastic molecular reaction description of the circuit dynamics. We compare the molecular reaction model behavior with that predicted by a rate equation version of the same system. In addition, we show that commonly used reduced models that ignore fast operator reactions do not capture the full stochastic behavior of the gene circuit. Stochastic coherence occurs under conditions that may be physiologically relevant.
[ { "created": "Mon, 3 Aug 2009 20:18:43 GMT", "version": "v1" } ]
2009-08-05
[ [ "Hilborn", "Robert C.", "" ], [ "Erwin", "Jessie D.", "" ] ]
We show that noise-induced oscillations in a gene circuit model display stochastic coherence, that is, a maximum in the regularity of the oscillations as a function of noise amplitude. The effect is manifest as a system-size effect in a purely stochastic molecular reaction description of the circuit dynamics. We compare the molecular reaction model behavior with that predicted by a rate equation version of the same system. In addition, we show that commonly used reduced models that ignore fast operator reactions do not capture the full stochastic behavior of the gene circuit. Stochastic coherence occurs under conditions that may be physiologically relevant.
2309.02343
Stefan Schuster
Stefan Schuster and Tatjana Malycheva
Enumeration of saturated and unsaturated substituted N-heterocycles
11 pages, 4 figures
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Mathematical and computational approaches in chemistry and biochemistry fill a gap with respect to the analysis of the physicochemical features of compounds and their functionality and provide an overview of known as well as yet unknown, but hypothetically possible structures. Nitrogen-containing heterocycles such as aziridine, azetidine and pyrrolidine bear a high potential in pharmacology, biotechnology and synthetic biology. Here, we present a mathematical enumeration procedure for all possible azaheterocycles with at least one substituent depending on the number of atoms in the ring, in the sense of saturated and unsaturated congeners. One subgroup belonging to that substance class is constituted by ring-shaped amino acids with a secondary amino group such as proline. A recursion formula is derived, which results in a modified Lucas number series. Moreover, an explicit formula for determining the number of such substances based on the Golden Ratio is given and a second one, based on binomial coefficients, is newly derived. This enumeration is a helpful tool for the construction or complementation of virtual compound databases and for computer-assisted chemical synthesis route planning.
[ { "created": "Tue, 5 Sep 2023 15:59:41 GMT", "version": "v1" } ]
2023-09-06
[ [ "Schuster", "Stefan", "" ], [ "Malycheva", "Tatjana", "" ] ]
Mathematical and computational approaches in chemistry and biochemistry fill a gap with respect to the analysis of the physicochemical features of compounds and their functionality and provide an overview of known as well as yet unknown, but hypothetically possible structures. Nitrogen-containing heterocycles such as aziridine, azetidine and pyrrolidine bear a high potential in pharmacology, biotechnology and synthetic biology. Here, we present a mathematical enumeration procedure for all possible azaheterocycles with at least one substituent depending on the number of atoms in the ring, in the sense of saturated and unsaturated congeners. One subgroup belonging to that substance class is constituted by ring-shaped amino acids with a secondary amino group such as proline. A recursion formula is derived, which results in a modified Lucas number series. Moreover, an explicit formula for determining the number of such substances based on the Golden Ratio is given and a second one, based on binomial coefficients, is newly derived. This enumeration is a helpful tool for the construction or complementation of virtual compound databases and for computer-assisted chemical synthesis route planning.
1706.03839
Aya Kabbara
Aya Kabbara, Mahmoud Hassan, Mohamad Khalil, Wassim El Falou, Hassan Eid
A scalp-EEG network-based analysis of Alzheimer's disease patients at rest
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most brain disorders, including Alzheimer's disease (AD), are related to alterations in normal brain network organization and function. Exploring these network alterations using a non-invasive and easy-to-use technique is a topic of great interest. In this paper, we collected EEG resting-state data from AD patients and healthy control subjects. Functional connectivity between scalp EEG signals was quantified using the phase locking value (PLV) for 6 frequency bands. To assess the differences in network properties, graph-theoretical analysis was performed. AD patients showed a decrease in mean connectivity, average clustering and global efficiency in the lower alpha band. A positive correlation between the cognitive score and the extracted graph measures was obtained, suggesting that EEG could be a promising technique to derive new biomarkers of AD diagnosis.
[ { "created": "Mon, 12 Jun 2017 20:27:48 GMT", "version": "v1" } ]
2017-06-14
[ [ "Kabbara", "Aya", "" ], [ "Hassan", "Mahmoud", "" ], [ "Khalil", "Mohamad", "" ], [ "Falou", "Wassim El", "" ], [ "Eid", "Hassan", "" ] ]
Most brain disorders, including Alzheimer's disease (AD), are related to alterations in normal brain network organization and function. Exploring these network alterations using a non-invasive and easy-to-use technique is a topic of great interest. In this paper, we collected EEG resting-state data from AD patients and healthy control subjects. Functional connectivity between scalp EEG signals was quantified using the phase locking value (PLV) for 6 frequency bands. To assess the differences in network properties, graph-theoretical analysis was performed. AD patients showed a decrease in mean connectivity, average clustering and global efficiency in the lower alpha band. A positive correlation between the cognitive score and the extracted graph measures was obtained, suggesting that EEG could be a promising technique to derive new biomarkers of AD diagnosis.
2306.11965
Nima Dehghani
Nima Dehghani
Symmetry's Edge in Cortical Dynamics: Multiscale Dynamics of Ensemble Excitation and Inhibition
null
null
null
null
q-bio.NC cond-mat.dis-nn nlin.AO physics.bio-ph
http://creativecommons.org/licenses/by-sa/4.0/
Creating a quantitative theory for the cortex poses several challenges and raises numerous questions. For instance, what are the significant scales of the system? Are they micro, meso or macroscopic? What are the relevant interactions? Are they pairwise, higher order or mean-field? And what are the control parameters? Are they noisy, dissipative or emergent? To tackle these issues, we suggest using an approach akin to what has transformed our understanding of the state of matter. This includes identifying invariances in the ensemble dynamics of various neuron functional classes, searching for order parameters that connect important degrees of freedom and distinguish macroscopic system states, and identifying broken symmetries in the order parameter space to comprehend the emerging laws when many neurons interact and coordinate their activation. By utilizing multielectrode and multiscale neural recordings, we measure the scale-invariant balance between excitatory and inhibitory neurons at a population level, referred to as ensemble E/I balance. This differs from the input E/I balance typically studied at the single-neuron level, focusing instead on the collective behavior of large neural populations. We investigate a set of parameters that can assist us in differentiating between various functional system states (such as the wake/sleep cycle) and pinpointing broken symmetries that serve different information processing and memory functions. Furthermore, we identify broken symmetries that result in pathological states like seizures. This study provides new insights into the multiscale dynamics of excitation and inhibition in cortical networks, advancing our understanding of the underlying principles governing neural computation and dysfunction.
[ { "created": "Wed, 21 Jun 2023 01:38:42 GMT", "version": "v1" }, { "created": "Thu, 4 Jan 2024 23:32:12 GMT", "version": "v2" }, { "created": "Thu, 4 Jul 2024 17:33:21 GMT", "version": "v3" } ]
2024-07-08
[ [ "Dehghani", "Nima", "" ] ]
Creating a quantitative theory for the cortex poses several challenges and raises numerous questions. For instance, what are the significant scales of the system? Are they micro, meso or macroscopic? What are the relevant interactions? Are they pairwise, higher order or mean-field? And what are the control parameters? Are they noisy, dissipative or emergent? To tackle these issues, we suggest using an approach akin to what has transformed our understanding of the state of matter. This includes identifying invariances in the ensemble dynamics of various neuron functional classes, searching for order parameters that connect important degrees of freedom and distinguish macroscopic system states, and identifying broken symmetries in the order parameter space to comprehend the emerging laws when many neurons interact and coordinate their activation. By utilizing multielectrode and multiscale neural recordings, we measure the scale-invariant balance between excitatory and inhibitory neurons at a population level, referred to as ensemble E/I balance. This differs from the input E/I balance typically studied at the single-neuron level, focusing instead on the collective behavior of large neural populations. We investigate a set of parameters that can assist us in differentiating between various functional system states (such as the wake/sleep cycle) and pinpointing broken symmetries that serve different information processing and memory functions. Furthermore, we identify broken symmetries that result in pathological states like seizures. This study provides new insights into the multiscale dynamics of excitation and inhibition in cortical networks, advancing our understanding of the underlying principles governing neural computation and dysfunction.
q-bio/0604004
Giuseppe Gaeta
G. Gaeta
Solitons in Yakushevich-like models of DNA dynamics with improved intrapair potential
null
J. Nonlin. Math. Phys. 14 (2007), 57-81
10.2991/jnmp.2007.14.1.6
null
q-bio.BM
null
The Yakushevich (Y) model provides a very simple picture of DNA torsion dynamics, yet yields remarkably correct predictions on certain physical characteristics of the dynamics. In the standard Y model, the interaction between bases of a pair is modelled by a harmonic potential, which becomes anharmonic when described in terms of the rotation angles; here we replace it with different types of improved potentials, providing a more physical description of the H-bond mediated interactions between the bases. We focus in particular on soliton solutions; the Y model predicts the correct size of the nonlinear excitations supposed to model the ``transcription bubbles'', and this is essentially unchanged with the improved potential. Other features of soliton dynamics, in particular the curvature of soliton field configurations and the Peierls-Nabarro barrier, are instead significantly changed.
[ { "created": "Tue, 4 Apr 2006 13:33:08 GMT", "version": "v1" } ]
2015-06-26
[ [ "Gaeta", "G.", "" ] ]
The Yakushevich (Y) model provides a very simple picture of DNA torsion dynamics, yet yields remarkably correct predictions on certain physical characteristics of the dynamics. In the standard Y model, the interaction between bases of a pair is modelled by a harmonic potential, which becomes anharmonic when described in terms of the rotation angles; here we replace it with different types of improved potentials, providing a more physical description of the H-bond mediated interactions between the bases. We focus in particular on soliton solutions; the Y model predicts the correct size of the nonlinear excitations supposed to model the ``transcription bubbles'', and this is essentially unchanged with the improved potential. Other features of soliton dynamics, in particular the curvature of soliton field configurations and the Peierls-Nabarro barrier, are instead significantly changed.
2210.13564
Alfonso Nieto-Castanon
Alfonso Nieto-Castanon
Preparing fMRI Data for Statistical Analysis
null
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This chapter describes several procedures used to prepare fMRI data for statistical analyses. It includes the description of common preprocessing steps, such as spatial realignment, coregistration, and spatial normalization, aimed at the spatial alignment of all fMRI data within- and between-subjects, as well as several denoising procedures aimed at minimizing the impact of common noise sources, including physiological and residual subject motion effects, on the BOLD signal time series. The chapter ends with a description of quality control procedures recommended for detecting potential problems in the fMRI data and evaluating its suitability for subsequent statistical analyses.
[ { "created": "Mon, 24 Oct 2022 19:38:45 GMT", "version": "v1" } ]
2022-10-26
[ [ "Nieto-Castanon", "Alfonso", "" ] ]
This chapter describes several procedures used to prepare fMRI data for statistical analyses. It includes the description of common preprocessing steps, such as spatial realignment, coregistration, and spatial normalization, aimed at the spatial alignment of all fMRI data within- and between-subjects, as well as several denoising procedures aimed at minimizing the impact of common noise sources, including physiological and residual subject motion effects, on the BOLD signal time series. The chapter ends with a description of quality control procedures recommended for detecting potential problems in the fMRI data and evaluating its suitability for subsequent statistical analyses.
1802.05539
Irina Kareva
Irina Kareva
Using mathematical modeling to ask meaningful biological questions through combination of bifurcation analysis and population heterogeneity
26 pages, 9 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classical approaches to analyzing dynamical systems, including bifurcation analysis, can provide invaluable insights into the underlying structure of a mathematical model, and the spectrum of all possible dynamical behaviors. However, these models frequently fail to take into account population heterogeneity, which, while critically important to understanding and predicting the behavior of any evolving system, is a common simplification that is made in the analysis of many mathematical models of ecological systems. Attempts to include population heterogeneity frequently result in expanding system dimensionality, effectively preventing qualitative analysis. The Reduction Theorem, or hidden keystone variable (HKV) method, allows incorporating population heterogeneity while still permitting the use of previously existing classical bifurcation analysis. A combination of these methods allows visualization of evolutionary trajectories and making meaningful predictions about the dynamics over time of evolving populations. Here, we discuss three examples of combining these methods to augment understanding of evolving ecological systems. We demonstrate what new meaningful questions can be asked through this approach, and propose that the large existing literature of fully analyzed models can reveal new and meaningful dynamical behaviors with the application of the HKV method, if the right questions are asked.
[ { "created": "Thu, 15 Feb 2018 14:15:30 GMT", "version": "v1" } ]
2018-02-16
[ [ "Kareva", "Irina", "" ] ]
Classical approaches to analyzing dynamical systems, including bifurcation analysis, can provide invaluable insights into the underlying structure of a mathematical model, and the spectrum of all possible dynamical behaviors. However, these models frequently fail to take into account population heterogeneity, which, while critically important to understanding and predicting the behavior of any evolving system, is a common simplification that is made in the analysis of many mathematical models of ecological systems. Attempts to include population heterogeneity frequently result in expanding system dimensionality, effectively preventing qualitative analysis. The Reduction Theorem, or hidden keystone variable (HKV) method, allows incorporating population heterogeneity while still permitting the use of previously existing classical bifurcation analysis. A combination of these methods allows visualization of evolutionary trajectories and making meaningful predictions about the dynamics over time of evolving populations. Here, we discuss three examples of combining these methods to augment understanding of evolving ecological systems. We demonstrate what new meaningful questions can be asked through this approach, and propose that the large existing literature of fully analyzed models can reveal new and meaningful dynamical behaviors with the application of the HKV method, if the right questions are asked.
1209.0312
Spyros Papageorgiou Dr
Spyros Papageorgiou
An explanation of unexpected Hoxd expressions in mutant mice
11 pages, 2 figures
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Hox gene collinearity enigma has often been approached using models based on biomolecular mechanisms. The biophysical model is an alternative approach, speculating that collinearity is caused by physical forces pulling the Hox clusters from a territory where they are inactive to a distinct spatial domain where they are activated in a step-by-step manner. Hox gene translocations have recently been observed in support of the biophysical model. Furthermore, genetic engineering experiments, performed in embryonic mice, gave rise to some unexpected mutant expressions that biomolecular models could not predict. In several cases when anterior Hoxd genes are deleted, the expressions of the genes probed in the mutants are impossible to anticipate. On the contrary, the biophysical model offers a convincing explanation. All these experimental results support the idea of physical forces being responsible for Hox gene collinearity. In order to test the validity of the various models further, certain experiments involving gene deletions are proposed. The biophysical and biomolecular models predict different results for these experiments, hence the expected outcome will confirm or question the validity of these models.
[ { "created": "Mon, 3 Sep 2012 11:41:06 GMT", "version": "v1" } ]
2012-09-04
[ [ "Papageorgiou", "Spyros", "" ] ]
The Hox gene collinearity enigma has often been approached using models based on biomolecular mechanisms. The biophysical model is an alternative approach, speculating that collinearity is caused by physical forces pulling the Hox clusters from a territory where they are inactive to a distinct spatial domain where they are activated in a step-by-step manner. Hox gene translocations have recently been observed in support of the biophysical model. Furthermore, genetic engineering experiments, performed in embryonic mice, gave rise to some unexpected mutant expressions that biomolecular models could not predict. In several cases when anterior Hoxd genes are deleted, the expressions of the genes probed in the mutants are impossible to anticipate. On the contrary, the biophysical model offers a convincing explanation. All these experimental results support the idea of physical forces being responsible for Hox gene collinearity. In order to test the validity of the various models further, certain experiments involving gene deletions are proposed. The biophysical and biomolecular models predict different results for these experiments, hence the expected outcome will confirm or question the validity of these models.
2110.04892
Eitan Asher Mr.
Eitan E. Asher, Maya Slovik, Rae Mitelman, Hagai Bergman, Shlomo Havlin and Shay Moshel
Local Field Potential Journey into the Basal Ganglia
null
null
null
null
q-bio.NC physics.data-an
http://creativecommons.org/licenses/by/4.0/
Local field potentials (LFPs) in the basal ganglia (BG) nuclei of the brain have attracted much research and clinical interest. However, the origin of this signal has been under debate throughout the last decades. The question is whether it is a local subthreshold phenomenon, synaptic input to neurons, or a flow of electrical signals merged as volume conduction, generated by simultaneously firing neurons in the cerebral cortex and obeying the Maxwell equations. In this study, we simultaneously recorded LFPs in a monkey brain from the cerebral cortex, in the frontal lobe and primary motor cortex (M1), and from sites in all BG nuclei: the striatum, globus pallidus, and subthalamic nucleus. All the recordings were taken from a non-human primate model (vervet monkey) during spontaneous activity. Developing and applying a novel method to identify significant cross-correlations (potential links) while removing "spurious" correlations, we obtained a tool that may discriminate between the two major phenomena of synaptic input (which we define as information flow) and volume conduction. We find two major flow paths of field potential that propagate with two different time delays, from the primary motor cortex and from the frontal cortex. Our results indicate that the two flow paths may represent the two mechanisms of volume conduction and information flow.
[ { "created": "Sun, 10 Oct 2021 20:17:53 GMT", "version": "v1" }, { "created": "Mon, 6 Dec 2021 14:29:34 GMT", "version": "v2" }, { "created": "Wed, 16 Feb 2022 11:17:18 GMT", "version": "v3" }, { "created": "Wed, 19 Oct 2022 05:59:11 GMT", "version": "v4" } ]
2022-10-20
[ [ "Asher", "Eitan E.", "" ], [ "Slovik", "Maya", "" ], [ "Mitelman", "Rae", "" ], [ "Bergman", "Hagai", "" ], [ "Havlin", "Shlomo", "" ], [ "Moshel", "Shay", "" ] ]
Local field potentials (LFPs) in the basal ganglia (BG) nuclei of the brain have attracted much research and clinical interest. However, the origin of this signal has been under debate throughout the last decades. The question is whether it is a local subthreshold phenomenon, synaptic input to neurons, or a flow of electrical signals merged as volume conduction, generated by simultaneously firing neurons in the cerebral cortex and obeying the Maxwell equations. In this study, we simultaneously recorded LFPs in a monkey brain from the cerebral cortex, in the frontal lobe and primary motor cortex (M1), and from sites in all BG nuclei: the striatum, globus pallidus, and subthalamic nucleus. All the recordings were taken from a non-human primate model (vervet monkey) during spontaneous activity. Developing and applying a novel method to identify significant cross-correlations (potential links) while removing "spurious" correlations, we obtained a tool that may discriminate between the two major phenomena of synaptic input (which we define as information flow) and volume conduction. We find two major flow paths of field potential that propagate with two different time delays, from the primary motor cortex and from the frontal cortex. Our results indicate that the two flow paths may represent the two mechanisms of volume conduction and information flow.
1905.12570
James Yearsley
James M Yearsley and Jonathan J Halliwell
Contextuality in Human Decision Making in the Presence of Direct Influences: A Comment on Basieva et al. (2019)
5 pages. V2: Substantial Revisions
null
null
null
q-bio.NC quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a recent paper Basieva, Cervantes, Dzhafarov, and Khrennikov (2019) presented a series of experiments which they claimed show evidence for contextuality in human judgments. This was based on a set of modified Bell-like inequalities designed to rule out effects caused by signalling. In this comment we show that it is, however, possible to construct a non-contextual model which explains the experimental data via direct influences, which we take to mean that a measurement outcome has a (model-specific) causal dependence on other measurements. We trace the apparent inconsistency to a definition of signalling which does not account for all possible forms of direct influence. Further, we cast doubt on the idea that any experimental data in psychology could provide conclusive evidence for contextuality beyond that explainable by direct influence.
[ { "created": "Tue, 7 May 2019 14:19:59 GMT", "version": "v1" }, { "created": "Fri, 11 Oct 2019 12:16:36 GMT", "version": "v2" } ]
2019-10-14
[ [ "Yearsley", "James M", "" ], [ "Halliwell", "Jonathan J", "" ] ]
In a recent paper Basieva, Cervantes, Dzhafarov, and Khrennikov (2019) presented a series of experiments which they claimed show evidence for contextuality in human judgments. This was based on a set of modified Bell-like inequalities designed to rule out effects caused by signalling. In this comment we show that it is, however, possible to construct a non-contextual model which explains the experimental data via direct influences, which we take to mean that a measurement outcome has a (model-specific) causal dependence on other measurements. We trace the apparent inconsistency to a definition of signalling which does not account for all possible forms of direct influence. Further, we cast doubt on the idea that any experimental data in psychology could provide conclusive evidence for contextuality beyond that explainable by direct influence.
1611.05137
Takahiro Ezaki
Takahiro Ezaki, Takamitsu Watanabe, Masayuki Ohzeki, Naoki Masuda
Energy landscape analysis of neuroimaging data
22 pages, 4 figures, 1 table
Phil. Trans. R. Soc. A 375, 20160287 (2017)
10.1098/rsta.2016.0287
null
q-bio.NC cond-mat.stat-mech physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational neuroscience models have been used for understanding neural dynamics in the brain and how they may be altered when physiological or other conditions change. We review and develop a data-driven approach to neuroimaging data called the energy landscape analysis. The methods are rooted in statistical physics theory, in particular the Ising model, also known as the (pairwise) maximum entropy model and Boltzmann machine. The methods have been applied to fitting electrophysiological data in neuroscience for a decade, but their use in neuroimaging data is still in its infancy. We first review the methods and discuss some algorithms and technical aspects. Then, we apply the methods to functional magnetic resonance imaging data recorded from healthy individuals to inspect the relationship between the accuracy of fitting, the size of the brain system to be analyzed, and the data length.
[ { "created": "Wed, 16 Nov 2016 04:17:12 GMT", "version": "v1" }, { "created": "Thu, 25 May 2017 14:01:27 GMT", "version": "v2" } ]
2017-05-26
[ [ "Ezaki", "Takahiro", "" ], [ "Watanabe", "Takamitsu", "" ], [ "Ohzeki", "Masayuki", "" ], [ "Masuda", "Naoki", "" ] ]
Computational neuroscience models have been used for understanding neural dynamics in the brain and how they may be altered when physiological or other conditions change. We review and develop a data-driven approach to neuroimaging data called the energy landscape analysis. The methods are rooted in statistical physics theory, in particular the Ising model, also known as the (pairwise) maximum entropy model and Boltzmann machine. The methods have been applied to fitting electrophysiological data in neuroscience for a decade, but their use in neuroimaging data is still in its infancy. We first review the methods and discuss some algorithms and technical aspects. Then, we apply the methods to functional magnetic resonance imaging data recorded from healthy individuals to inspect the relationship between the accuracy of fitting, the size of the brain system to be analyzed, and the data length.
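The pairwise maximum entropy (Ising) fitting at the core of energy landscape analysis can be sketched for a small system, where the partition function is computed by exact enumeration. This is an illustrative sketch rather than the authors' pipeline; the gradient-ascent settings are assumptions.

```python
import itertools
import numpy as np

def fit_pairwise_maxent(samples, n_iter=2000, lr=0.1):
    """Fit fields h and couplings J of the pairwise maximum entropy (Ising)
    model P(s) ~ exp(h.s + 0.5 s.J.s) to +/-1 samples by gradient ascent on
    the exact log-likelihood (partition function by enumeration; small N only)."""
    n_samples, N = samples.shape
    states = np.array(list(itertools.product([-1, 1], repeat=N)))
    m_data = samples.mean(axis=0)             # empirical means <s_i>
    C_data = samples.T @ samples / n_samples  # empirical correlations <s_i s_j>
    h, J = np.zeros(N), np.zeros((N, N))
    for _ in range(n_iter):
        E = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
        p = np.exp(E - E.max())
        p /= p.sum()
        m_model = p @ states
        C_model = states.T @ (states * p[:, None])
        # moment matching: gradient of log-likelihood is the moment gap
        h += lr * (m_data - m_model)
        dJ = lr * (C_data - C_model)
        np.fill_diagonal(dJ, 0.0)  # diagonal is fixed since s_i^2 = 1
        J += dJ
    return h, J
```

The log-likelihood is concave in (h, J), so moment matching converges to the unique maximum entropy fit; for the larger brain systems discussed in the abstract, approximate methods replace the exact enumeration.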
1305.3544
Florian Hartig
Florian Hartig and Carsten F. Dormann
Does "model-free" forecasting really outperform the "true" model? A reply to Perretti et al
Letter submitted to PNAS, with additional supplementary information. R code included in the latex source
Proceedings of the National Academy of Sciences, 110, E3975, 2013
10.1073/pnas.1308603110
null
q-bio.PE nlin.CD stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating population models from uncertain observations is an important problem in ecology. Perretti et al. observed that standard Bayesian state-space solutions to this problem may provide biased parameter estimates when the underlying dynamics are chaotic. Consequently, forecasts based on these estimates showed poor predictive accuracy compared to simple "model-free" methods, which led Perretti et al. to conclude that "Model-free forecasting outperforms the correct mechanistic model for simulated and experimental data". However, a simple modification of the statistical methods also suffices to remove the bias and reverse their results.
[ { "created": "Wed, 15 May 2013 17:01:13 GMT", "version": "v1" } ]
2013-10-28
[ [ "Hartig", "Florian", "" ], [ "Dormann", "Carsten F.", "" ] ]
Estimating population models from uncertain observations is an important problem in ecology. Perretti et al. observed that standard Bayesian state-space solutions to this problem may provide biased parameter estimates when the underlying dynamics are chaotic. Consequently, forecasts based on these estimates showed poor predictive accuracy compared to simple "model-free" methods, which led Perretti et al. to conclude that "Model-free forecasting outperforms the correct mechanistic model for simulated and experimental data". However, a simple modification of the statistical methods also suffices to remove the bias and reverse their results.
2104.01589
Guy Katriel
Guy Katriel
Dispersal-induced growth in a time-periodic environment
null
J. Math. Biol. 85, 24 (2022)
10.1007/s00285-022-01791-7
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dispersal-induced growth (DIG) occurs when two populations with time-varying growth rates, each of which, when isolated, would become extinct, are able to persist and grow exponentially when dispersal among the two populations is present. This work provides a mathematical exploration of this surprising phenomenon, in the context of a deterministic model with periodic variation of growth rates, and characterizes the factors which are important in generating the DIG effect and the corresponding conditions on the parameters involved.
[ { "created": "Sun, 4 Apr 2021 11:29:53 GMT", "version": "v1" }, { "created": "Thu, 8 Apr 2021 14:52:42 GMT", "version": "v2" }, { "created": "Sat, 10 Jul 2021 05:02:07 GMT", "version": "v3" }, { "created": "Thu, 28 Apr 2022 05:14:08 GMT", "version": "v4" } ]
2022-08-31
[ [ "Katriel", "Guy", "" ] ]
Dispersal-induced growth (DIG) occurs when two populations with time-varying growth rates, each of which, when isolated, would become extinct, are able to persist and grow exponentially when dispersal among the two populations is present. This work provides a mathematical exploration of this surprising phenomenon, in the context of a deterministic model with periodic variation of growth rates, and characterizes the factors which are important in generating the DIG effect and the corresponding conditions on the parameters involved.
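The DIG phenomenon can be illustrated numerically with a two-patch linear model whose growth rates alternate in antiphase as square waves, computing the Floquet (monodromy) multiplier over one period. The specific rates below are illustrative choices, not parameters from the paper.

```python
import numpy as np

def expm_sym(A):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def dig_growth_rate(H, L, d, half_period=1.0):
    """Per-unit-time growth rate of x' = A(t) x for two coupled patches whose
    growth rates alternate between H and L in antiphase, with dispersal rate d."""
    A1 = np.array([[H - d, d], [d, L - d]])  # first half-period: patch 1 good
    A2 = np.array([[L - d, d], [d, H - d]])  # second half-period: roles swapped
    M = expm_sym(A2 * half_period) @ expm_sym(A1 * half_period)  # monodromy matrix
    return np.log(max(abs(np.linalg.eigvals(M)))) / (2 * half_period)
```

With H = 4 and L = -4.5, each isolated patch has mean growth rate (H + L)/2 = -0.25 < 0 and goes extinct, yet with dispersal d = 0.1 the coupled system's Floquet exponent is positive: dispersal-induced growth.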
q-bio/0703001
Sidney Redner
T. Antal, P. L. Krapivsky, S. Redner, M. Mailman, B. Chakraborty
Dynamics of Microtubule Growth and Catastrophe
12 pages, 6 figures, 2-column revtex4; version 2: published version for PRE; contains various small changes in response to referee comments
Phys. Rev. E 76, 041907 (2007)
10.1103/PhysRevE.76.041907
null
q-bio.QM cond-mat.stat-mech physics.bio-ph q-bio.BM
null
We investigate a simple model of microtubule dynamics in which a microtubule evolves by: (i) attachment of guanosine triphosphate (GTP) to its end at rate lambda, (ii) GTP converting irreversibly to guanosine diphosphate (GDP) at rate 1, and (iii) detachment of GDP from the end of a microtubule at rate mu. As a function of these elemental rates, the microtubule can grow steadily or its length can fluctuate wildly. A master equation approach is developed to characterize these intriguing features. For mu=0, we find exact expressions for tubule and GTP cap length distributions, as well as power-law length distributions of GTP and GDP islands. For mu=infinity, we find the average time between catastrophes, where the microtubule shrinks to zero length, and extend this approach to also determine the size distribution of avalanches (sequences of consecutive GDP detachment events). We obtain the phase diagram for general rates and verify our predictions by numerical simulations.
[ { "created": "Thu, 1 Mar 2007 19:41:08 GMT", "version": "v1" }, { "created": "Tue, 22 Apr 2008 23:55:13 GMT", "version": "v2" } ]
2008-04-23
[ [ "Antal", "T.", "" ], [ "Krapivsky", "P. L.", "" ], [ "Redner", "S.", "" ], [ "Mailman", "M.", "" ], [ "Chakraborty", "B.", "" ] ]
We investigate a simple model of microtubule dynamics in which a microtubule evolves by: (i) attachment of guanosine triphosphate (GTP) to its end at rate lambda, (ii) GTP converting irreversibly to guanosine diphosphate (GDP) at rate 1, and (iii) detachment of GDP from the end of a microtubule at rate mu. As a function of these elemental rates, the microtubule can grow steadily or its length can fluctuate wildly. A master equation approach is developed to characterize these intriguing features. For mu=0, we find exact expressions for tubule and GTP cap length distributions, as well as power-law length distributions of GTP and GDP islands. For mu=infinity, we find the average time between catastrophes, where the microtubule shrinks to zero length, and extend this approach to also determine the size distribution of avalanches (sequences of consecutive GDP detachment events). We obtain the phase diagram for general rates and verify our predictions by numerical simulations.
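The model's three elementary events can be simulated directly with a Gillespie algorithm. The sketch below is a plausible but unofficial implementation: it tracks the tubule as a list of 'T' (GTP) and 'D' (GDP) units with the tip at the end of the list.

```python
import random

def simulate_microtubule(lam, mu, t_max, seed=0):
    """Gillespie simulation of the three-event microtubule model.
    Events: attach 'T' at the tip (rate lam), convert any single 'T' -> 'D'
    (rate 1 per GTP unit), detach a 'D' at the tip (rate mu)."""
    rng = random.Random(seed)
    tubule = []
    t = 0.0
    while True:
        n_T = tubule.count('T')
        r_attach = lam
        r_convert = n_T  # each GTP unit hydrolyzes at rate 1
        r_detach = mu if (tubule and tubule[-1] == 'D') else 0.0
        total = r_attach + r_convert + r_detach
        t += rng.expovariate(total)  # time to next event
        if t > t_max:
            return tubule
        u = rng.random() * total
        if u < r_attach:
            tubule.append('T')
        elif u < r_attach + r_convert:
            # convert a uniformly random GTP unit to GDP
            k = rng.randrange(n_T)
            idx = [i for i, s in enumerate(tubule) if s == 'T'][k]
            tubule[idx] = 'D'
        else:
            tubule.pop()
```

For mu=0 the tubule only grows, at mean speed lambda, and units far from the tip are almost surely hydrolyzed, matching the cap picture analyzed in the abstract.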
1502.02442
Angelika Manhart
Angelika Manhart, Christian Schmeiser, Nikolaos Sfakianakis, Dietmar Oelz
An Extended Filament Based Lamellipodium Model Produces Various Moving Cell Shapes in the Presence of Chemotactic Signals
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Filament Based Lamellipodium Model (FBLM) is a two-phase two-dimensional continuum model, describing the dynamics of two interacting families of locally parallel actin filaments (C. Schmeiser and D. Oelz, How do cells move? Mathematical modeling of cytoskeleton dynamics and cell migration. Cell mechanics: from single scale-based models to multiscale modeling. Chapman and Hall, 2010). It contains accounts of the filaments' bending stiffness, of adhesion to the substrate, and of cross-links connecting the two families. An extension of the model is presented with contributions from nucleation of filaments by branching, from capping, from contraction by actin-myosin interaction, and from a pressure-like repulsion between parallel filaments due to Coulomb interaction. The effect of a chemoattractant is described by a simple signal transduction model influencing the polymerization speed. Simulations with the extended model show its potential for describing various moving cell shapes, depending on the signal transduction procedure, and for predicting transients between nonmoving and moving states as well as changes of direction.
[ { "created": "Mon, 9 Feb 2015 11:26:13 GMT", "version": "v1" } ]
2015-02-10
[ [ "Manhart", "Angelika", "" ], [ "Schmeiser", "Christian", "" ], [ "Sfakianakis", "Nikolaos", "" ], [ "Oelz", "Dietmar", "" ] ]
The Filament Based Lamellipodium Model (FBLM) is a two-phase two-dimensional continuum model, describing the dynamics of two interacting families of locally parallel actin filaments (C. Schmeiser and D. Oelz, How do cells move? Mathematical modeling of cytoskeleton dynamics and cell migration. Cell mechanics: from single scale-based models to multiscale modeling. Chapman and Hall, 2010). It contains accounts of the filaments' bending stiffness, of adhesion to the substrate, and of cross-links connecting the two families. An extension of the model is presented with contributions from nucleation of filaments by branching, from capping, from contraction by actin-myosin interaction, and from a pressure-like repulsion between parallel filaments due to Coulomb interaction. The effect of a chemoattractant is described by a simple signal transduction model influencing the polymerization speed. Simulations with the extended model show its potential for describing various moving cell shapes, depending on the signal transduction procedure, and for predicting transients between nonmoving and moving states as well as changes of direction.
1405.7963
Fabien Campillo
Coralie Fritsch (INRIA Sophia Antipolis, MISTEA, I3M), J\'er\^ome Harmand (INRIA Sophia Antipolis, LBE), Fabien Campillo (INRIA Sophia Antipolis, MISTEA)
A modeling approach of the chemostat
arXiv admin note: substantial text overlap with arXiv:1308.2411
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Population dynamics, and in particular microbial population dynamics, although intrinsically discrete and random as well as complex, are conventionally represented as systems of deterministic differential equations. We propose to revisit this approach by complementing these classic formalisms with stochastic formalisms, and to explain the links between these representations in terms of mathematical analysis but also in terms of modeling and numerical simulations. We illustrate this approach on the chemostat model.
[ { "created": "Fri, 30 May 2014 19:30:15 GMT", "version": "v1" } ]
2014-06-02
[ [ "Fritsch", "Coralie", "", "INRIA Sophia Antipolis, MISTEA, I3M" ], [ "Harmand", "Jérôme", "", "INRIA Sophia Antipolis, LBE" ], [ "Campillo", "Fabien", "", "INRIA Sophia\n Antipolis, MISTEA" ] ]
Population dynamics, and in particular microbial population dynamics, although intrinsically discrete and random as well as complex, are conventionally represented as systems of deterministic differential equations. We propose to revisit this approach by complementing these classic formalisms with stochastic formalisms, and to explain the links between these representations in terms of mathematical analysis but also in terms of modeling and numerical simulations. We illustrate this approach on the chemostat model.
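The deterministic side of the chemostat model can be sketched with a simple Euler integration of the classic Monod equations; the parameter values below are illustrative, not taken from the paper.

```python
def simulate_chemostat(D=0.3, s_in=2.0, mu_max=1.0, K=1.0, Y=1.0,
                       s0=2.0, x0=0.1, dt=0.01, t_max=200.0):
    """Euler integration of the classic chemostat ODEs
        ds/dt = D (s_in - s) - mu(s) x / Y,   dx/dt = (mu(s) - D) x,
    with Monod uptake mu(s) = mu_max s / (K + s).
    s is substrate concentration, x is biomass, D is the dilution rate."""
    s, x = s0, x0
    for _ in range(int(t_max / dt)):
        mu = mu_max * s / (K + s)
        s, x = (s + dt * (D * (s_in - s) - mu * x / Y),
                x + dt * (mu - D) * x)
    return s, x
```

At the coexistence steady state mu(s*) = D, hence s* = K D / (mu_max - D) and x* = Y (s_in - s*); the stochastic formalisms the abstract refers to would replace these ODEs with a jump process whose mean field recovers them.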
1608.00108
Roland Kr\"amer
Ulrich Warttinger, Christina Giese, Job Harenberg, Roland Kr\"amer
Direct quantification of brown algae-derived fucoidans in human plasma by a fluorescent probe assay
article, 15 pages, 2 schemes, 3 figures, 3 tables
null
null
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fucoidan is a generic term for a class of fucose-rich, structurally diverse sulfated polysaccharides that are found in brown algae and other marine organisms. Depending on the species from which the fucoidan is extracted, a wide variety of biological activities, including antitumor, anti-inflammatory, immune-modulating, antiviral, antibacterial, and pro- and anticoagulant activities, has been described. Fucoidans have the advantage of low toxicity and oral bioavailability and are viable drug candidates; preclinical and pilot clinical trials show promising results. The availability of robust assays, in particular for analysing the blood levels of fucoidan, is a fundamental requirement for pharmacokinetic analysis in drug development projects. This contribution describes the application of a commercially available, protein-free fluorescent probe assay (Heparin Red) for the direct quantification of several fucoidans (from Fucus vesiculosus, Macrocystis pyrifera, and Undaria pinnatifida) in human plasma. With only minor adaptation of the established protocol for heparin detection, a concentration range of 0.5 to 20 micrograms per mL fucoidan can be addressed. A preliminary analysis of matrix effects suggests acceptable interindividual variability and no interference by endogenous chondroitin sulfate. This study identifies the Heparin Red assay as a simple, time-saving mix-and-read method for the quantification of fucoidans in human plasma.
[ { "created": "Sat, 30 Jul 2016 12:06:19 GMT", "version": "v1" } ]
2016-08-02
[ [ "Warttinger", "Ulrich", "" ], [ "Giese", "Christina", "" ], [ "Harenberg", "Job", "" ], [ "Krämer", "Roland", "" ] ]
Fucoidan is a generic term for a class of fucose-rich, structurally diverse sulfated polysaccharides that are found in brown algae and other marine organisms. Depending on the species from which the fucoidan is extracted, a wide variety of biological activities, including antitumor, anti-inflammatory, immune-modulating, antiviral, antibacterial, and pro- and anticoagulant activities, has been described. Fucoidans have the advantage of low toxicity and oral bioavailability and are viable drug candidates; preclinical and pilot clinical trials show promising results. The availability of robust assays, in particular for analysing the blood levels of fucoidan, is a fundamental requirement for pharmacokinetic analysis in drug development projects. This contribution describes the application of a commercially available, protein-free fluorescent probe assay (Heparin Red) for the direct quantification of several fucoidans (from Fucus vesiculosus, Macrocystis pyrifera, and Undaria pinnatifida) in human plasma. With only minor adaptation of the established protocol for heparin detection, a concentration range of 0.5 to 20 micrograms per mL fucoidan can be addressed. A preliminary analysis of matrix effects suggests acceptable interindividual variability and no interference by endogenous chondroitin sulfate. This study identifies the Heparin Red assay as a simple, time-saving mix-and-read method for the quantification of fucoidans in human plasma.
2201.08714
Shuwen Yang
Shuwen Yang, Tianyu Wen, Ziyao Li and Guojie Song
Equivalent Distance Geometry Error for Molecular Conformation Comparison
null
null
null
null
q-bio.BM cs.LG physics.chem-ph
http://creativecommons.org/licenses/by/4.0/
Straightforward conformation generation models, which generate 3-D structures directly from input molecular graphs, play an important role in various molecular tasks with machine learning, such as 3D-QSAR and virtual screening in drug design. However, existing loss functions in these models are either computationally expensive or fail to guarantee equivalence during optimization, meaning that different items are treated unfairly, which results in poor local geometry in the generated conformations. We therefore propose Equivalent Distance Geometry Error (EDGE) to calculate the differential discrepancy between conformations, in which the three essential kinds of factors in conformation geometry (i.e., bond lengths, bond angles and dihedral angles) are equivalently optimized with certain weights. In the improved version of our method, the optimization features minimizing linear transformations of atom-pair distances within 3 hops. Extensive experiments show that, compared with existing loss functions, EDGE performs effectively and efficiently in two tasks under the same backbones.
[ { "created": "Sat, 13 Nov 2021 09:04:55 GMT", "version": "v1" }, { "created": "Tue, 15 Mar 2022 04:39:32 GMT", "version": "v2" } ]
2022-03-16
[ [ "Yang", "Shuwen", "" ], [ "Wen", "Tianyu", "" ], [ "Li", "Ziyao", "" ], [ "Song", "Guojie", "" ] ]
Straightforward conformation generation models, which generate 3-D structures directly from input molecular graphs, play an important role in various molecular tasks with machine learning, such as 3D-QSAR and virtual screening in drug design. However, existing loss functions in these models are either computationally expensive or fail to guarantee equivalence during optimization, meaning that different items are treated unfairly, which results in poor local geometry in the generated conformations. We therefore propose Equivalent Distance Geometry Error (EDGE) to calculate the differential discrepancy between conformations, in which the three essential kinds of factors in conformation geometry (i.e., bond lengths, bond angles and dihedral angles) are equivalently optimized with certain weights. In the improved version of our method, the optimization features minimizing linear transformations of atom-pair distances within 3 hops. Extensive experiments show that, compared with existing loss functions, EDGE performs effectively and efficiently in two tasks under the same backbones.
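A simplified distance-geometry loss in the spirit of EDGE can be sketched as follows. The paper's exact weighting and linear transformations are not reproduced here; the hop weights are placeholder assumptions, motivated by the fact that 1-hop pairs constrain bond lengths, 2-hop pairs bond angles, and 3-hop pairs dihedral angles.

```python
import numpy as np
from collections import deque

def hop_pairs(adj, max_hop=3):
    """All atom pairs (i, j), i < j, within max_hop bonds on the bond graph
    (adj is an adjacency list), mapped to their hop count via BFS."""
    pairs = {}
    for src in range(len(adj)):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            v = queue.popleft()
            if dist[v] == max_hop:
                continue
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        for v, h in dist.items():
            if src < v:
                pairs[(src, v)] = h
    return pairs

def edge_like_loss(pred, ref, adj, weights=None):
    """Weighted mean squared error between predicted and reference atom-pair
    distances, restricted to pairs within 3 hops on the bond graph."""
    weights = weights or {1: 1.0, 2: 1.0, 3: 1.0}
    pairs = hop_pairs(adj)
    total = sum(weights[h] * (np.linalg.norm(pred[i] - pred[j])
                              - np.linalg.norm(ref[i] - ref[j])) ** 2
                for (i, j), h in pairs.items())
    return total / max(len(pairs), 1)
```

Because the loss depends only on interatomic distances, it is invariant to rigid rotations and translations of either conformation, which is one reason distance-based losses are popular for conformation comparison.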
2204.11843
Yuanxiang Gao
Yuanxiang Gao
A Computational Theory of Learning Flexible Reward-Seeking Behavior with Place Cells
14 pages, 23 figures
null
null
null
q-bio.NC cs.AI cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An important open question in computational neuroscience is how various spatially tuned neurons, such as place cells, are used to support the learning of reward-seeking behavior of an animal. Existing computational models either lack biological plausibility or fall short of behavioral flexibility when environments change. In this paper, we propose a computational theory that achieves behavioral flexibility with better biological plausibility. We first train a mixture of Gaussian distributions to model the ensemble of firing fields of place cells. Then we propose a Hebbian-like rule to learn the synaptic strength matrix among place cells. This matrix is interpreted as the transition rate matrix of a continuous-time Markov chain to generate the sequential replay of place cells. During replay, the synaptic strengths from place cells to medium spiny neurons (MSN) are learned by a temporal-difference-like rule to store place-reward associations. After replay, the activation of MSN will ramp up when an animal approaches the rewarding place, so the animal can move along the direction where the MSN activation is increasing to find the rewarding place. We implement our theory into a high-fidelity virtual rat in the MuJoCo physics simulator. In a complex maze, the rat shows significantly better learning efficiency and behavioral flexibility than a rat that implements a neuroscience-inspired reinforcement learning algorithm, deep Q-network.
[ { "created": "Fri, 22 Apr 2022 16:06:44 GMT", "version": "v1" }, { "created": "Tue, 17 May 2022 05:39:54 GMT", "version": "v2" } ]
2022-05-18
[ [ "Gao", "Yuanxiang", "" ] ]
An important open question in computational neuroscience is how various spatially tuned neurons, such as place cells, are used to support the learning of reward-seeking behavior of an animal. Existing computational models either lack biological plausibility or fall short of behavioral flexibility when environments change. In this paper, we propose a computational theory that achieves behavioral flexibility with better biological plausibility. We first train a mixture of Gaussian distributions to model the ensemble of firing fields of place cells. Then we propose a Hebbian-like rule to learn the synaptic strength matrix among place cells. This matrix is interpreted as the transition rate matrix of a continuous-time Markov chain to generate the sequential replay of place cells. During replay, the synaptic strengths from place cells to medium spiny neurons (MSN) are learned by a temporal-difference-like rule to store place-reward associations. After replay, the activation of MSN will ramp up when an animal approaches the rewarding place, so the animal can move along the direction where the MSN activation is increasing to find the rewarding place. We implement our theory into a high-fidelity virtual rat in the MuJoCo physics simulator. In a complex maze, the rat shows significantly better learning efficiency and behavioral flexibility than a rat that implements a neuroscience-inspired reinforcement learning algorithm, deep Q-network.
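Generating a replay-like sequence from a transition rate matrix, as the abstract describes, amounts to sampling a continuous-time Markov chain. The sketch below is generic, using an arbitrary rate matrix rather than the paper's learned synaptic strength matrix.

```python
import numpy as np

def sample_ctmc(Q, state0, t_max, rng):
    """Sample a continuous-time Markov chain trajectory from rate matrix Q,
    where Q[i, j] >= 0 (i != j) is the i -> j transition rate and each row
    sums to zero. Returns a list of (jump_time, state) pairs."""
    t, s = 0.0, state0
    path = [(t, s)]
    while True:
        rate = -Q[s, s]              # total exit rate from state s
        if rate <= 0:
            return path              # absorbing state: no further jumps
        t += rng.exponential(1.0 / rate)   # exponential holding time
        if t > t_max:
            return path
        probs = np.asarray(Q[s], dtype=float).copy()
        probs[s] = 0.0
        probs /= probs.sum()         # jump probabilities proportional to rates
        s = int(rng.choice(len(probs), p=probs))
        path.append((t, s))
```

Interpreting place cells as states, a sampled path is a temporally ordered activation sequence, which is the sense in which the rate matrix "generates" replay.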
2310.17226
Francois Boue
Maja Napieraj (MMB), Annie Br\^ulet (MMB), Javier Perez, Fran\c{c}ois Bou\'e (MMB), Evelyne Lutton
In situ digestion of canola protein gel observed by synchrotron X-Ray Scattering
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the issue of structure changes of a canola protein gel (as a solid food model) during gastrointestinal digestion. We present a method for synchrotron Small-Angle X-ray Scattering analysis of the digestion of a gel in a capillary. Scanning the capillary allows tracking the digestion under diffusion of enzymatic juices. The fitting parameters characterizing the sizes, scattering intensities and structures allow us to distinguish the compact, unfolded or aggregated states of proteins. The evolutions of these parameters make it possible to detail the complex changes of proteins during gel digestion, involving back-and-forth evolutions with protein unfolding (1st and 3rd steps) and re-compaction (2nd step) due to gastrointestinal pH and enzyme actions, before final protein scissions (4th step) resulting in small peptides. This complexity is related to the wide ranges of successive pH and enzyme activity acting on large and charged protein assemblies. Digestion is therefore impacted by the conditions of food preparation.
[ { "created": "Thu, 26 Oct 2023 08:24:15 GMT", "version": "v1" } ]
2023-10-27
[ [ "Napieraj", "Maja", "", "MMB" ], [ "Brûlet", "Annie", "", "MMB" ], [ "Perez", "Javier", "", "MMB" ], [ "Boué", "François", "", "MMB" ], [ "Lutton", "Evelyne", "" ] ]
We address the issue of structure changes of a canola protein gel (as a solid food model) during gastrointestinal digestion. We present a method for synchrotron Small-Angle X-ray Scattering analysis of the digestion of a gel in a capillary. Scanning the capillary allows tracking the digestion under diffusion of enzymatic juices. The fitting parameters characterizing the sizes, scattering intensities and structures allow us to distinguish the compact, unfolded or aggregated states of proteins. The evolutions of these parameters make it possible to detail the complex changes of proteins during gel digestion, involving back-and-forth evolutions with protein unfolding (1st and 3rd steps) and re-compaction (2nd step) due to gastrointestinal pH and enzyme actions, before final protein scissions (4th step) resulting in small peptides. This complexity is related to the wide ranges of successive pH and enzyme activity acting on large and charged protein assemblies. Digestion is therefore impacted by the conditions of food preparation.
1309.2055
Tomasz Rutkowski
Tomasz M. Rutkowski
Beyond visual P300 based brain-computer interfacing paradigms
7 pages, 5 figures, Proceedings of the Third Postgraduate Consortium International Workshop on Innovations in Information and Communication Science and Technology, (E. Cooper, G. A. Kobzev, A. F. Uvarov, and V. V. Kryssanov, eds.), (Tomsk, Russia), pp. 277-283, TUSUR and Ritsumeikan, September 2-5, 2013. ISBN 978-5-86889-7
null
null
null
q-bio.NC cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper reviews and summarizes recent developments in spatial auditory and tactile brain-computer interfacing neurotechnology applications. It serves as a summary of the latest developments in "non-visual" brain-computer interfacing solutions, presented in a tutorial delivered by the author at the IICST 2013 workshop. The novel concepts of unimodal auditory or tactile paradigms, as well as a combined bimodal paradigm, are described and supported with recent research results from our BCI-lab research group at the Life Science Center, University of Tsukuba, Japan. The newly developed experimental paradigms fit perfectly the needs of paralyzed or hearing-impaired users, in the case of tactile stimuli, as well as of able users who cannot utilize vision in computer or machine interaction (driving or operating machinery requires undisturbed eyesight). We present and review the EEG event-related potential responses useful for brain-computer interfacing applications beyond state-of-the-art visual paradigms. In conclusion, the recent results are discussed and suggestions for further applications are drawn.
[ { "created": "Mon, 9 Sep 2013 07:18:17 GMT", "version": "v1" } ]
2013-09-10
[ [ "Rutkowski", "Tomasz M.", "" ] ]
The paper reviews and summarizes recent developments in spatial auditory and tactile brain-computer interfacing neurotechnology applications. It serves as a summary of the latest developments in "non-visual" brain-computer interfacing solutions, presented in a tutorial delivered by the author at the IICST 2013 workshop. The novel concepts of unimodal auditory or tactile paradigms, as well as a combined bimodal paradigm, are described and supported with recent research results from our BCI-lab research group at the Life Science Center, University of Tsukuba, Japan. The newly developed experimental paradigms fit perfectly the needs of paralyzed or hearing-impaired users, in the case of tactile stimuli, as well as of able users who cannot utilize vision in computer or machine interaction (driving or operating machinery requires undisturbed eyesight). We present and review the EEG event-related potential responses useful for brain-computer interfacing applications beyond state-of-the-art visual paradigms. In conclusion, the recent results are discussed and suggestions for further applications are drawn.
1208.1604
Sanzo Miyazawa
Sanzo Miyazawa
Inference of Co-Evolving Site Pairs: an Excellent Predictor of Contact Residue Pairs in Protein 3D structures
17 pages, 4 figures, and 4 tables with supplementary information of 5 figures
PLoS ONE 8(1): e54252, 2013
10.1371/journal.pone.0054252
null
q-bio.BM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Residue-residue interactions that fold a protein into a unique three-dimensional structure and make it play a specific function impose structural and functional constraints on each residue site. Selective constraints on residue sites are recorded in amino acid orders in homologous sequences and also in the evolutionary trace of amino acid substitutions. A challenge is to extract direct dependences between residue sites by removing indirect dependences through other residues within a protein or even through other molecules. Recent attempts of disentangling direct from indirect dependences of amino acid types between residue positions in multiple sequence alignments have revealed that the strength of inferred residue pair couplings is an excellent predictor of residue-residue proximity in folded structures. Here, we report an alternative attempt of inferring co-evolving site pairs from concurrent and compensatory substitutions between sites in each branch of a phylogenetic tree. First, branch lengths of a phylogenetic tree inferred by the neighbor-joining method are optimized as well as other parameters by maximizing a likelihood of the tree in a mechanistic codon substitution model. Mean changes of quantities, which are characteristic of concurrent and compensatory substitutions, accompanied by substitutions at each site in each branch of the tree are estimated with the likelihood of each substitution. Partial correlation coefficients of the characteristic changes along branches between sites are calculated and used to rank co-evolving site pairs. Accuracy of contact prediction based on the present co-evolution score is comparable to that achieved by a maximum entropy model of protein sequences for 15 protein families taken from the Pfam release 26.0. Besides, this excellent accuracy indicates that compensatory substitutions are significant in protein evolution.
[ { "created": "Wed, 8 Aug 2012 07:50:05 GMT", "version": "v1" } ]
2013-01-18
[ [ "Miyazawa", "Sanzo", "" ] ]
Residue-residue interactions that fold a protein into a unique three-dimensional structure and make it play a specific function impose structural and functional constraints on each residue site. Selective constraints on residue sites are recorded in amino acid orders in homologous sequences and also in the evolutionary trace of amino acid substitutions. A challenge is to extract direct dependences between residue sites by removing indirect dependences through other residues within a protein or even through other molecules. Recent attempts of disentangling direct from indirect dependences of amino acid types between residue positions in multiple sequence alignments have revealed that the strength of inferred residue pair couplings is an excellent predictor of residue-residue proximity in folded structures. Here, we report an alternative attempt of inferring co-evolving site pairs from concurrent and compensatory substitutions between sites in each branch of a phylogenetic tree. First, branch lengths of a phylogenetic tree inferred by the neighbor-joining method are optimized as well as other parameters by maximizing a likelihood of the tree in a mechanistic codon substitution model. Mean changes of quantities, which are characteristic of concurrent and compensatory substitutions, accompanied by substitutions at each site in each branch of the tree are estimated with the likelihood of each substitution. Partial correlation coefficients of the characteristic changes along branches between sites are calculated and used to rank co-evolving site pairs. Accuracy of contact prediction based on the present co-evolution score is comparable to that achieved by a maximum entropy model of protein sequences for 15 protein families taken from the Pfam release 26.0. Moreover, this accuracy indicates that compensatory substitutions are significant in protein evolution.
1110.6194
Joseph Rusinko
Joe Rusinko, Brian Hipp
Invariant Based Quartet Puzzling
9 pages 1 figure
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional Quartet Puzzling algorithms use maximum likelihood methods to reconstruct quartet trees, and a puzzling algorithm to combine these quartets into a tree for the full collection of $n$ taxa. We propose a variation of Quartet Puzzling in which the quartet trees are reconstructed using biologically symmetric invariants. We find that under certain conditions, invariant based quartet puzzling outperforms Quartet Puzzling using maximum likelihood.
[ { "created": "Thu, 27 Oct 2011 20:27:58 GMT", "version": "v1" } ]
2011-10-31
[ [ "Rusinko", "Joe", "" ], [ "Hipp", "Brian", "" ] ]
Traditional Quartet Puzzling algorithms use maximum likelihood methods to reconstruct quartet trees, and a puzzling algorithm to combine these quartets into a tree for the full collection of $n$ taxa. We propose a variation of Quartet Puzzling in which the quartet trees are reconstructed using biologically symmetric invariants. We find that under certain conditions, invariant based quartet puzzling outperforms Quartet Puzzling using maximum likelihood.
1309.3640
Alain Barrat
Philippe Vanhems, Alain Barrat, Ciro Cattuto, Jean-Fran\c{c}ois Pinton, Nagham Khanafer, Corinne R\'egis, Byeul-a Kim, Brigitte Comte, Nicolas Voirin
Estimating Potential Infection Transmission Routes in Hospital Wards Using Wearable Proximity Sensors
null
PLoS ONE 8(9): e73970 (2013)
10.1371/journal.pone.0073970
null
q-bio.QM physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contacts between patients, patients and health care workers (HCWs) and among HCWs represent one of the important routes of transmission of hospital-acquired infections (HAI). A detailed description and quantification of contacts in hospitals provides key information for HAIs epidemiology and for the design and validation of control measures. We used wearable sensors to detect close-range interactions ("contacts") between individuals in the geriatric unit of a university hospital. Contact events were measured with a spatial resolution of about 1.5 meters and a temporal resolution of 20 seconds. The study included 46 HCWs and 29 patients and lasted for 4 days and 4 nights. 14037 contacts were recorded. The number and duration of contacts varied between mornings, afternoons and nights, and contact matrices describing the mixing patterns between HCW and patients were built for each time period. Contact patterns were qualitatively similar from one day to the next. 38% of the contacts occurred between pairs of HCWs and 6 HCWs accounted for 42% of all the contacts including at least one patient, suggesting a population of individuals who could potentially act as super-spreaders. Wearable sensors represent a novel tool for the measurement of contact patterns in hospitals. The collected data provides information on important aspects that impact the spreading patterns of infectious diseases, such as the strong heterogeneity of contact numbers and durations across individuals, the variability in the number of contacts during a day, and the fraction of repeated contacts across days. This variability is associated with a marked statistical stability of contact and mixing patterns across days. Our results highlight the need for such measurement efforts in order to correctly inform mathematical models of HAIs and use them to inform the design and evaluation of prevention strategies.
[ { "created": "Sat, 14 Sep 2013 09:46:52 GMT", "version": "v1" } ]
2013-09-17
[ [ "Vanhems", "Philippe", "" ], [ "Barrat", "Alain", "" ], [ "Cattuto", "Ciro", "" ], [ "Pinton", "Jean-François", "" ], [ "Khanafer", "Nagham", "" ], [ "Régis", "Corinne", "" ], [ "Kim", "Byeul-a", "" ], [ "Comte", "Brigitte", "" ], [ "Voirin", "Nicolas", "" ] ]
Contacts between patients, patients and health care workers (HCWs) and among HCWs represent one of the important routes of transmission of hospital-acquired infections (HAI). A detailed description and quantification of contacts in hospitals provides key information for HAIs epidemiology and for the design and validation of control measures. We used wearable sensors to detect close-range interactions ("contacts") between individuals in the geriatric unit of a university hospital. Contact events were measured with a spatial resolution of about 1.5 meters and a temporal resolution of 20 seconds. The study included 46 HCWs and 29 patients and lasted for 4 days and 4 nights. 14037 contacts were recorded. The number and duration of contacts varied between mornings, afternoons and nights, and contact matrices describing the mixing patterns between HCW and patients were built for each time period. Contact patterns were qualitatively similar from one day to the next. 38% of the contacts occurred between pairs of HCWs and 6 HCWs accounted for 42% of all the contacts including at least one patient, suggesting a population of individuals who could potentially act as super-spreaders. Wearable sensors represent a novel tool for the measurement of contact patterns in hospitals. The collected data provides information on important aspects that impact the spreading patterns of infectious diseases, such as the strong heterogeneity of contact numbers and durations across individuals, the variability in the number of contacts during a day, and the fraction of repeated contacts across days. This variability is associated with a marked statistical stability of contact and mixing patterns across days. Our results highlight the need for such measurement efforts in order to correctly inform mathematical models of HAIs and use them to inform the design and evaluation of prevention strategies.
2101.05546
Sylvia N\"urnberg
Alexander Denker, Anastasia Steshina, Theresa Grooss, Frank Ueckert, Sylvia N\"urnberg
Feature reduction for machine learning on molecular features: The GeneScore
11 pages, 9 figures, 4 tables
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
We present the GeneScore, a concept of feature reduction for Machine Learning analysis of biomedical data. Using expert knowledge, the GeneScore integrates different molecular data types into a single score. We show that the GeneScore is superior to a binary matrix in the classification of cancer entities from SNV, Indel, CNV, gene fusion and gene expression data. The GeneScore is a straightforward way to facilitate state-of-the-art analysis, while making use of the available scientific knowledge on the nature of molecular data features used.
[ { "created": "Thu, 14 Jan 2021 10:58:39 GMT", "version": "v1" } ]
2021-01-15
[ [ "Denker", "Alexander", "" ], [ "Steshina", "Anastasia", "" ], [ "Grooss", "Theresa", "" ], [ "Ueckert", "Frank", "" ], [ "Nürnberg", "Sylvia", "" ] ]
We present the GeneScore, a concept of feature reduction for Machine Learning analysis of biomedical data. Using expert knowledge, the GeneScore integrates different molecular data types into a single score. We show that the GeneScore is superior to a binary matrix in the classification of cancer entities from SNV, Indel, CNV, gene fusion and gene expression data. The GeneScore is a straightforward way to facilitate state-of-the-art analysis, while making use of the available scientific knowledge on the nature of molecular data features used.
0806.2181
Lee Altenberg
Lee Altenberg
The Evolutionary Reduction Principle for Linear Variation in Genetic Transmission
22 pages, 1 figure
Bulletin of Mathematical Biology, Volume 71, Number 5, 1264-1284 (2009)
10.1007/s11538-009-9401-2
null
q-bio.PE math.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolution of genetic systems has been analyzed through the use of modifier gene models, in which a neutral gene is posited to control the transmission of other genes under selection. Analysis of modifier gene models has found the manifestations of an "evolutionary reduction principle": in a population near equilibrium, a new modifier allele that scales equally all transition probabilities between different genotypes under selection can invade if and only if it reduces the transition probabilities. Analytical results on the reduction principle have always required some set of constraints for tractability: limitations to one or two selected loci, two alleles per locus, specific selection regimes or weak selection, specific genetic processes being modified, extreme or infinitesimal effects of the modifier allele, or tight linkage between modifier and selected loci. Here, I prove the reduction principle in the absence of any of these constraints, confirming a twenty-year old conjecture. The proof is obtained by a wider application of Karlin's Theorem 5.2 (1982) and its extension to ML-matrices, substochastic matrices, and reducible matrices.
[ { "created": "Fri, 13 Jun 2008 03:56:42 GMT", "version": "v1" } ]
2013-02-04
[ [ "Altenberg", "Lee", "" ] ]
The evolution of genetic systems has been analyzed through the use of modifier gene models, in which a neutral gene is posited to control the transmission of other genes under selection. Analysis of modifier gene models has found the manifestations of an "evolutionary reduction principle": in a population near equilibrium, a new modifier allele that scales equally all transition probabilities between different genotypes under selection can invade if and only if it reduces the transition probabilities. Analytical results on the reduction principle have always required some set of constraints for tractability: limitations to one or two selected loci, two alleles per locus, specific selection regimes or weak selection, specific genetic processes being modified, extreme or infinitesimal effects of the modifier allele, or tight linkage between modifier and selected loci. Here, I prove the reduction principle in the absence of any of these constraints, confirming a twenty-year old conjecture. The proof is obtained by a wider application of Karlin's Theorem 5.2 (1982) and its extension to ML-matrices, substochastic matrices, and reducible matrices.
1812.06234
Yogatheesan Varatharajah
Yogatheesan Varatharajah, Brent Berry, Jan Cimbalnik, Vaclav Kremen, Jamie Van Gompel, Matt Stead, Benjamin Brinkmann, Ravishankar Iyer, and Gregory Worrell
Integrating Artificial Intelligence with Real-time Intracranial EEG Monitoring to Automate Interictal Identification of Seizure Onset Zones in Focal Epilepsy
25 pages, Journal of neural engineering (2018)
null
10.1088/1741-2552/aac960
null
q-bio.NC cs.AI q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An ability to map seizure-generating brain tissue, i.e., the seizure onset zone (SOZ), without recording actual seizures could reduce the duration of invasive EEG monitoring for patients with drug-resistant epilepsy. A widely-adopted practice in the literature is to compare the incidence (events/time) of putative pathological electrophysiological biomarkers associated with epileptic brain tissue with the SOZ determined from spontaneous seizures recorded with intracranial EEG, primarily using a single biomarker. Clinical translation of the previous efforts suffers from their inability to generalize across multiple patients because of (a) the inter-patient variability and (b) the temporal variability in the epileptogenic activity. Here, we report an artificial intelligence-based approach for combining multiple interictal electrophysiological biomarkers and their temporal characteristics as a way of accounting for the above barriers and show that it can reliably identify seizure onset zones in a study cohort of 82 patients who underwent evaluation for drug-resistant epilepsy. Our investigation provides evidence that utilizing the complementary information provided by multiple electrophysiological biomarkers and their temporal characteristics can significantly improve the localization potential compared to previously published single-biomarker incidence-based approaches, resulting in an average area under ROC curve (AUC) value of 0.73 in a cohort of 82 patients. Our results also suggest that recording durations between ninety minutes and two hours are sufficient to localize SOZs with accuracies that may prove clinically relevant. The successful validation of our approach on a large cohort of 82 patients warrants future investigation on the feasibility of utilizing intra-operative EEG monitoring and artificial intelligence to localize epileptogenic brain tissue.
[ { "created": "Sat, 15 Dec 2018 05:15:40 GMT", "version": "v1" } ]
2018-12-18
[ [ "Varatharajah", "Yogatheesan", "" ], [ "Berry", "Brent", "" ], [ "Cimbalnik", "Jan", "" ], [ "Kremen", "Vaclav", "" ], [ "Van Gompel", "Jamie", "" ], [ "Stead", "Matt", "" ], [ "Brinkmann", "Benjamin", "" ], [ "Iyer", "Ravishankar", "" ], [ "Worrell", "Gregory", "" ] ]
An ability to map seizure-generating brain tissue, i.e., the seizure onset zone (SOZ), without recording actual seizures could reduce the duration of invasive EEG monitoring for patients with drug-resistant epilepsy. A widely-adopted practice in the literature is to compare the incidence (events/time) of putative pathological electrophysiological biomarkers associated with epileptic brain tissue with the SOZ determined from spontaneous seizures recorded with intracranial EEG, primarily using a single biomarker. Clinical translation of the previous efforts suffers from their inability to generalize across multiple patients because of (a) the inter-patient variability and (b) the temporal variability in the epileptogenic activity. Here, we report an artificial intelligence-based approach for combining multiple interictal electrophysiological biomarkers and their temporal characteristics as a way of accounting for the above barriers and show that it can reliably identify seizure onset zones in a study cohort of 82 patients who underwent evaluation for drug-resistant epilepsy. Our investigation provides evidence that utilizing the complementary information provided by multiple electrophysiological biomarkers and their temporal characteristics can significantly improve the localization potential compared to previously published single-biomarker incidence-based approaches, resulting in an average area under ROC curve (AUC) value of 0.73 in a cohort of 82 patients. Our results also suggest that recording durations between ninety minutes and two hours are sufficient to localize SOZs with accuracies that may prove clinically relevant. The successful validation of our approach on a large cohort of 82 patients warrants future investigation on the feasibility of utilizing intra-operative EEG monitoring and artificial intelligence to localize epileptogenic brain tissue.
2010.12600
William Lemaire
William Lemaire (1), Maher Benhouria (1), Konin Koua (1), Wei Tong (2), Gabriel Martin-Hardy (1), Melanie Stamp (3), Kumaravelu Ganesan (3), Louis-Philippe Gauthier (1), Marwan Besrour (1), Arman Ahnood (4), David John Garrett (4), S\'ebastien Roy (1), Michael Ibbotson (2,5), Steven Prawer (3), R\'ejean Fontaine (1) ((1) Interdisciplinary Institute for Technological Innovation (3IT), Universit\'e de Sherbrooke, Sherbrooke, Quebec, Canada, (2) National Vision Research Institute, Australian College of Optometry, Carlton, Victoria, Australia, (3) School of Physics, The University of Melbourne, Parkville, Victoria, Australia, (4) School of Engineering, RMIT University, Melbourne, Victoria, Australia, (5) Department of Optometry and Vision Sciences, The University of Melbourne, Parkville, Victoria, Australia)
Feasibility Assessment of an Optically Powered Digital Retinal Prosthesis Architecture for Retinal Ganglion Cell Stimulation
11 pages, 13 figures
null
null
null
q-bio.NC cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clinical trials previously demonstrated the notable capacity to elicit visual percepts in blind patients affected with retinal diseases by electrically stimulating the remaining neurons on the retina. However, these implants restored very limited visual acuity and required transcutaneous cables traversing the eyeball, leading to reduced reliability and complex surgery with high postoperative infection risks. To overcome the limitations imposed by cables, a retinal implant architecture in which near-infrared illumination carries both power and data through the pupil to a digital stimulation controller is presented. A high efficiency multi-junction photovoltaic cell transduces the optical power to a CMOS stimulator capable of delivering flexible interleaved sequential stimulation through a diamond microelectrode array. To demonstrate the capacity to elicit a neural response with this approach while complying with the optical irradiance limit at the pupil, fluorescence imaging with a calcium indicator is used on a degenerate rat retina. The power delivered by the laser at the permissible irradiance of 4 mW/mm2 at 850 nm is shown to be sufficient to both power the stimulator ASIC and elicit a response in retinal ganglion cells (RGCs), with the ability to generate up to 35 000 pulses per second at the average stimulation threshold. This confirms the feasibility of generating a response in RGCs with an infrared-powered digital architecture capable of delivering complex sequential stimulation patterns at high repetition rates, albeit with some limitations.
[ { "created": "Fri, 23 Oct 2020 18:11:56 GMT", "version": "v1" }, { "created": "Fri, 13 Oct 2023 14:23:52 GMT", "version": "v2" } ]
2023-10-16
[ [ "Lemaire", "William", "" ], [ "Benhouria", "Maher", "" ], [ "Koua", "Konin", "" ], [ "Tong", "Wei", "" ], [ "Martin-Hardy", "Gabriel", "" ], [ "Stamp", "Melanie", "" ], [ "Ganesan", "Kumaravelu", "" ], [ "Gauthier", "Louis-Philippe", "" ], [ "Besrour", "Marwan", "" ], [ "Ahnood", "Arman", "" ], [ "Garrett", "David John", "" ], [ "Roy", "Sébastien", "" ], [ "Ibbotson", "Michael", "" ], [ "Prawer", "Steven", "" ], [ "Fontaine", "Réjean", "" ] ]
Clinical trials previously demonstrated the notable capacity to elicit visual percepts in blind patients affected with retinal diseases by electrically stimulating the remaining neurons on the retina. However, these implants restored very limited visual acuity and required transcutaneous cables traversing the eyeball, leading to reduced reliability and complex surgery with high postoperative infection risks. To overcome the limitations imposed by cables, a retinal implant architecture in which near-infrared illumination carries both power and data through the pupil to a digital stimulation controller is presented. A high efficiency multi-junction photovoltaic cell transduces the optical power to a CMOS stimulator capable of delivering flexible interleaved sequential stimulation through a diamond microelectrode array. To demonstrate the capacity to elicit a neural response with this approach while complying with the optical irradiance limit at the pupil, fluorescence imaging with a calcium indicator is used on a degenerate rat retina. The power delivered by the laser at the permissible irradiance of 4 mW/mm2 at 850 nm is shown to be sufficient to both power the stimulator ASIC and elicit a response in retinal ganglion cells (RGCs), with the ability to generate up to 35 000 pulses per second at the average stimulation threshold. This confirms the feasibility of generating a response in RGCs with an infrared-powered digital architecture capable of delivering complex sequential stimulation patterns at high repetition rates, albeit with some limitations.
1902.11236
Melisa B Maidana Capitan
Melisa Maidana Capit\'an, Nuria C\'ampora, Claudio Sebasti\'an, Sigvard Silvia Kochen, In\'es Samengo
Time- and frequency-resolved covariance analysis for detection and characterization of seizures from intracraneal EEG recordings
21 pages, 4 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The amount of power in different frequency bands of the electroencephalogram (EEG) carries information about the behavioral state of a subject. Hence, neurologists treating epileptic patients monitor the temporal evolution of the different bands. We propose a covariance-based method to detect and characterize epileptic seizures operating on the band-filtered EEG signal. The algorithm is unsupervised, and performs a principal component analysis of intra-cranial EEG recordings, detecting transient fluctuations of the power in each frequency band. Its simplicity makes it suitable for online implementation. Good sampling of the non-ictal periods is required, while no demands are imposed on the amount of data during ictal activity. We tested the method with 32 seizures registered in 5 patients. The area below the resulting receiver-operating characteristic curves was 87\% for the detection of seizures and 91\% for the detection of recruited electrodes. To identify the behaviorally relevant correlates of the physiological signal, we identified transient changes in the variance of each band that were correlated with the degree of loss of consciousness, the latter assessed by the so-called Consciousness Seizure Scale, summarizing the performance of the subject in a number of behavioral tests requested during seizures. We concluded that those seizures with maximal impairment of consciousness tended to exhibit an increase of variance approximately 40 seconds after seizure onset, with predominant power in the theta and alpha bands, and reduced delta and beta activity.
[ { "created": "Thu, 28 Feb 2019 17:35:43 GMT", "version": "v1" }, { "created": "Mon, 15 Jun 2020 18:43:45 GMT", "version": "v2" } ]
2020-06-17
[ [ "Capitán", "Melisa Maidana", "" ], [ "Cámpora", "Nuria", "" ], [ "Sebastián", "Claudio", "" ], [ "Kochen", "Sigvard Silvia", "" ], [ "Samengo", "Inés", "" ] ]
The amount of power in different frequency bands of the electroencephalogram (EEG) carries information about the behavioral state of a subject. Hence, neurologists treating epileptic patients monitor the temporal evolution of the different bands. We propose a covariance-based method to detect and characterize epileptic seizures operating on the band-filtered EEG signal. The algorithm is unsupervised, and performs a principal component analysis of intra-cranial EEG recordings, detecting transient fluctuations of the power in each frequency band. Its simplicity makes it suitable for online implementation. Good sampling of the non-ictal periods is required, while no demands are imposed on the amount of data during ictal activity. We tested the method with 32 seizures registered in 5 patients. The area below the resulting receiver-operating characteristic curves was 87\% for the detection of seizures and 91\% for the detection of recruited electrodes. To identify the behaviorally relevant correlates of the physiological signal, we identified transient changes in the variance of each band that were correlated with the degree of loss of consciousness, the latter assessed by the so-called Consciousness Seizure Scale, summarizing the performance of the subject in a number of behavioral tests requested during seizures. We concluded that those seizures with maximal impairment of consciousness tended to exhibit an increase of variance approximately 40 seconds after seizure onset, with predominant power in the theta and alpha bands, and reduced delta and beta activity.
1712.05035
Zvi Rosen
Zvi Rosen, Anand Bhaskar, Sebastien Roch, Yun S. Song
Geometry of the sample frequency spectrum and the perils of demographic inference
21 pages, 5 figures
null
null
null
q-bio.PE math.AG math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The sample frequency spectrum (SFS), which describes the distribution of mutant alleles in a sample of DNA sequences, is a widely used summary statistic in population genetics. The expected SFS has a strong dependence on the historical population demography and this property is exploited by popular statistical methods to infer complex demographic histories from DNA sequence data. Most, if not all, of these inference methods exhibit pathological behavior, however. Specifically, they often display runaway behavior in optimization, where the inferred population sizes and epoch durations can degenerate to 0 or diverge to infinity, and show undesirable sensitivity of the inferred demography to perturbations in the data. The goal of this paper is to provide theoretical insights into why such problems arise. To this end, we characterize the geometry of the expected SFS for piecewise-constant demographic histories and use our results to show that the aforementioned pathological behavior of popular inference methods is intrinsic to the geometry of the expected SFS. We provide explicit descriptions and visualizations for a toy model with sample size 4, and generalize our intuition to arbitrary sample sizes n using tools from convex and algebraic geometry. We also develop a universal characterization result which shows that the expected SFS of a sample of size n under an arbitrary population history can be recapitulated by a piecewise-constant demography with only k(n) epochs, where k(n) is between n/2 and 2n-1. The set of expected SFS for piecewise-constant demographies with fewer than k(n) epochs is open and non-convex, which causes the above phenomena for inference from data.
[ { "created": "Wed, 13 Dec 2017 22:52:21 GMT", "version": "v1" } ]
2017-12-19
[ [ "Rosen", "Zvi", "" ], [ "Bhaskar", "Anand", "" ], [ "Roch", "Sebastien", "" ], [ "Song", "Yun S.", "" ] ]
The sample frequency spectrum (SFS), which describes the distribution of mutant alleles in a sample of DNA sequences, is a widely used summary statistic in population genetics. The expected SFS has a strong dependence on the historical population demography and this property is exploited by popular statistical methods to infer complex demographic histories from DNA sequence data. Most, if not all, of these inference methods exhibit pathological behavior, however. Specifically, they often display runaway behavior in optimization, where the inferred population sizes and epoch durations can degenerate to 0 or diverge to infinity, and show undesirable sensitivity of the inferred demography to perturbations in the data. The goal of this paper is to provide theoretical insights into why such problems arise. To this end, we characterize the geometry of the expected SFS for piecewise-constant demographic histories and use our results to show that the aforementioned pathological behavior of popular inference methods is intrinsic to the geometry of the expected SFS. We provide explicit descriptions and visualizations for a toy model with sample size 4, and generalize our intuition to arbitrary sample sizes n using tools from convex and algebraic geometry. We also develop a universal characterization result which shows that the expected SFS of a sample of size n under an arbitrary population history can be recapitulated by a piecewise-constant demography with only k(n) epochs, where k(n) is between n/2 and 2n-1. The set of expected SFS for piecewise-constant demographies with fewer than k(n) epochs is open and non-convex, which causes the above phenomena for inference from data.
0911.0814
Wojciech Waga
Wojciech Waga, Marta Zawierta, Stanislaw Cebrat
Modelling the Evolution of Spatially Distributed Populations in the Uniformly Changing Environment - Sympatric Speciation
15 pages, 7 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have simulated the evolution of age-structured populations whose individuals, represented by their diploid genomes, were distributed on a square lattice. The environmental conditions on the whole territory changed simultaneously in the same way by switching on or off some requirements. Mutations accumulated in the genes dispensable during a given period of time were neutral, but they could cause a genetic death of individuals if the environment required their functions again. Populations survived due to retaining some surplus of genetic information in the individual genomes. The changes of the environment caused the fluctuations of the population size. Since the simulations were performed with individuals spatially distributed on the lattice and the maximal distance between mating partners was set as a parameter of the model, the inbreeding coefficient in populations changed unevenly, following the fluctuation of population size and enhancing the speciation phenomena.
[ { "created": "Wed, 4 Nov 2009 13:13:21 GMT", "version": "v1" } ]
2009-11-05
[ [ "Waga", "Wojciech", "" ], [ "Zawierta", "Marta", "" ], [ "Cebrat", "Stanislaw", "" ] ]
We have simulated the evolution of age-structured populations whose individuals, represented by their diploid genomes, were distributed on a square lattice. The environmental conditions on the whole territory changed simultaneously in the same way by switching on or off some requirements. Mutations accumulated in the genes dispensable during a given period of time were neutral, but they could cause a genetic death of individuals if the environment required their functions again. Populations survived due to retaining some surplus of genetic information in the individual genomes. The changes of the environment caused the fluctuations of the population size. Since the simulations were performed with individuals spatially distributed on the lattice and the maximal distance between mating partners was set as a parameter of the model, the inbreeding coefficient in populations changed unevenly, following the fluctuation of population size and enhancing the speciation phenomena.
1404.7529
B. Roy Frieden
B.R. Frieden and R.A. Gatenby
Cell development obeys maximum Fisher information
24 pages, 2 figures
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Eukaryotic cell development has been optimized by natural selection to obey maximal intracellular flux of messenger proteins. This, in turn, implies maximum Fisher information on angular position about a target nuclear pore complex (NPC). The cell is simply modeled as spherical, with cell membrane (CM) diameter 10 micron and concentric nuclear membrane (NM) diameter 6 micron. The NM contains about 3000 nuclear pore complexes (NPCs). Development requires messenger ligands to travel from the CM, through an NPC, to DNA target binding sites. Ligands acquire negative charge by phosphorylation, passing through the cytoplasm over Newtonian trajectories toward positively charged NPCs (utilizing positive nuclear localization sequences). The CM-NPC channel obeys maximized mean protein flux F and Fisher information I at the NPC, with first-order delta I = 0 and approximate 2nd-order delta I = 0 stability to environmental perturbations. Many of its predictions are confirmed, including the dominance of protein pathways of 1-4 proteins, a 4nm size for the EGFR protein and the approximate flux value F = 10^16 proteins/m2-s. After entering the nucleus, each protein ultimately delivers its ligand information to a DNA target site with maximum probability, i.e. maximum Kullback-Leibler entropy HKL. In a smoothness limit HKL approaches IDNA/2, so that the total CM-NPC-DNA channel obeys maximum Fisher I. Thus maximum information approaches non-equilibrium, one condition for life.
[ { "created": "Tue, 29 Apr 2014 20:55:56 GMT", "version": "v1" } ]
2014-05-01
[ [ "Frieden", "B. R.", "" ], [ "Gatenby", "R. A.", "" ] ]
Eukaryotic cell development has been optimized by natural selection to obey maximal intracellular flux of messenger proteins. This, in turn, implies maximum Fisher information on angular position about a target nuclear pore complex (NPC). The cell is simply modeled as spherical, with cell membrane (CM) diameter 10 micron and concentric nuclear membrane (NM) diameter 6 micron. The NM contains about 3000 nuclear pore complexes (NPCs). Development requires messenger ligands to travel from the CM through the NPC to DNA target binding sites. Ligands acquire negative charge by phosphorylation, passing through the cytoplasm over Newtonian trajectories toward positively charged NPCs (utilizing positive nuclear localization sequences). The CM-NPC channel obeys maximized mean protein flux F and Fisher information I at the NPC, with first-order delta I = 0 and approximate 2nd-order delta I = 0 stability to environmental perturbations. Many of its predictions are confirmed, including the dominance of protein pathways of 1-4 proteins, a 4nm size for the EGFR protein and the approximate flux value F = 10^16 proteins/m2-s. After entering the nucleus, each protein ultimately delivers its ligand information to a DNA target site with maximum probability, i.e. maximum Kullback-Leibler entropy HKL. In a smoothness limit HKL approaches IDNA/2, so that the total CM-NPC-DNA channel obeys maximum Fisher I. Thus maximum information approaches non-equilibrium, one condition for life.
2301.02659
Pouya Baniasadi
Pouya Baniasadi
Bayesian modelling of visual discrimination learning in mice
Unpublished Masters Project Report for research conducted at the University of Cambridge (2020)
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The brain constantly turns large flows of sensory information into selective representations of the environment. It, therefore, needs to learn to process those sensory inputs that are most relevant for behaviour. It is not well understood how learning changes neural circuits in visual and decision-making brain areas to adjust and improve its visually guided decision-making. To address this question, head-fixed mice were trained to move through virtual reality environments and learn visual discrimination while neural activity was recorded with two-photon calcium imaging. Previously, descriptive models of neuronal activity were fitted to the data, which was used to compare the activity of excitatory and different inhibitory cell types. However, the previous models did not take the internal representations and learning dynamics into account. Here, I present a framework to infer a model of internal representations that are used to generate the behaviour during the task. We model the learning process from untrained mice to trained mice within the normative framework of the ideal Bayesian observer and provide a Markov model for generating the movement and licking. The framework provides a space of models where a range of hypotheses about the internal representations could be compared for a given data set.
[ { "created": "Tue, 15 Nov 2022 16:59:20 GMT", "version": "v1" } ]
2023-01-10
[ [ "Baniasadi", "Pouya", "" ] ]
The brain constantly turns large flows of sensory information into selective representations of the environment. It, therefore, needs to learn to process those sensory inputs that are most relevant for behaviour. It is not well understood how learning changes neural circuits in visual and decision-making brain areas to adjust and improve its visually guided decision-making. To address this question, head-fixed mice were trained to move through virtual reality environments and learn visual discrimination while neural activity was recorded with two-photon calcium imaging. Previously, descriptive models of neuronal activity were fitted to the data, which was used to compare the activity of excitatory and different inhibitory cell types. However, the previous models did not take the internal representations and learning dynamics into account. Here, I present a framework to infer a model of internal representations that are used to generate the behaviour during the task. We model the learning process from untrained mice to trained mice within the normative framework of the ideal Bayesian observer and provide a Markov model for generating the movement and licking. The framework provides a space of models where a range of hypotheses about the internal representations could be compared for a given data set.
1812.04435
Rosalind J Allen
Rosalind J Allen and Bartlomiej Waclaw
Bacterial growth: a statistical physicist's guide
null
null
10.1088/1361-6633/aae546
null
q-bio.CB q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bacterial growth presents many beautiful phenomena that pose new theoretical challenges to statistical physicists, and are also amenable to laboratory experimentation. This review provides some of the essential biological background, discusses recent applications of statistical physics in this field, and highlights the potential for future research.
[ { "created": "Tue, 11 Dec 2018 14:38:51 GMT", "version": "v1" } ]
2018-12-19
[ [ "Allen", "Rosalind J", "" ], [ "Waclaw", "Bartlomiej", "" ] ]
Bacterial growth presents many beautiful phenomena that pose new theoretical challenges to statistical physicists, and are also amenable to laboratory experimentation. This review provides some of the essential biological background, discusses recent applications of statistical physics in this field, and highlights the potential for future research.
1003.3391
Adilson Enio Motter
Adilson E. Motter
Improved Network Performance via Antagonism: From Synthetic Rescues to Multi-drug Combinations
Online Open "Problems and Paradigms" article
A.E. Motter, BioEssays 32, 236 (2010)
10.1002/bies.200900128
null
q-bio.MN nlin.AO physics.soc-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent research shows that a faulty or sub-optimally operating metabolic network can often be rescued by the targeted removal of enzyme-coding genes--the exact opposite of what traditional gene therapy would suggest. Predictions go as far as to assert that certain gene knockouts can restore the growth of otherwise nonviable gene-deficient cells. Many questions follow from this discovery: What are the underlying mechanisms? How generalizable is this effect? What are the potential applications? Here, I will approach these questions from the perspective of compensatory perturbations on networks. Relations will be drawn between such synthetic rescues and naturally occurring cascades of reaction inactivation, as well as their analogues in physical and other biological networks. I will specially discuss how rescue interactions can lead to the rational design of antagonistic drug combinations that select against resistance and how they can illuminate medical research on cancer, antibiotics, and metabolic diseases.
[ { "created": "Wed, 17 Mar 2010 15:13:45 GMT", "version": "v1" } ]
2010-03-18
[ [ "Motter", "Adilson E.", "" ] ]
Recent research shows that a faulty or sub-optimally operating metabolic network can often be rescued by the targeted removal of enzyme-coding genes--the exact opposite of what traditional gene therapy would suggest. Predictions go as far as to assert that certain gene knockouts can restore the growth of otherwise nonviable gene-deficient cells. Many questions follow from this discovery: What are the underlying mechanisms? How generalizable is this effect? What are the potential applications? Here, I will approach these questions from the perspective of compensatory perturbations on networks. Relations will be drawn between such synthetic rescues and naturally occurring cascades of reaction inactivation, as well as their analogues in physical and other biological networks. I will specially discuss how rescue interactions can lead to the rational design of antagonistic drug combinations that select against resistance and how they can illuminate medical research on cancer, antibiotics, and metabolic diseases.
1208.4660
Yohei Kondo
Yohei Kondo, Kunihiko Kaneko, and Shuji Ishihara
Identifying dynamical systems with bifurcations from noisy partial observation
16 pages, 6 figures
null
10.1103/PhysRevE.87.042716
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dynamical systems are used to model a variety of phenomena in which the bifurcation structure is a fundamental characteristic. Here we propose a statistical machine-learning approach to derive low-dimensional models that automatically integrate information in noisy time-series data from partial observations. The method is tested using artificial data generated from two cell-cycle control system models that exhibit different bifurcations, and the learned systems are shown to robustly inherit the bifurcation structure.
[ { "created": "Thu, 23 Aug 2012 02:41:36 GMT", "version": "v1" } ]
2015-06-11
[ [ "Kondo", "Yohei", "" ], [ "Kaneko", "Kunihiko", "" ], [ "Ishihara", "Shuji", "" ] ]
Dynamical systems are used to model a variety of phenomena in which the bifurcation structure is a fundamental characteristic. Here we propose a statistical machine-learning approach to derive low-dimensional models that automatically integrate information in noisy time-series data from partial observations. The method is tested using artificial data generated from two cell-cycle control system models that exhibit different bifurcations, and the learned systems are shown to robustly inherit the bifurcation structure.
2006.10651
Chris Antonopoulos Dr
Ian Cooper, Argha Mondal, Chris G. Antonopoulos
A SIR model assumption for the spread of COVID-19 in different communities
18 pages, 17 figures
null
10.1016/j.chaos.2020.110057
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the effectiveness of the modelling approach on the pandemic due to the spreading of the novel COVID-19 disease and develop a susceptible-infected-removed (SIR) model that provides a theoretical framework to investigate its spread within a community. Here, the model is based upon the well-known susceptible-infected-removed (SIR) model with the difference that a total population is not defined or kept constant per se and the number of susceptible individuals does not decline monotonically. To the contrary, as we show herein, it can be increased in surge periods! In particular, we investigate the time evolution of different populations and monitor diverse significant parameters for the spread of the disease in various communities, represented by countries and the state of Texas in the USA. The SIR model can provide us with insights and predictions of the spread of the virus in communities that the recorded data alone cannot. Our work shows the importance of modelling the spread of COVID-19 by the SIR model that we propose here, as it can help to assess the impact of the disease by offering valuable predictions. Our analysis takes into account data from January to June, 2020, the period that contains the data before and during the implementation of strict control measures. We propose predictions on various parameters related to the spread of COVID-19 and on the number of susceptible, infected and removed populations until September 2020. By comparing the recorded data with the data from our modelling approaches, we deduce that the spread of COVID-19 can be under control in all communities considered, if proper restrictions and strong policies are implemented to control the infection rates early from the spread of the disease.
[ { "created": "Thu, 18 Jun 2020 16:22:23 GMT", "version": "v1" } ]
2020-08-26
[ [ "Cooper", "Ian", "" ], [ "Mondal", "Argha", "" ], [ "Antonopoulos", "Chris G.", "" ] ]
In this paper, we study the effectiveness of the modelling approach on the pandemic due to the spreading of the novel COVID-19 disease and develop a susceptible-infected-removed (SIR) model that provides a theoretical framework to investigate its spread within a community. Here, the model is based upon the well-known susceptible-infected-removed (SIR) model with the difference that a total population is not defined or kept constant per se and the number of susceptible individuals does not decline monotonically. To the contrary, as we show herein, it can be increased in surge periods! In particular, we investigate the time evolution of different populations and monitor diverse significant parameters for the spread of the disease in various communities, represented by countries and the state of Texas in the USA. The SIR model can provide us with insights and predictions of the spread of the virus in communities that the recorded data alone cannot. Our work shows the importance of modelling the spread of COVID-19 by the SIR model that we propose here, as it can help to assess the impact of the disease by offering valuable predictions. Our analysis takes into account data from January to June, 2020, the period that contains the data before and during the implementation of strict control measures. We propose predictions on various parameters related to the spread of COVID-19 and on the number of susceptible, infected and removed populations until September 2020. By comparing the recorded data with the data from our modelling approaches, we deduce that the spread of COVID-19 can be under control in all communities considered, if proper restrictions and strong policies are implemented to control the infection rates early from the spread of the disease.
2209.02063
Daniel Cooney
Daniel B. Cooney, Simon A. Levin, Yoichiro Mori, Joshua B. Plotkin
Evolutionary Dynamics Within and Among Competing Groups
48 pages, 8 figures
null
10.1073/pnas.2216186120
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological and social systems are structured at multiple scales, and the incentives of individuals who interact in a group may diverge from the collective incentive of the group as a whole. Mechanisms to resolve this tension are responsible for profound transitions in evolutionary history, including the origin of cellular life, multi-cellular life, and even societies. Here we synthesize a growing literature that extends evolutionary game theory to describe multilevel evolutionary dynamics, using nested birth-death processes and partial differential equations to model natural selection acting on competition within and among groups of individuals. We apply this theory to analyze how mechanisms known to promote cooperation within a single group -- including assortment, reciprocity, and population structure -- alter evolutionary outcomes in the presence of competition among groups. We find that population structures most conducive to cooperation in multi-scale systems may differ from those most conducive within a single group. Likewise, for competitive interactions with a continuous range of strategies we find that among-group selection may fail to produce socially optimal outcomes, but it can nonetheless produce second-best solutions that balance individual incentives to defect with the collective incentives for cooperation. We conclude by describing the broad applicability of multi-scale evolutionary models to problems ranging from the production of diffusible metabolites in microbes to the management of common-pool resources in human societies.
[ { "created": "Mon, 5 Sep 2022 17:16:55 GMT", "version": "v1" } ]
2023-05-31
[ [ "Cooney", "Daniel B.", "" ], [ "Levin", "Simon A.", "" ], [ "Mori", "Yoichiro", "" ], [ "Plotkin", "Joshua B.", "" ] ]
Biological and social systems are structured at multiple scales, and the incentives of individuals who interact in a group may diverge from the collective incentive of the group as a whole. Mechanisms to resolve this tension are responsible for profound transitions in evolutionary history, including the origin of cellular life, multi-cellular life, and even societies. Here we synthesize a growing literature that extends evolutionary game theory to describe multilevel evolutionary dynamics, using nested birth-death processes and partial differential equations to model natural selection acting on competition within and among groups of individuals. We apply this theory to analyze how mechanisms known to promote cooperation within a single group -- including assortment, reciprocity, and population structure -- alter evolutionary outcomes in the presence of competition among groups. We find that population structures most conducive to cooperation in multi-scale systems may differ from those most conducive within a single group. Likewise, for competitive interactions with a continuous range of strategies we find that among-group selection may fail to produce socially optimal outcomes, but it can nonetheless produce second-best solutions that balance individual incentives to defect with the collective incentives for cooperation. We conclude by describing the broad applicability of multi-scale evolutionary models to problems ranging from the production of diffusible metabolites in microbes to the management of common-pool resources in human societies.
2402.16409
Francesco Sannino
Stefan Hohenegger and Francesco Sannino
Renormalisation Group Methods for Effective Epidemiological Models
36 pages, 19 figures
null
null
null
q-bio.PE hep-th stat.AP
http://creativecommons.org/licenses/by/4.0/
Epidemiological models describe the spread of an infectious disease within a population. They capture microscopic details on how the disease is passed on among individuals in various different ways, while making predictions about the state of the entirety of the population. However, the type and structure of the specific model considered typically depend on the size of the population under consideration. To analyse this effect, we study a family of effective epidemiological models in space and time that are related to each other through scaling transformations. Inspired by a similar treatment of diffusion processes, we interpret the latter as renormalisation group transformations, both at the level of the underlying differential equations and their solutions. We show that in the large scale limit, the microscopic details of the infection process become irrelevant, save for a simple real number, which plays the role of the infection rate in a basic compartmental model.
[ { "created": "Mon, 26 Feb 2024 09:05:13 GMT", "version": "v1" } ]
2024-02-27
[ [ "Hohenegger", "Stefan", "" ], [ "Sannino", "Francesco", "" ] ]
Epidemiological models describe the spread of an infectious disease within a population. They capture microscopic details on how the disease is passed on among individuals in various different ways, while making predictions about the state of the entirety of the population. However, the type and structure of the specific model considered typically depend on the size of the population under consideration. To analyse this effect, we study a family of effective epidemiological models in space and time that are related to each other through scaling transformations. Inspired by a similar treatment of diffusion processes, we interpret the latter as renormalisation group transformations, both at the level of the underlying differential equations and their solutions. We show that in the large scale limit, the microscopic details of the infection process become irrelevant, save for a simple real number, which plays the role of the infection rate in a basic compartmental model.
1805.07061
Yu Terada
Yu Terada, Tomoyuki Obuchi, Takuya Isomura, Yoshiyuki Kabashima
Objective and efficient inference for couplings in neuronal networks
null
null
10.1088/1742-5468/ab3219
null
q-bio.NC cond-mat.dis-nn cond-mat.stat-mech nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inferring directional couplings from the spike data of networks is desired in various scientific fields such as neuroscience. Here, we apply a recently proposed objective procedure to the spike data obtained from the Hodgkin--Huxley type models and in vitro neuronal networks cultured in a circular structure. As a result, we succeed in reconstructing synaptic connections accurately from the evoked activity as well as the spontaneous one. To obtain the results, we invent an analytic formula approximately implementing a method of screening relevant couplings. This significantly reduces the computational cost of the screening method employed in the proposed objective procedure, making it possible to treat large-size systems as in this study.
[ { "created": "Fri, 18 May 2018 06:19:37 GMT", "version": "v1" } ]
2020-01-29
[ [ "Terada", "Yu", "" ], [ "Obuchi", "Tomoyuki", "" ], [ "Isomura", "Takuya", "" ], [ "Kabashima", "Yoshiyuki", "" ] ]
Inferring directional couplings from the spike data of networks is desired in various scientific fields such as neuroscience. Here, we apply a recently proposed objective procedure to the spike data obtained from the Hodgkin--Huxley type models and in vitro neuronal networks cultured in a circular structure. As a result, we succeed in reconstructing synaptic connections accurately from the evoked activity as well as the spontaneous one. To obtain the results, we invent an analytic formula approximately implementing a method of screening relevant couplings. This significantly reduces the computational cost of the screening method employed in the proposed objective procedure, making it possible to treat large-size systems as in this study.
2306.14329
Somya Mehra
Somya Mehra, James M. McCaw, Peter G. Taylor
Superinfection and the hypnozoite reservoir for Plasmodium vivax: a general framework
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Malaria is a parasitic disease, transmitted by mosquito vectors. Plasmodium vivax presents particular challenges for disease control, in light of an undetectable reservoir of latent parasites (hypnozoites) within the host liver. Superinfection, which is driven by temporally proximate mosquito inoculation and/or hypnozoite activation events, is an important feature of P. vivax. Here, we present a model of hypnozoite accrual and superinfection for P. vivax. To couple host and vector dynamics, we construct a density-dependent Markov population process with countably many types, for which disease extinction is shown to occur almost surely. We also establish a functional law of large numbers, taking the form of an infinite-dimensional system of ordinary differential equations that can also be recovered under the hybrid approximation or a standard compartment modelling approach. Recognising that the subset of these equations that models the infection status of human hosts has precisely the same form as the Kolmogorov forward equations for a Markovian network of infinite server queues with an inhomogeneous batch arrival process, we use physical insight into the evolution of the latter to write down a time-dependent multivariate generating function for the solution. We use this characterisation to collapse the infinite-compartment model into a single integrodifferential equation (IDE) governing the intensity of mosquito-to-human transmission. Through a steady state analysis, we recover a threshold phenomenon for this IDE in terms of a bifurcation parameter $R_0$, with the disease-free equilibrium shown to be uniformly asymptotically stable if $R_0<1$ and an endemic equilibrium solution emerging if $R_0>1$. Our work provides a theoretical basis to explore the epidemiology of P. vivax, and introduces a general strategy for constructing tractable population-level models of malarial superinfection.
[ { "created": "Sun, 25 Jun 2023 19:59:09 GMT", "version": "v1" } ]
2023-06-27
[ [ "Mehra", "Somya", "" ], [ "McCaw", "James M.", "" ], [ "Taylor", "Peter G.", "" ] ]
Malaria is a parasitic disease, transmitted by mosquito vectors. Plasmodium vivax presents particular challenges for disease control, in light of an undetectable reservoir of latent parasites (hypnozoites) within the host liver. Superinfection, which is driven by temporally proximate mosquito inoculation and/or hypnozoite activation events, is an important feature of P. vivax. Here, we present a model of hypnozoite accrual and superinfection for P. vivax. To couple host and vector dynamics, we construct a density-dependent Markov population process with countably many types, for which disease extinction is shown to occur almost surely. We also establish a functional law of large numbers, taking the form of an infinite-dimensional system of ordinary differential equations that can also be recovered under the hybrid approximation or a standard compartment modelling approach. Recognising that the subset of these equations that models the infection status of human hosts has precisely the same form as the Kolmogorov forward equations for a Markovian network of infinite server queues with an inhomogeneous batch arrival process, we use physical insight into the evolution of the latter to write down a time-dependent multivariate generating function for the solution. We use this characterisation to collapse the infinite-compartment model into a single integrodifferential equation (IDE) governing the intensity of mosquito-to-human transmission. Through a steady state analysis, we recover a threshold phenomenon for this IDE in terms of a bifurcation parameter $R_0$, with the disease-free equilibrium shown to be uniformly asymptotically stable if $R_0<1$ and an endemic equilibrium solution emerging if $R_0>1$. Our work provides a theoretical basis to explore the epidemiology of P. vivax, and introduces a general strategy for constructing tractable population-level models of malarial superinfection.
2011.14241
David Winkler
Sakshi Piplani, Puneet Singh, David A. Winkler, Nikolai Petrovsky
Computationally repurposed drugs and natural products against RNA dependent RNA polymerase as potential COVID-19 therapies
38 pages plus supplementary, 11 figures
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
For fast development of COVID-19 therapies, it is only feasible to use drugs (off-label use) or approved natural products that are already registered or have been assessed for safety in previous human trials. These agents can be quickly assessed in COVID-19 patients, as their safety and pharmacokinetics should already be well understood. Computational methods offer promise for rapidly screening such products for potential SARS-CoV-2 activity by predicting and ranking the affinities of these compounds for specific virus protein targets. The RNA-dependent RNA polymerase (RdRP) is a promising target for SARS-CoV-2 drug development given it has no human homologs, making RdRP inhibitors potentially safer, with fewer off-target effects than drugs targeting other viral proteins. We combined robust Vina docking on RdRP with molecular dynamics (MD) simulation of the top 80 identified drug candidates to yield a list of the most promising RdRP inhibitors. Literature reviews revealed that many of the predicted inhibitors had been shown to have activity in in vitro assays or had been predicted by other groups to have activity. The novel hits revealed by our screen can now be conveniently tested for activity in RdRP inhibition assays and, if confirmed, tested for antiviral activity in vitro before being tested in human trials.
[ { "created": "Sun, 29 Nov 2020 00:17:26 GMT", "version": "v1" } ]
2020-12-01
[ [ "Piplani", "Sakshi", "" ], [ "Singh", "Puneet", "" ], [ "Winkler", "David A.", "" ], [ "Petrovsky", "Nikolai", "" ] ]
For fast development of COVID-19 therapies, it is only feasible to use drugs (off-label use) or approved natural products that are already registered or have been assessed for safety in previous human trials. These agents can be quickly assessed in COVID-19 patients, as their safety and pharmacokinetics should already be well understood. Computational methods offer promise for rapidly screening such products for potential SARS-CoV-2 activity by predicting and ranking the affinities of these compounds for specific virus protein targets. The RNA-dependent RNA polymerase (RdRP) is a promising target for SARS-CoV-2 drug development given it has no human homologs, making RdRP inhibitors potentially safer, with fewer off-target effects than drugs targeting other viral proteins. We combined robust Vina docking on RdRP with molecular dynamics (MD) simulation of the top 80 identified drug candidates to yield a list of the most promising RdRP inhibitors. Literature reviews revealed that many of the predicted inhibitors had been shown to have activity in in vitro assays or had been predicted by other groups to have activity. The novel hits revealed by our screen can now be conveniently tested for activity in RdRP inhibition assays and, if confirmed, tested for antiviral activity in vitro before being tested in human trials.
1711.04828
Gregory Way
Gregory P. Way and Casey S. Greene
Evaluating deep variational autoencoders trained on pan-cancer gene expression
4 pages, 3 figures, 2 tables, NIPS 2017
null
null
null
q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Cancer is a heterogeneous disease with diverse molecular etiologies and outcomes. The Cancer Genome Atlas (TCGA) has released a large compendium of over 10,000 tumors with RNA-seq gene expression measurements. Gene expression captures the diverse molecular profiles of tumors and can be interrogated to reveal differential pathway activations. Deep unsupervised models, including Variational Autoencoders (VAE) can be used to reveal these underlying patterns. We compare a one-hidden layer VAE to two alternative VAE architectures with increased depth. We determine the additional capacity marginally improves performance. We train and compare the three VAE architectures to other dimensionality reduction techniques including principal components analysis (PCA), independent components analysis (ICA), non-negative matrix factorization (NMF), and analysis of gene expression by denoising autoencoders (ADAGE). We compare performance in a supervised learning task predicting gene inactivation pan-cancer and in a latent space analysis of high grade serous ovarian cancer (HGSC) subtypes. We do not observe substantial differences across algorithms in the classification task. VAE latent spaces offer biological insights into HGSC subtype biology.
[ { "created": "Mon, 13 Nov 2017 20:11:26 GMT", "version": "v1" } ]
2017-11-15
[ [ "Way", "Gregory P.", "" ], [ "Greene", "Casey S.", "" ] ]
Cancer is a heterogeneous disease with diverse molecular etiologies and outcomes. The Cancer Genome Atlas (TCGA) has released a large compendium of over 10,000 tumors with RNA-seq gene expression measurements. Gene expression captures the diverse molecular profiles of tumors and can be interrogated to reveal differential pathway activations. Deep unsupervised models, including Variational Autoencoders (VAE) can be used to reveal these underlying patterns. We compare a one-hidden layer VAE to two alternative VAE architectures with increased depth. We determine the additional capacity marginally improves performance. We train and compare the three VAE architectures to other dimensionality reduction techniques including principal components analysis (PCA), independent components analysis (ICA), non-negative matrix factorization (NMF), and analysis of gene expression by denoising autoencoders (ADAGE). We compare performance in a supervised learning task predicting gene inactivation pan-cancer and in a latent space analysis of high grade serous ovarian cancer (HGSC) subtypes. We do not observe substantial differences across algorithms in the classification task. VAE latent spaces offer biological insights into HGSC subtype biology.
2208.01569
Lyle Poley
Lyle Poley, Joseph W. Baron, Tobias Galla
Generalised Lotka-Volterra model with hierarchical interactions
10 pages, 6 Figures
null
10.1103/PhysRevE.107.024313
null
q-bio.PE cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the analysis of complex ecosystems it is common to use random interaction coefficients, often assumed to be such that all species are statistically equivalent. In this work we relax this assumption by choosing interactions according to the cascade model, which we incorporate into a generalised Lotka-Volterra dynamical system. These interactions impose a hierarchy in the community. Species benefit more, on average, from interactions with species further below them in the hierarchy than from interactions with those above. Using dynamic mean-field theory, we demonstrate that a strong hierarchical structure is stabilising, but that it reduces the number of species in the surviving community, as well as their abundances. Additionally, we show that increased heterogeneity in the variances of the interaction coefficients across positions in the hierarchy is destabilising. We also comment on the structure of the surviving community and demonstrate that the abundance and probability of survival of a species is dependent on its position in the hierarchy.
[ { "created": "Tue, 2 Aug 2022 16:16:50 GMT", "version": "v1" } ]
2023-03-08
[ [ "Poley", "Lyle", "" ], [ "Baron", "Joseph W.", "" ], [ "Galla", "Tobias", "" ] ]
In the analysis of complex ecosystems it is common to use random interaction coefficients, often assumed to be such that all species are statistically equivalent. In this work we relax this assumption by choosing interactions according to the cascade model, which we incorporate into a generalised Lotka-Volterra dynamical system. These interactions impose a hierarchy in the community. Species benefit more, on average, from interactions with species further below them in the hierarchy than from interactions with those above. Using dynamic mean-field theory, we demonstrate that a strong hierarchical structure is stabilising, but that it reduces the number of species in the surviving community, as well as their abundances. Additionally, we show that increased heterogeneity in the variances of the interaction coefficients across positions in the hierarchy is destabilising. We also comment on the structure of the surviving community and demonstrate that the abundance and probability of survival of a species is dependent on its position in the hierarchy.
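The hierarchical (cascade-model) generalised Lotka-Volterra setup described above can be illustrated with a minimal simulation. The Euler integration, the 1/n scaling, and all parameter values below are illustrative assumptions, not the paper's calibration or its dynamic mean-field analysis.

```python
import random

def cascade_interactions(n, mu=0.5, sigma=0.1, seed=0):
    """Cascade-model couplings: for each pair i < j, species i (higher in
    the hierarchy) gains from j on average, while j loses to i.
    mu, sigma and the 1/n scaling are illustrative choices."""
    rng = random.Random(seed)
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            a[i][j] = (mu + sigma * rng.gauss(0.0, 1.0)) / n   # i gains from j
            a[j][i] = (-mu + sigma * rng.gauss(0.0, 1.0)) / n  # j loses to i
    return a

def glv_step(x, a, dt=0.01):
    """One Euler step of dx_i/dt = x_i * (1 - x_i + sum_j a[i][j] * x[j])."""
    n = len(x)
    new = []
    for i in range(n):
        growth = 1.0 - x[i] + sum(a[i][j] * x[j] for j in range(n))
        new.append(max(x[i] + dt * x[i] * growth, 0.0))  # clip at extinction
    return new

n = 20
a = cascade_interactions(n)
x = [0.5] * n
for _ in range(5000):
    x = glv_step(x, a)
```

At these (illustrative) parameter values the community settles to a steady state in which species at the top of the hierarchy equilibrate at higher abundances than those at the bottom, the qualitative dependence on hierarchy position that the paper quantifies analytically.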
1612.04532
Yong Chen
Yong Chen
Influence of cell-cell interactions on the population growth rate in a tumor
5 pages, 2 figures
Commun. Theor. Phys. 68, 798 (2017)
10.1088/0253-6102/68/6/798
null
q-bio.PE physics.bio-ph q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding macroscopic phenomenological models of population growth at a microscopic level is important for predicting the population behaviors that emerge from interactions between individuals. In this work we consider the influence of cell-cell interactions on the population growth rate $R$ in a tumor system, and show that, in most cases, especially for small proliferative probabilities, the regulative role of the interaction is strengthened as the intrinsic proliferative probabilities decline. For high individual replication rates and cooperative interactions, the proliferative probability has almost no effect. We compute the dependence of $R$ on the interactions between cells under the nearest-neighbor approximation in the rim of an avascular tumor. Our results help to qualitatively understand the influence of interactions between individuals on the growth rate in population systems.
[ { "created": "Wed, 14 Dec 2016 08:38:49 GMT", "version": "v1" }, { "created": "Wed, 22 Nov 2017 02:43:42 GMT", "version": "v2" } ]
2017-11-23
[ [ "Chen", "Yong", "" ] ]
Understanding macroscopic phenomenological models of population growth at a microscopic level is important for predicting the population behaviors that emerge from interactions between individuals. In this work we consider the influence of cell-cell interactions on the population growth rate $R$ in a tumor system, and show that, in most cases, especially for small proliferative probabilities, the regulative role of the interaction is strengthened as the intrinsic proliferative probabilities decline. For high individual replication rates and cooperative interactions, the proliferative probability has almost no effect. We compute the dependence of $R$ on the interactions between cells under the nearest-neighbor approximation in the rim of an avascular tumor. Our results help to qualitatively understand the influence of interactions between individuals on the growth rate in population systems.
2402.10990
Francesco Galimberti
Francesco Galimberti (1), Stephanie Bopp (1), Alessandro Carletti (1), Rui Catarino (1), Martin Claverie (1), Pietro Florio (1), Alessio Ippolito (2), Arwyn Jones (1), Flavio Marchetto (3), Michael Olvedy (1), Alberto Pistocchi (1), Astrid Verhegghen (1), Marijn Van Der Velde (1), Diana Vieira (1), Raphael d'Andrimont (4) ((1) European Commission, Joint Research Centre (JRC), Ispra, Italy (2) European Food Safety Authority (EFSA), Parma, Italy (3) European Chemicals Agency (ECHA), Helsinki, Finland (4) European Commission, Joint Research Centre (JRC), Brussels, Belgium)
From parcels to people: development of a spatially explicit risk indicator to monitor residential pesticide exposure in agricultural areas
40 pages, 4 tables, 22 figures
null
null
null
q-bio.QM q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The increase in global pesticide use has mirrored the rising demand for food over the last decades, resulting in a boost in crop yields. However, concerns about the impact of pesticides on biodiversity, ecosystems, and human health, especially for populations residing close to cultivated areas, are growing. This study investigates how exposure and possible risks to residents can be estimated at high spatial granularity based on plant protection product data. The complexities of such analysis were explored in France, where relevant data with good granularity are publicly available. Integrating spatial datasets and exposure assessment methodologies, we have developed an indicator to monitor the levels of pesticide risk faced by residents. By spatialising pesticide sales data according to their authorization on specific crops, we developed a detailed map depicting potential pesticide loads at parcel level across France. This spatial distribution served as the basis for an exposure and risk assessment, modelled following the European Food Safety Authority's guidelines. Combining the risk map with population distribution data, we have developed an indicator that allows monitoring of patterns in non-dietary exposure to pesticides. Our results show that in France, on average, 13% of people might be exposed to pesticides due to living in proximity to treated crops. This exposure is in the lower range for 34%, the moderate range for 40% and the higher range for 25% of the exposed population. The risk evaluation is based on worst-case assumptions and values should not be taken as a regulatory risk assessment but as an indicator to use, for example, for monitoring time trends. The purpose of this indicator is to demonstrate that more granular pesticide data can improve risk reduction strategies. Harmonized and high-resolution data can help in identifying regions on which to focus sustainable farming efforts.
[ { "created": "Fri, 16 Feb 2024 12:05:21 GMT", "version": "v1" } ]
2024-02-20
[ [ "Galimberti", "Francesco", "" ], [ "Bopp", "Stephanie", "" ], [ "Carletti", "Alessandro", "" ], [ "Catarino", "Rui", "" ], [ "Claverie", "Martin", "" ], [ "Florio", "Pietro", "" ], [ "Ippolito", "Alessio", "" ], [ "Jones", "Arwyn", "" ], [ "Marchetto", "Flavio", "" ], [ "Olvedy", "Michael", "" ], [ "Pistocchi", "Alberto", "" ], [ "Verhegghen", "Astrid", "" ], [ "Van Der Velde", "Marijn", "" ], [ "Vieira", "Diana", "" ], [ "d'Andrimont", "Raphael", "" ] ]
The increase in global pesticide use has mirrored the rising demand for food over the last decades, resulting in a boost in crop yields. However, concerns about the impact of pesticides on biodiversity, ecosystems, and human health, especially for populations residing close to cultivated areas, are growing. This study investigates how exposure and possible risks to residents can be estimated at high spatial granularity based on plant protection product data. The complexities of such analysis were explored in France, where relevant data with good granularity are publicly available. Integrating spatial datasets and exposure assessment methodologies, we have developed an indicator to monitor the levels of pesticide risk faced by residents. By spatialising pesticide sales data according to their authorization on specific crops, we developed a detailed map depicting potential pesticide loads at parcel level across France. This spatial distribution served as the basis for an exposure and risk assessment, modelled following the European Food Safety Authority's guidelines. Combining the risk map with population distribution data, we have developed an indicator that allows monitoring of patterns in non-dietary exposure to pesticides. Our results show that in France, on average, 13% of people might be exposed to pesticides due to living in proximity to treated crops. This exposure is in the lower range for 34%, the moderate range for 40% and the higher range for 25% of the exposed population. The risk evaluation is based on worst-case assumptions and values should not be taken as a regulatory risk assessment but as an indicator to use, for example, for monitoring time trends. The purpose of this indicator is to demonstrate that more granular pesticide data can improve risk reduction strategies. Harmonized and high-resolution data can help in identifying regions on which to focus sustainable farming efforts.
1803.01236
Marcos Trevisan Dr.
Alan Taitz, Diego E Shalom, Marcos A Trevisan
Vocal effort modulates the motor planning of short speech structures
17 pages, 3 figures
Phys. Rev. E 97, 052406 (2018)
10.1103/PhysRevE.97.052406
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Speech requires programming the sequence of vocal gestures that produce the sounds of words. Here we explored the timing of this program by asking our participants to pronounce, as quickly as possible, a sequence of consonant-consonant-vowel (CCV) structures appearing on screen. We measured the delay between visual presentation and voice onset. In the case of plosive consonants, produced by sharp and well-defined movements of the vocal tract, we found that delays are positively correlated with the duration of the transition between consonants. We then used a battery of statistical tests and mathematical vocal models to show that delays reflect the motor planning of CCVs and that transitions are proxy indicators of the vocal effort needed to produce them. These results support the idea that the effort required to produce the sequence of movements of a vocal gesture modulates the onset of the motor plan.
[ { "created": "Sat, 3 Mar 2018 20:45:33 GMT", "version": "v1" } ]
2018-05-23
[ [ "Taitz", "Alan", "" ], [ "Shalom", "Diego E", "" ], [ "Trevisan", "Marcos A", "" ] ]
Speech requires programming the sequence of vocal gestures that produce the sounds of words. Here we explored the timing of this program by asking our participants to pronounce, as quickly as possible, a sequence of consonant-consonant-vowel (CCV) structures appearing on screen. We measured the delay between visual presentation and voice onset. In the case of plosive consonants, produced by sharp and well-defined movements of the vocal tract, we found that delays are positively correlated with the duration of the transition between consonants. We then used a battery of statistical tests and mathematical vocal models to show that delays reflect the motor planning of CCVs and that transitions are proxy indicators of the vocal effort needed to produce them. These results support the idea that the effort required to produce the sequence of movements of a vocal gesture modulates the onset of the motor plan.
1005.4342
Iain Johnston
Sam F. Greenbury and Iain G. Johnston and Matthew A. Smith and Jonathan P. K. Doye and Ard A. Louis
The effect of scale-free topology on the robustness and evolvability of genetic regulatory networks
16 pages, 15 figures
J. Theor. Biol. 267, 48-61 (2010)
10.1016/j.jtbi.2010.08.006
null
q-bio.PE q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate how scale-free (SF) and Erdos-Renyi (ER) topologies affect the interplay between evolvability and robustness of model gene regulatory networks with Boolean threshold dynamics. In agreement with Oikonomou and Cluzel (2006) we find that networks with SFin topologies, that is, SF topology for incoming nodes and ER topology for outgoing nodes, are significantly more evolvable towards specific oscillatory targets than networks with ER topology for both incoming and outgoing nodes. Similar results are found for networks with SFboth and SFout topologies. The functionality of the SFout topology, which most closely resembles the structure of biological gene networks (Babu et al., 2004), is compared to the ER topology in further detail through an extension to multiple target outputs, with either an oscillatory or a non-oscillatory nature. For multiple oscillatory targets of the same length, the differences between SFout and ER networks are enhanced, but for non-oscillatory targets both types of networks show fairly similar evolvability. We find that SF networks generate oscillations much more easily than ER networks do, and this may explain why SF networks are more evolvable than ER networks are for oscillatory phenotypes. In spite of their greater evolvability, we find that networks with SFout topologies are also more robust to mutations than ER networks. Furthermore, the SFout topologies are more robust to changes in initial conditions (environmental robustness). For both topologies, we find that once a population of networks has reached the target state, further neutral evolution can lead to an increase in both the mutational robustness and the environmental robustness to changes in initial conditions.
[ { "created": "Mon, 24 May 2010 14:49:37 GMT", "version": "v1" } ]
2013-09-03
[ [ "Greenbury", "Sam F.", "" ], [ "Johnston", "Iain G.", "" ], [ "Smith", "Matthew A.", "" ], [ "Doye", "Jonathan P. K.", "" ], [ "Louis", "Ard A.", "" ] ]
We investigate how scale-free (SF) and Erdos-Renyi (ER) topologies affect the interplay between evolvability and robustness of model gene regulatory networks with Boolean threshold dynamics. In agreement with Oikonomou and Cluzel (2006) we find that networks with SFin topologies, that is, SF topology for incoming nodes and ER topology for outgoing nodes, are significantly more evolvable towards specific oscillatory targets than networks with ER topology for both incoming and outgoing nodes. Similar results are found for networks with SFboth and SFout topologies. The functionality of the SFout topology, which most closely resembles the structure of biological gene networks (Babu et al., 2004), is compared to the ER topology in further detail through an extension to multiple target outputs, with either an oscillatory or a non-oscillatory nature. For multiple oscillatory targets of the same length, the differences between SFout and ER networks are enhanced, but for non-oscillatory targets both types of networks show fairly similar evolvability. We find that SF networks generate oscillations much more easily than ER networks do, and this may explain why SF networks are more evolvable than ER networks are for oscillatory phenotypes. In spite of their greater evolvability, we find that networks with SFout topologies are also more robust to mutations than ER networks. Furthermore, the SFout topologies are more robust to changes in initial conditions (environmental robustness). For both topologies, we find that once a population of networks has reached the target state, further neutral evolution can lead to an increase in both the mutational robustness and the environmental robustness to changes in initial conditions.
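The Boolean threshold dynamics underlying these gene-network models can be sketched compactly. The synchronous update rule and tie-breaking convention below are common choices and may differ in detail from the paper's; the random ER-like wiring is illustrative only. Because the update is deterministic on a finite state space, every trajectory must eventually enter an attractor (a fixed point or a cycle), which is the kind of oscillatory target phenotype the study evolves toward.

```python
import random

def boolean_threshold_step(state, weights):
    """Synchronous update: node i switches on iff its weighted input sum
    is positive, off iff it is negative, and keeps its previous state on
    a tie (one common convention for Boolean threshold networks)."""
    n = len(state)
    return [
        1 if h > 0 else 0 if h < 0 else state[i]
        for i, h in enumerate(
            sum(weights[i][j] * state[j] for j in range(n)) for i in range(n)
        )
    ]

rng = random.Random(4)
n = 8
# random +1/-1/0 couplings as a stand-in for an ER-like wiring
w = [[rng.choice([-1, 0, 1]) for _ in range(n)] for _ in range(n)]
s = [rng.choice([0, 1]) for _ in range(n)]
seen = [tuple(s)]
for _ in range(2 ** n + 1):  # pigeonhole: some state must repeat by now
    s = boolean_threshold_step(s, w)
    seen.append(tuple(s))
```

Detecting the first repeated state in `seen` identifies the attractor; comparing attractors before and after single-weight mutations is the essence of the robustness measurements described above.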
1309.5458
Tom McLeish FRS
Tom C B McLeish, Thomas L Rogers, Mark R Wilson
Allostery without conformation change: modelling protein dynamics at multiple scales
20 Pages, 8 figures
T. C. B. McLeish, T. L. Rodgers and M. R. Wilson, Phys. Biol., 10, 056004 (2013)
10.1088/1478-3975/10/5/056004
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The original idea of Cooper and Dryden, that allosteric signalling can be induced between distant binding sites on proteins without any change in mean structural conformation, has proved to be a remarkably prescient insight into the rich structure of protein dynamics. It represents an alternative to the celebrated Monod-Wyman-Changeux mechanism and proposes that modulation of the amplitude of thermal fluctuations around a mean structure, rather than shifts in the structure itself, gives rise to allostery in ligand binding. In a complementary approach to experiments on real proteins, here we take a theoretical route to identify the necessary structural components of this mechanism. By reviewing and extending an approach that moves from very coarse-grained to more detailed models, we show that a fundamental requirement for a body supporting fluctuation-induced allostery is a strongly inhomogeneous elastic modulus. This requirement is reflected in many real proteins, where a good approximation of the elastic structure maps strongly coherent domains onto rigid blocks connected by more flexible interface regions.
[ { "created": "Sat, 21 Sep 2013 10:13:36 GMT", "version": "v1" } ]
2013-09-24
[ [ "McLeish", "Tom C B", "" ], [ "Rogers", "Thomas L", "" ], [ "Wilson", "Mark R", "" ] ]
The original idea of Cooper and Dryden, that allosteric signalling can be induced between distant binding sites on proteins without any change in mean structural conformation, has proved to be a remarkably prescient insight into the rich structure of protein dynamics. It represents an alternative to the celebrated Monod-Wyman-Changeux mechanism and proposes that modulation of the amplitude of thermal fluctuations around a mean structure, rather than shifts in the structure itself, gives rise to allostery in ligand binding. In a complementary approach to experiments on real proteins, here we take a theoretical route to identify the necessary structural components of this mechanism. By reviewing and extending an approach that moves from very coarse-grained to more detailed models, we show that a fundamental requirement for a body supporting fluctuation-induced allostery is a strongly inhomogeneous elastic modulus. This requirement is reflected in many real proteins, where a good approximation of the elastic structure maps strongly coherent domains onto rigid blocks connected by more flexible interface regions.
1409.2182
Garrett Evans
Garrett N. Evans
Convolution Metric for Neuron Membrane Potential Recordings
31 pages, 4 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I provide a convolution metric which takes neural membrane potential recordings as arguments and compares their subthreshold features along with the timing and number of spikes within them--summarizing differences in these with a single "distance" between the recordings. Based on van Rossum's 2001 metric for spike trains, the metric relies on a convolution operation that it performs on the input data. The kernel used for the convolution is carefully chosen such that it produces a desirable frequency space response and, unlike van Rossum's kernel, causes the metric to be first order both in differences between nearby spike times and in differences between same-time membrane potential values: an important trait.
[ { "created": "Mon, 8 Sep 2014 01:27:18 GMT", "version": "v1" } ]
2014-09-09
[ [ "Evans", "Garrett N.", "" ] ]
I provide a convolution metric which takes neural membrane potential recordings as arguments and compares their subthreshold features along with the timing and number of spikes within them--summarizing differences in these with a single "distance" between the recordings. Based on van Rossum's 2001 metric for spike trains, the metric relies on a convolution operation that it performs on the input data. The kernel used for the convolution is carefully chosen such that it produces a desirable frequency space response and, unlike van Rossum's kernel, causes the metric to be first order both in differences between nearby spike times and in differences between same-time membrane potential values: an important trait.
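The flavour of such a metric is easy to convey with the classic spike-train version it builds on. The sketch below implements van Rossum's (2001) distance in discrete time with the original causal exponential kernel; Evans' metric uses a different, carefully chosen kernel and acts on the full membrane potential trace, so this is only the baseline being generalised, with illustrative tau/dt values.

```python
import math

def van_rossum_distance(train_a, train_b, tau=0.02, dt=0.001, t_max=1.0):
    """van Rossum (2001) spike-train distance: convolve each train with a
    causal exponential kernel exp(-t/tau), then take the L2 norm of the
    difference, D = sqrt( (1/tau) * integral (f_a - f_b)^2 dt )."""
    n = int(t_max / dt)

    def filtered(train):
        decay = math.exp(-dt / tau)
        spikes = sorted(train)
        idx, level = 0, 0.0
        f = [0.0] * n
        for i in range(n):
            t = i * dt
            level *= decay                      # exponential forgetting
            while idx < len(spikes) and spikes[idx] <= t:
                level += 1.0                    # each spike adds a unit kick
                idx += 1
            f[i] = level
        return f

    fa, fb = filtered(train_a), filtered(train_b)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fa, fb)) * dt / tau)

d_same = van_rossum_distance([0.1, 0.5], [0.1, 0.5])
d_diff = van_rossum_distance([0.1, 0.5], [0.1, 0.6])
```

Identical trains give distance zero, the distance grows with spike-time discrepancies, and the construction is symmetric in its arguments, the basic metric properties the membrane-potential version must also preserve.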
0704.3948
Alexey Mazur K
Alexey K. Mazur
The Worm-Like Chain Theory And Bending Of Short DNA
4 pages, 3 figures, to appear in PRL
Phys. Rev. Lett. 98, 218102, 2007.
10.1103/PhysRevLett.98.218102
null
q-bio.BM cond-mat.soft physics.bio-ph
null
The probability distributions for bending angles in double helical DNA obtained in all-atom molecular dynamics simulations are compared with theoretical predictions. The computed distributions remarkably agree with the worm-like chain theory for double helices of one helical turn and longer, and qualitatively differ from predictions of the semi-elastic chain model. The computed data exhibit only small anomalies in the apparent flexibility of short DNA and cannot account for the recently reported AFM data (Wiggins et al., Nature Nanotechnology 1, 137 (2006)). It is possible that the current atomistic DNA models miss some essential mechanisms of DNA bending on intermediate length scales. Analysis of bent DNA structures reveals, however, that the bending motion is structurally heterogeneous and directionally anisotropic on the intermediate length scales where the experimental anomalies were detected. These effects are essential for interpretation of the experimental data and they also can be responsible for the apparent discrepancy.
[ { "created": "Mon, 30 Apr 2007 14:24:59 GMT", "version": "v1" } ]
2009-11-13
[ [ "Mazur", "Alexey K.", "" ] ]
The probability distributions for bending angles in double helical DNA obtained in all-atom molecular dynamics simulations are compared with theoretical predictions. The computed distributions remarkably agree with the worm-like chain theory for double helices of one helical turn and longer, and qualitatively differ from predictions of the semi-elastic chain model. The computed data exhibit only small anomalies in the apparent flexibility of short DNA and cannot account for the recently reported AFM data (Wiggins et al., Nature Nanotechnology 1, 137 (2006)). It is possible that the current atomistic DNA models miss some essential mechanisms of DNA bending on intermediate length scales. Analysis of bent DNA structures reveals, however, that the bending motion is structurally heterogeneous and directionally anisotropic on the intermediate length scales where the experimental anomalies were detected. These effects are essential for interpretation of the experimental data and they also can be responsible for the apparent discrepancy.
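The worm-like chain prediction being tested here admits a quick numerical check. In a discrete 2D WLC whose joint angles are Gaussian with variance b/lp (segment length b, persistence length lp), the tangent-tangent correlation over contour length s decays as exp(-s/(2*lp)); the factor of 2 is the 2D convention. The paper's simulations are 3D and all-atom, so this is only a schematic analogue with made-up parameter values.

```python
import math
import random

def tangent_correlation(n_chains=2000, n_joints=10, b=1.0, lp=50.0, seed=2):
    """Monte Carlo estimate of <cos(theta(s) - theta(0))> for a discrete
    2D worm-like chain: each joint bends by an independent Gaussian angle
    of variance b/lp, so the tangent angle after n_joints steps is a sum
    of such angles."""
    rng = random.Random(seed)
    sigma = math.sqrt(b / lp)
    acc = 0.0
    for _ in range(n_chains):
        angle = sum(rng.gauss(0.0, sigma) for _ in range(n_joints))
        acc += math.cos(angle)
    return acc / n_chains

corr = tangent_correlation()
# 2D WLC prediction over contour length s = n_joints * b
predicted = math.exp(-10 * 1.0 / (2 * 50.0))
```

The sampled correlation matches the analytic exponential to within Monte Carlo error; deviations from this form at short lengths are exactly the kind of anomaly the AFM experiments reported.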
1308.1564
Tibor Antal
Tibor Antal, P. L. Krapivsky, M. A. Nowak
Spatial evolution of tumors with successive driver mutations
16 pages
null
null
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the spatial evolutionary dynamics of solid tumors as they obtain additional driver mutations. We start with a cancer clone that expands uniformly in three dimensions giving rise to a spherical shape. We assume that cell division occurs on the surface of the growing tumor. Each cell division has a chance to give rise to a mutation that activates an additional driver gene. The resulting clone has an enhanced growth rate, which generates a local ensemble of faster growing cells, thereby distorting the spherical shape of the tumor. We derive analytic formulas for the geometric boundary that separates the original cancer clone from the new mutant as well as the expanding frontier of the new mutant. The total number of original cancer cells converges to a constant as time goes to infinity, because this clone becomes enveloped by mutants. We derive formulas for the abundance and diversity of additional driver mutations as a function of time. Our model is semi-deterministic: the spatial growth of the various cancer clones follows deterministic equations, but the arrival of a new mutant is a stochastic event.
[ { "created": "Wed, 7 Aug 2013 13:19:26 GMT", "version": "v1" } ]
2013-08-08
[ [ "Antal", "Tibor", "" ], [ "Krapivsky", "P. L.", "" ], [ "Nowak", "M. A.", "" ] ]
We study the spatial evolutionary dynamics of solid tumors as they obtain additional driver mutations. We start with a cancer clone that expands uniformly in three dimensions giving rise to a spherical shape. We assume that cell division occurs on the surface of the growing tumor. Each cell division has a chance to give rise to a mutation that activates an additional driver gene. The resulting clone has an enhanced growth rate, which generates a local ensemble of faster growing cells, thereby distorting the spherical shape of the tumor. We derive analytic formulas for the geometric boundary that separates the original cancer clone from the new mutant as well as the expanding frontier of the new mutant. The total number of original cancer cells converges to a constant as time goes to infinity, because this clone becomes enveloped by mutants. We derive formulas for the abundance and diversity of additional driver mutations as a function of time. Our model is semi-deterministic: the spatial growth of the various cancer clones follows deterministic equations, but the arrival of a new mutant is a stochastic event.
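The semi-deterministic structure described above (deterministic spherical growth plus stochastic mutant arrival) can be sketched as an inhomogeneous Poisson process on the growing surface. All parameter values below (growth speed v, mutation rate u, surface cell density rho) are illustrative assumptions, not the paper's.

```python
import math
import random

def first_mutant_time(v=1.0, u=1e-6, rho=1.0, dt=0.01, t_max=1000.0, seed=0):
    """The wild-type clone grows as a sphere of radius R(t) = v*t with
    divisions confined to its surface, so the first driver mutant arrives
    as an inhomogeneous Poisson event with rate u * rho * 4*pi*R(t)**2."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_max:
        rate = u * rho * 4.0 * math.pi * (v * t) ** 2
        if rng.random() < rate * dt:  # Bernoulli thinning; valid for rate*dt << 1
            return t
        t += dt
    return None

times = sorted(t for t in (first_mutant_time(seed=s) for s in range(100))
               if t is not None)
median_t = times[len(times) // 2]
# analytic median from the survival function P(T > t) = exp(-(4*pi/3)*u*rho*v**2*t**3)
predicted = (3.0 * math.log(2.0) / (4.0 * math.pi * 1e-6)) ** (1.0 / 3.0)
```

The empirical median arrival time agrees with the analytic prediction of roughly 55 time units here; after arrival, the mutant's own deterministic expansion on the surface is what generates the clone boundary the paper derives in closed form.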