Column          Type            Range
id              stringlengths   9 – 13
submitter       stringlengths   4 – 48
authors         stringlengths   4 – 9.62k
title           stringlengths   4 – 343
comments        stringlengths   2 – 480
journal-ref     stringlengths   9 – 309
doi             stringlengths   12 – 138
report-no       stringclasses   277 values
categories      stringlengths   8 – 87
license         stringclasses   9 values
orig_abstract   stringlengths   27 – 3.76k
versions        listlengths     1 – 15
update_date     stringlengths   10 – 10
authors_parsed  listlengths     1 – 147
abstract        stringlengths   24 – 3.75k
2404.16357
Zhichao Liang
Zhichao Liang, Yinuo Zhang, Jushen Wu, and Quanying Liu
Reverse engineering the brain input: Network control theory to identify cognitive task-related control nodes
null
null
null
null
q-bio.NC cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
The human brain receives complex inputs when performing cognitive tasks, which range from external inputs via the senses to internal inputs from other brain regions. However, the explicit inputs to the brain during a cognitive task remain unclear. Here, we present an input identification framework for reverse engineering the control nodes and the corresponding inputs to the brain. The framework is verified with synthetic data generated by a predefined linear system, indicating that it can robustly reconstruct data and recover the inputs. We then apply the framework to real motor-task fMRI data from 200 human subjects. Our results show that the model with sparse inputs can reconstruct neural dynamics in motor tasks ($EV=0.779$) and that the identified 28 control nodes largely overlap with the motor system. Underpinned by network control theory, our framework offers a general tool for understanding brain inputs.
[ { "created": "Thu, 25 Apr 2024 06:36:00 GMT", "version": "v1" } ]
2024-04-26
[ [ "Liang", "Zhichao", "" ], [ "Zhang", "Yinuo", "" ], [ "Wu", "Jushen", "" ], [ "Liu", "Quanying", "" ] ]
The human brain receives complex inputs when performing cognitive tasks, which range from external inputs via the senses to internal inputs from other brain regions. However, the explicit inputs to the brain during a cognitive task remain unclear. Here, we present an input identification framework for reverse engineering the control nodes and the corresponding inputs to the brain. The framework is verified with synthetic data generated by a predefined linear system, indicating that it can robustly reconstruct data and recover the inputs. We then apply the framework to real motor-task fMRI data from 200 human subjects. Our results show that the model with sparse inputs can reconstruct neural dynamics in motor tasks ($EV=0.779$) and that the identified 28 control nodes largely overlap with the motor system. Underpinned by network control theory, our framework offers a general tool for understanding brain inputs.
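The gist of such an input-identification step can be illustrated in a few lines. The sketch below is a hypothetical caricature, not the authors' estimator: it fits linear dynamics by least squares, soft-thresholds the one-step residuals into sparse inputs, and ranks regions by input energy; all sizes and the threshold are made up.

```python
# Hypothetical sketch of sparse-input recovery for x_{t+1} = A x_t + u_t.
# Not the paper's algorithm; toy sizes and threshold.
import numpy as np

rng = np.random.default_rng(0)
T, n = 200, 10                       # time points, brain regions (toy sizes)
X = rng.standard_normal((T, n))      # stand-in for regional fMRI time series

# Least-squares dynamics: X[1:] ~ X[:-1] @ M, where M = A^T for row vectors.
M, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)

# Sparse inputs via soft-thresholding of the one-step residuals.
resid = X[1:] - X[:-1] @ M
U = np.sign(resid) * np.maximum(np.abs(resid) - 0.5, 0.0)

# Control nodes = regions carrying the most input energy.
energy = (U ** 2).sum(axis=0)
print("candidate control nodes:", np.argsort(energy)[::-1][:3])
```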
2003.14360
Ryan Renslow
Katherine J. Schultz, Sean M. Colby, Yasemin Yesiltepe, Jamie R. Nu\~nez, Monee Y. McGrady, Ryan R. Renslow
Application and Assessment of Deep Learning for the Generation of Potential NMDA Receptor Antagonists
null
null
10.1039/D0CP03620J
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Uncompetitive antagonists of the N-methyl D-aspartate receptor (NMDAR) have demonstrated therapeutic benefit in the treatment of neurological diseases such as Parkinson's and Alzheimer's, but some also cause dissociative effects that have led to the synthesis of illicit drugs. The ability to generate NMDAR antagonists in silico is therefore desirable both for new medication development and for preempting and identifying new designer drugs. Recently, generative deep learning models have been applied to de novo drug design as a means to expand the amount of chemical space that can be explored for potential drug-like compounds. In this study, we assess the application of a generative model to the NMDAR to achieve two primary objectives: (i) the creation and release of a comprehensive library of experimentally validated NMDAR phencyclidine (PCP) site antagonists to assist the drug discovery community and (ii) an analysis of both the advantages conferred by applying such generative artificial intelligence models to drug design and the current limitations of the approach. We apply, and provide source code for, a variety of ligand- and structure-based assessment techniques used in standard drug discovery analyses to the deep learning-generated compounds. We present twelve candidate antagonists that are not available in existing chemical databases to provide an example of what this type of workflow can achieve, though synthesis and experimental validation of these compounds is still required.
[ { "created": "Tue, 31 Mar 2020 16:41:18 GMT", "version": "v1" } ]
2021-01-14
[ [ "Schultz", "Katherine J.", "" ], [ "Colby", "Sean M.", "" ], [ "Yesiltepe", "Yasemin", "" ], [ "Nuñez", "Jamie R.", "" ], [ "McGrady", "Monee Y.", "" ], [ "Renslow", "Ryan R.", "" ] ]
Uncompetitive antagonists of the N-methyl D-aspartate receptor (NMDAR) have demonstrated therapeutic benefit in the treatment of neurological diseases such as Parkinson's and Alzheimer's, but some also cause dissociative effects that have led to the synthesis of illicit drugs. The ability to generate NMDAR antagonists in silico is therefore desirable both for new medication development and for preempting and identifying new designer drugs. Recently, generative deep learning models have been applied to de novo drug design as a means to expand the amount of chemical space that can be explored for potential drug-like compounds. In this study, we assess the application of a generative model to the NMDAR to achieve two primary objectives: (i) the creation and release of a comprehensive library of experimentally validated NMDAR phencyclidine (PCP) site antagonists to assist the drug discovery community and (ii) an analysis of both the advantages conferred by applying such generative artificial intelligence models to drug design and the current limitations of the approach. We apply, and provide source code for, a variety of ligand- and structure-based assessment techniques used in standard drug discovery analyses to the deep learning-generated compounds. We present twelve candidate antagonists that are not available in existing chemical databases to provide an example of what this type of workflow can achieve, though synthesis and experimental validation of these compounds is still required.
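As a flavour of the ligand-based assessments mentioned, the sketch below ranks candidate molecules by Morgan-fingerprint Tanimoto similarity to phencyclidine using RDKit (an assumed dependency); the SMILES strings and the choice of fingerprint are illustrative, not taken from the paper.

```python
# Illustrative ligand-based screen: fingerprint similarity to a known
# NMDAR PCP-site antagonist. RDKit is an assumed dependency.
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

known = Chem.MolFromSmiles("C1CCC(CC1)(c1ccccc1)N1CCCCC1")   # phencyclidine
candidates = {
    "ketamine": "CNC1(c2ccccc2Cl)CCCCC1=O",
    "memantine": "CC12CC3CC(C)(C1)CC(N)(C3)C2",
}

ref_fp = AllChem.GetMorganFingerprintAsBitVect(known, 2, nBits=2048)
for name, smi in candidates.items():
    fp = AllChem.GetMorganFingerprintAsBitVect(
        Chem.MolFromSmiles(smi), 2, nBits=2048)
    sim = DataStructs.TanimotoSimilarity(ref_fp, fp)
    print(f"{name}: Tanimoto similarity to PCP = {sim:.2f}")
```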
1609.08108
Joel Miller
Joel C. Miller
Mathematical models of SIR disease spread with combined non-sexual and sexual transmission routes
null
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The emergence of diseases such as Zika and Ebola has highlighted the need to understand the role of sexual transmission in the spread of diseases with a primarily non-sexual transmission route. In this paper we develop a number of low-dimensional models which are appropriate for a range of assumptions for how a disease will spread if it has sexual transmission through a sexual contact network combined with some other transmission mechanism, such as direct contact or vectors. The equations derived provide exact predictions for the dynamics of the corresponding simulations in the large population limit.
[ { "created": "Mon, 26 Sep 2016 18:32:21 GMT", "version": "v1" }, { "created": "Wed, 2 Nov 2016 03:38:25 GMT", "version": "v2" } ]
2016-11-03
[ [ "Miller", "Joel C.", "" ] ]
The emergence of diseases such as Zika and Ebola has highlighted the need to understand the role of sexual transmission in the spread of diseases with a primarily non-sexual transmission route. In this paper we develop a number of low-dimensional models which are appropriate for a range of assumptions for how a disease will spread if it has sexual transmission through a sexual contact network combined with some other transmission mechanism, such as direct contact or vectors. The equations derived provide exact predictions for the dynamics of the corresponding simulations in the large population limit.
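A deliberately crude caricature of the combined-route idea: a mass-action SIR system in which a non-sexual rate beta_n and a sexual rate beta_s add to a single force of infection. The paper's models track an explicit sexual contact network; this sketch, with made-up parameters, only shows the bookkeeping.

```python
# Mass-action SIR with two additive transmission routes (toy parameters,
# not from the paper; the paper's exact equations are network-based).
from scipy.integrate import solve_ivp

beta_n, beta_s, gamma = 0.3, 0.1, 0.2   # assumed illustrative rates

def sir(t, y):
    S, I, R = y
    new_inf = (beta_n + beta_s) * S * I  # both routes add to the force of infection
    return [-new_inf, new_inf - gamma * I, gamma * I]

sol = solve_ivp(sir, (0, 100), [0.99, 0.01, 0.0])
print("final epidemic size:", 1 - sol.y[0, -1])
```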
0910.1219
EPTCS
Roberto Barbuti (University of Pisa), Giulio Caravagna (University of Pisa), Paolo Milazzo (University of Pisa), Andrea Maggiolo-Schettini (University of Pisa)
On the Interpretation of Delays in Delay Stochastic Simulation of Biological Systems
null
EPTCS 6, 2009, pp. 17-29
10.4204/EPTCS.6.2
null
q-bio.QM cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Delays in biological systems may be used to model events for which the underlying dynamics cannot be precisely observed. Mathematical modeling of biological systems with delays is usually based on Delay Differential Equations (DDEs), a kind of differential equation in which the derivative of the unknown function at a certain time is given in terms of the values of the function at previous times. In the literature, delay stochastic simulation algorithms have been proposed. These algorithms follow a "delay as duration" approach, namely they are based on an interpretation of a delay as the time elapsing between the start and the termination of a chemical reaction. This interpretation is not suitable for some classes of biological systems in which species involved in a delayed interaction can be involved at the same time in other interactions. We show on a DDE model of tumor growth that the "delay as duration" approach to stochastic simulation is not precise, and we propose a simulation algorithm based on a "purely delayed" interpretation of delays which provides better results on the considered model.
[ { "created": "Wed, 7 Oct 2009 11:25:03 GMT", "version": "v1" } ]
2009-10-08
[ [ "Barbuti", "Roberto", "", "University of Pisa" ], [ "Caravagna", "Giulio", "", "University of\n Pisa" ], [ "Milazzo", "Paolo", "", "University of Pisa" ], [ "Maggiolo-Schettini", "Andrea", "", "University of Pisa" ] ]
Delays in biological systems may be used to model events for which the underlying dynamics cannot be precisely observed. Mathematical modeling of biological systems with delays is usually based on Delay Differential Equations (DDEs), a kind of differential equation in which the derivative of the unknown function at a certain time is given in terms of the values of the function at previous times. In the literature, delay stochastic simulation algorithms have been proposed. These algorithms follow a "delay as duration" approach, namely they are based on an interpretation of a delay as the time elapsing between the start and the termination of a chemical reaction. This interpretation is not suitable for some classes of biological systems in which species involved in a delayed interaction can be involved at the same time in other interactions. We show on a DDE model of tumor growth that the "delay as duration" approach to stochastic simulation is not precise, and we propose a simulation algorithm based on a "purely delayed" interpretation of delays which provides better results on the considered model.
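The two interpretations can be contrasted on a toy delayed reaction X -> Y. The sketch below is hypothetical code, not the authors' algorithm: under "delay as duration" the reactant is consumed at initiation, while under "purely delayed" the entire state update waits for the completion time (clamped at zero here to keep the toy well-defined).

```python
# Toy delay-SSA contrasting the two delay interpretations for one delayed
# reaction X -> Y with rate c and delay tau. Hypothetical sketch.
import heapq, random

def delay_ssa(mode, x0=100, c=0.1, tau=5.0, t_end=50.0, seed=1):
    random.seed(seed)
    t, X, Y = 0.0, x0, 0
    pending = []                              # scheduled completion times
    while True:
        a = c * X                             # propensity
        dt = random.expovariate(a) if a > 0 else float("inf")
        if pending and pending[0] <= t + dt:  # next event: a completion
            t = heapq.heappop(pending)
            if mode == "duration":
                Y += 1                        # X already removed at start
            elif X > 0:                       # "purely delayed": full update now
                X -= 1
                Y += 1
        elif dt == float("inf") or t + dt > t_end:
            break
        else:                                 # next event: an initiation
            t += dt
            heapq.heappush(pending, t + tau)
            if mode == "duration":
                X -= 1                        # consume reactant at initiation
    return X, Y

print(delay_ssa("duration"), delay_ssa("purely delayed"))
```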
2206.05354
Fei He
Dominik Klepl, Fei He, Min Wu, Daniel J. Blackburn, Ptolemaios G. Sarrigiannis
Bispectrum-based Cross-frequency Functional Connectivity: Classification of Alzheimer's disease
5 pages, 4 figures, conference
null
null
null
q-bio.NC eess.SP q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Alzheimer's disease (AD) is a neurodegenerative disease known to affect brain functional connectivity (FC). Linear FC measures have been applied to study the differences in AD by splitting neurophysiological signals such as electroencephalography (EEG) recordings into discrete frequency bands and analysing them in isolation. We address this limitation by quantifying cross-frequency FC in addition to the traditional within-band approach. Cross-bispectrum, a higher-order spectral analysis, is used to measure the nonlinear FC and is compared with the cross-spectrum, which only measures the linear FC within bands. Each frequency coupling is then used to construct an FC network, which is in turn vectorised and used to train a classifier. We show that fusing features from networks improves classification accuracy. Although both within-frequency and cross-frequency networks can be used to predict AD with high accuracy, our results show that bispectrum-based FC outperforms the cross-spectrum, suggesting an important role of cross-frequency FC.
[ { "created": "Fri, 10 Jun 2022 20:53:15 GMT", "version": "v1" } ]
2022-06-14
[ [ "Klepl", "Dominik", "" ], [ "He", "Fei", "" ], [ "Wu", "Min", "" ], [ "Blackburn", "Daniel J.", "" ], [ "Sarrigiannis", "Ptolemaios G.", "" ] ]
Alzheimer's disease (AD) is a neurodegenerative disease known to affect brain functional connectivity (FC). Linear FC measures have been applied to study the differences in AD by splitting neurophysiological signals such as electroencephalography (EEG) recordings into discrete frequency bands and analysing them in isolation. We address this limitation by quantifying cross-frequency FC in addition to the traditional within-band approach. Cross-bispectrum, a higher-order spectral analysis, is used to measure the nonlinear FC and is compared with the cross-spectrum, which only measures the linear FC within bands. Each frequency coupling is then used to construct an FC network, which is in turn vectorised and used to train a classifier. We show that fusing features from networks improves classification accuracy. Although both within-frequency and cross-frequency networks can be used to predict AD with high accuracy, our results show that bispectrum-based FC outperforms the cross-spectrum, suggesting an important role of cross-frequency FC.
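For concreteness, one common segment-averaged estimator of a cross-bispectrum between two channels looks as follows; conventions differ across the literature, so treat this as an illustrative sketch rather than the paper's exact pipeline.

```python
# Segment-averaged cross-bispectrum estimate between channels x and y,
# following one common convention B(f1, f2) = E[X(f1) X(f2) conj(Y(f1+f2))].
import numpy as np

def cross_bispectrum(x, y, nseg=64):
    segs = len(x) // nseg
    B = np.zeros((nseg, nseg), dtype=complex)
    for s in range(segs):
        X = np.fft.fft(x[s * nseg:(s + 1) * nseg])
        Y = np.fft.fft(y[s * nseg:(s + 1) * nseg])
        for f1 in range(nseg // 2):
            for f2 in range(nseg // 2):
                B[f1, f2] += X[f1] * X[f2] * np.conj(Y[(f1 + f2) % nseg])
    return np.abs(B) / segs

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = 0.5 * x + rng.standard_normal(4096)   # crudely coupled toy signals
print(cross_bispectrum(x, y).shape)
```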
2109.05019
Sarwan Ali
Sarwan Ali, Murray Patterson
Spike2Vec: An Efficient and Scalable Embedding Approach for COVID-19 Spike Sequences
Accepted at IEEE International Conference on Big Data (IEEE Big Data)
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/publicdomain/zero/1.0/
With the rapid global spread of COVID-19, more and more data related to this virus is becoming available, including genomic sequence data. The total number of genomic sequences that are publicly available on platforms such as GISAID is currently several million, and is increasing every day. The availability of such \emph{Big Data} creates a new opportunity for researchers to study this virus in detail. This is particularly important given the dynamics of the COVID-19 variants which emerge and circulate. This rich data source will give us insights on the best ways to perform genomic surveillance for this and future pandemic threats, with the ultimate goal of mitigating or eliminating such threats. Analyzing and processing the several million genomic sequences is a challenging task. Although traditional methods for sequence classification have proven effective, they are not designed to deal with these specific types of genomic sequences. Moreover, most of the existing methods also face the issue of scalability. Previous studies which were tailored to coronavirus genomic data proposed to use spike sequences (corresponding to a subsequence of the genome), rather than the complete genomic sequence, to perform different machine learning (ML) tasks such as classification and clustering. However, those methods suffer from scalability issues. In this paper, we propose an approach called Spike2Vec, an efficient and scalable feature vector representation for each spike sequence that can be used for downstream ML tasks. Through experiments, we show that Spike2Vec is not only scalable on several million spike sequences, but also outperforms the baseline models in terms of prediction accuracy, F1 score, etc.
[ { "created": "Sun, 12 Sep 2021 03:16:27 GMT", "version": "v1" }, { "created": "Sat, 9 Oct 2021 13:07:23 GMT", "version": "v2" }, { "created": "Mon, 18 Oct 2021 19:33:46 GMT", "version": "v3" }, { "created": "Mon, 15 Nov 2021 16:25:07 GMT", "version": "v4" } ]
2021-11-16
[ [ "Ali", "Sarwan", "" ], [ "Patterson", "Murray", "" ] ]
With the rapid global spread of COVID-19, more and more data related to this virus is becoming available, including genomic sequence data. The total number of genomic sequences that are publicly available on platforms such as GISAID is currently several million, and is increasing every day. The availability of such \emph{Big Data} creates a new opportunity for researchers to study this virus in detail. This is particularly important given the dynamics of the COVID-19 variants which emerge and circulate. This rich data source will give us insights on the best ways to perform genomic surveillance for this and future pandemic threats, with the ultimate goal of mitigating or eliminating such threats. Analyzing and processing the several million genomic sequences is a challenging task. Although traditional methods for sequence classification have proven effective, they are not designed to deal with these specific types of genomic sequences. Moreover, most of the existing methods also face the issue of scalability. Previous studies which were tailored to coronavirus genomic data proposed to use spike sequences (corresponding to a subsequence of the genome), rather than the complete genomic sequence, to perform different machine learning (ML) tasks such as classification and clustering. However, those methods suffer from scalability issues. In this paper, we propose an approach called Spike2Vec, an efficient and scalable feature vector representation for each spike sequence that can be used for downstream ML tasks. Through experiments, we show that Spike2Vec is not only scalable on several million spike sequences, but also outperforms the baseline models in terms of prediction accuracy, F1 score, etc.
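Spike2Vec is built from k-mers of the spike sequence; the sketch below shows a generic k-mer frequency embedding of the kind the abstract describes (the paper's exact construction, e.g. its choice of k and weighting, may differ).

```python
# Generic k-mer frequency embedding for a protein (spike) sequence.
# Illustrative; not necessarily Spike2Vec's exact construction.
from itertools import product
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"
K = 3
INDEX = {"".join(p): i for i, p in enumerate(product(AMINO, repeat=K))}

def kmer_embedding(seq):
    """Fixed-length k-mer frequency vector for one sequence."""
    v = np.zeros(len(INDEX))
    for i in range(len(seq) - K + 1):
        idx = INDEX.get(seq[i:i + K])
        if idx is not None:          # skip k-mers with non-standard residues
            v[idx] += 1
    return v / max(1, len(seq) - K + 1)

print(kmer_embedding("MFVFLVLLPLVSSQCVNL").shape)   # (8000,) for K=3
```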
1905.02613
Sarah McIntyre
Sarah McIntyre, Athanasia Moungou, Rebecca Boehme, Peder M. Isager, Frances Lau, Ali Israr, Ellen A. Lumpkin, Freddy Abnousi, H{\aa}kan Olausson
Affective touch communication in close adult relationships
Technical paper accepted for presentation at World Haptics 2019. Data and materials available: https://doi.org/10.17605/OSF.IO/7XRWC
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inter-personal touch is a powerful aspect of social interaction that we expect to be particularly important for emotional communication. We studied the capacity of closely acquainted humans to signal the meaning of several word cues (e.g. gratitude, sadness) using touch sensation alone. Participants communicated all cues with above chance performance. We show that emotionally close people can accurately signal the meaning of different words through touch, and that performance is affected by the amount of contextual information available. Even with minimal context and feedback, both attention-getting and love were communicated surprisingly well. Neither the type of close relationship, nor self-reported comfort with touch significantly affected performance.
[ { "created": "Tue, 7 May 2019 14:33:59 GMT", "version": "v1" } ]
2019-05-08
[ [ "McIntyre", "Sarah", "" ], [ "Moungou", "Athanasia", "" ], [ "Boehme", "Rebecca", "" ], [ "Isager", "Peder M.", "" ], [ "Lau", "Frances", "" ], [ "Israr", "Ali", "" ], [ "Lumpkin", "Ellen A.", "" ], [ ...
Inter-personal touch is a powerful aspect of social interaction that we expect to be particularly important for emotional communication. We studied the capacity of closely acquainted humans to signal the meaning of several word cues (e.g. gratitude, sadness) using touch sensation alone. Participants communicated all cues with above chance performance. We show that emotionally close people can accurately signal the meaning of different words through touch, and that performance is affected by the amount of contextual information available. Even with minimal context and feedback, both attention-getting and love were communicated surprisingly well. Neither the type of close relationship, nor self-reported comfort with touch significantly affected performance.
1407.4656
Pierangelo Lombardo
Pierangelo Lombardo, Andrea Gambassi, Luca Dall'Asta
Fixation properties of subdivided populations with balancing selection
19 pages, 13 figures
Phys. Rev. E 91, 032130 (2015)
10.1103/PhysRevE.91.032130
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In subdivided populations, migration acts together with selection and genetic drift and determines their evolution. Building on a recently proposed method, which hinges on the emergence of a time scale separation between local and global dynamics, we study the fixation properties of subdivided populations in the presence of balancing selection. The approximation implied by the method is accurate when the effective selection strength is small and the number of subpopulations is large. In particular, it predicts a phase transition between species coexistence and biodiversity loss in the infinite-size limit and, in finite populations, a nonmonotonic dependence of the mean fixation time on the migration rate. In order to investigate the fixation properties of the subdivided population for stronger selection, we introduce an effective coarser description of the dynamics in terms of a voter model with intermediate states, which highlights the basic mechanisms driving the evolutionary process.
[ { "created": "Thu, 17 Jul 2014 12:48:59 GMT", "version": "v1" }, { "created": "Fri, 20 Mar 2015 21:09:18 GMT", "version": "v2" } ]
2015-03-24
[ [ "Lombardo", "Pierangelo", "" ], [ "Gambassi", "Andrea", "" ], [ "Dall'Asta", "Luca", "" ] ]
In subdivided populations, migration acts together with selection and genetic drift and determines their evolution. Building on a recently proposed method, which hinges on the emergence of a time scale separation between local and global dynamics, we study the fixation properties of subdivided populations in the presence of balancing selection. The approximation implied by the method is accurate when the effective selection strength is small and the number of subpopulations is large. In particular, it predicts a phase transition between species coexistence and biodiversity loss in the infinite-size limit and, in finite populations, a nonmonotonic dependence of the mean fixation time on the migration rate. In order to investigate the fixation properties of the subdivided population for stronger selection, we introduce an effective coarser description of the dynamics in terms of a voter model with intermediate states, which highlights the basic mechanisms driving the evolutionary process.
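A toy individual-based version of this setting can be simulated directly. The sketch below is a caricature (an island-model Moran process in which allele A is favoured when locally rare), with made-up parameters, intended only to make the notions of balancing selection, migration, and fixation time concrete; it is not the paper's analytical method.

```python
# Toy island-model Moran process with balancing selection; measures time to
# fixation of either allele. Illustrative parameters, capped step count.
import numpy as np

def fixation_time(D=5, N=20, s=0.05, m=0.05, max_steps=10**6, seed=0):
    rng = np.random.default_rng(seed)
    k = np.full(D, N // 2)                 # copies of allele A per deme
    for t in range(1, max_steps + 1):
        d = rng.integers(D)
        x = k[d] / N
        wA = 1 + s * (0.5 - x)             # balancing selection: A favoured when rare
        pA = wA * x / (wA * x + (1 - x))
        if rng.random() < m:               # with prob m, parent is a migrant
            pA = k[rng.integers(D)] / N
        k[d] += int(rng.random() < pA) - int(rng.random() < x)
        k[d] = min(max(k[d], 0), N)
        total = k.sum()
        if total == 0 or total == D * N:
            return t
    return None                            # no fixation within max_steps

print("steps to fixation:", fixation_time())
```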
q-bio/0505009
Yong-Yeol Ahn
Yong-Yeol Ahn, Beom Jun Kim, Hawoong Jeong
Wiring cost in the organization of a biological network
null
null
10.1016/j.physa.2005.12.013
null
q-bio.NC
null
To find out the role of the wiring cost in the organization of the neural network of the nematode \textit{Caenorhabditis elegans} (\textit{C. elegans}), we build the neuronal map of \textit{C. elegans} based on geometrical positions of neurons and define the cost as the inter-neuronal Euclidean distance \textit{d}. We show that the wiring probability decays exponentially as a function of \textit{d}. Using the edge exchanging method and the component placement optimization scheme, we show that positions of neurons are not randomly distributed but organized to reduce the total wiring cost. Furthermore, we numerically study the trade-off between the wiring cost and the performance of the Hopfield model on the neural network.
[ { "created": "Wed, 4 May 2005 20:05:49 GMT", "version": "v1" } ]
2009-11-11
[ [ "Ahn", "Yong-Yeol", "" ], [ "Kim", "Beom Jun", "" ], [ "Jeong", "Hawoong", "" ] ]
To find out the role of the wiring cost in the organization of the neural network of the nematode \textit{Caenorhabditis elegans} (\textit{C. elegans}), we build the neuronal map of \textit{C. elegans} based on geometrical positions of neurons and define the cost as the inter-neuronal Euclidean distance \textit{d}. We show that the wiring probability decays exponentially as a function of \textit{d}. Using the edge exchanging method and the component placement optimization scheme, we show that positions of neurons are not randomly distributed but organized to reduce the total wiring cost. Furthermore, we numerically study the trade-off between the wiring cost and the performance of the Hopfield model on the neural network.
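The cost comparison can be phrased in a few lines: sum Euclidean edge lengths, then compare against degree-preserving rewired surrogates, which is the spirit of the edge-exchanging test. The sketch below uses random toy geometry and a random graph in place of the actual neuronal map, so no real signal is expected.

```python
# Total wiring cost of a spatial network vs degree-preserving rewired
# surrogates (edge-exchange null model). Toy geometry, not C. elegans data.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 100
pos = rng.random((n, 2))                  # toy 2-D neuron positions
G = nx.gnm_random_graph(n, 300, seed=0)   # toy connectome stand-in

def wiring_cost(graph):
    return sum(np.linalg.norm(pos[u] - pos[v]) for u, v in graph.edges())

observed = wiring_cost(G)
null_costs = []
for seed in range(20):
    H = G.copy()
    nx.double_edge_swap(H, nswap=600, max_tries=10000, seed=seed)
    null_costs.append(wiring_cost(H))
print("observed:", observed, "null mean:", np.mean(null_costs))
```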
1604.07176
Zhen Li
Zhen Li and Yizhou Yu
Protein Secondary Structure Prediction Using Cascaded Convolutional and Recurrent Neural Networks
8 pages, 3 figures, Accepted by International Joint Conferences on Artificial Intelligence (IJCAI)
null
null
null
q-bio.BM cs.AI cs.LG cs.NE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein secondary structure prediction is an important problem in bioinformatics. Inspired by the recent successes of deep neural networks, in this paper, we propose an end-to-end deep network that predicts protein secondary structures from integrated local and global contextual features. Our deep architecture leverages convolutional neural networks with different kernel sizes to extract multiscale local contextual features. In addition, considering long-range dependencies existing in amino acid sequences, we set up a bidirectional neural network consisting of gated recurrent units to capture global contextual features. Furthermore, multi-task learning is utilized to predict secondary structure labels and amino-acid solvent accessibility simultaneously. Our proposed deep network demonstrates its effectiveness by achieving state-of-the-art performance, i.e., 69.7% Q8 accuracy on the public benchmark CB513, 76.9% Q8 accuracy on CASP10 and 73.1% Q8 accuracy on CASP11. Our model and results are publicly available.
[ { "created": "Mon, 25 Apr 2016 09:17:18 GMT", "version": "v1" } ]
2016-04-27
[ [ "Li", "Zhen", "" ], [ "Yu", "Yizhou", "" ] ]
Protein secondary structure prediction is an important problem in bioinformatics. Inspired by the recent successes of deep neural networks, in this paper, we propose an end-to-end deep network that predicts protein secondary structures from integrated local and global contextual features. Our deep architecture leverages convolutional neural networks with different kernel sizes to extract multiscale local contextual features. In addition, considering long-range dependencies existing in amino acid sequences, we set up a bidirectional neural network consisting of gated recurrent units to capture global contextual features. Furthermore, multi-task learning is utilized to predict secondary structure labels and amino-acid solvent accessibility simultaneously. Our proposed deep network demonstrates its effectiveness by achieving state-of-the-art performance, i.e., 69.7% Q8 accuracy on the public benchmark CB513, 76.9% Q8 accuracy on CASP10 and 73.1% Q8 accuracy on CASP11. Our model and results are publicly available.
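The architecture the abstract describes maps naturally onto a short PyTorch module: parallel convolutions with different kernel sizes, a bidirectional GRU, and two heads for the multi-task outputs. Layer sizes below are illustrative, not the paper's.

```python
# Sketch of the cascaded CNN + bidirectional GRU with multi-task heads
# (8-state secondary structure + solvent accessibility). Illustrative sizes.
import torch
import torch.nn as nn

class CascadedSSPredictor(nn.Module):
    def __init__(self, in_dim=42, conv_dim=64, rnn_dim=128):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(in_dim, conv_dim, k, padding=k // 2) for k in (3, 7, 11)
        )
        self.gru = nn.GRU(3 * conv_dim, rnn_dim, batch_first=True,
                          bidirectional=True)
        self.ss8_head = nn.Linear(2 * rnn_dim, 8)   # Q8 labels
        self.sa_head = nn.Linear(2 * rnn_dim, 2)    # solvent accessibility

    def forward(self, x):               # x: (batch, length, in_dim)
        h = x.transpose(1, 2)           # Conv1d wants (batch, channels, length)
        h = torch.cat([torch.relu(c(h)) for c in self.convs], dim=1)
        h, _ = self.gru(h.transpose(1, 2))
        return self.ss8_head(h), self.sa_head(h)

model = CascadedSSPredictor()
ss8, sa = model(torch.randn(2, 100, 42))
print(ss8.shape, sa.shape)              # (2, 100, 8) (2, 100, 2)
```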
1301.2528
Jose Manuel Mas
Albert Pujol, Raquel Valls, Vesna Radovanovic, Emre Guney, Javier Garcia-Garcia, Victor Codony Domenech, Laura Corredor Gonzalez, J. M. Mas, Baldo Oliva
Virtual-organism toy-model as a tool to develop bioinformatics approaches of Systems Biology for medical-target discovery
KEY WORDS: Systems Biology; Functional Analysis; Topological Analysis; Algorithm; Protein network
null
null
null
q-bio.MN q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Systems Biology has emerged in recent years as a new holistic approach, based on a global understanding of cells rather than a focus on their individual parts (genes or proteins), to better understand the complexity of human cells. Since Systems Biology still does not provide the most accurate answers to our questions, owing to the complexity of cells and the limited quality of the information available for a good gene/protein map analysis, we have created simpler models to make analysis of the map that represents the human cell easier. Therefore, a virtual organism has been designed according to the main physiological rules for humans in order to replicate the human organism and its vital functions. This toy model was constructed by defining the topology of its genes/proteins and the biological functions associated with it. There are several examples of such toy models that emulate natural processes, enabling analysis of virtual life in order to design the best strategy for understanding real life. The strategy applied in this study combines topological and functional analysis, integrating knowledge about the relative position of a node among the others in the map with the conclusions generated by mathematical models that reproduce functional data of the virtual organism. Our results demonstrate that the combination of both strategies allows a better understanding of our virtual organism even with less input information, and it can therefore be a potential tool to better understand real life.
[ { "created": "Fri, 11 Jan 2013 16:00:30 GMT", "version": "v1" } ]
2013-01-14
[ [ "Pujol", "Albert", "" ], [ "Valls", "Raquel", "" ], [ "Radovanovic", "Vesna", "" ], [ "Guney", "Emre", "" ], [ "Garcia-Garcia", "Javier", "" ], [ "Domenech", "Victor Codony", "" ], [ "Gonzalez", "Laura Corredor...
Systems Biology has emerged in recent years as a new holistic approach, based on a global understanding of cells rather than a focus on their individual parts (genes or proteins), to better understand the complexity of human cells. Since Systems Biology still does not provide the most accurate answers to our questions, owing to the complexity of cells and the limited quality of the information available for a good gene/protein map analysis, we have created simpler models to make analysis of the map that represents the human cell easier. Therefore, a virtual organism has been designed according to the main physiological rules for humans in order to replicate the human organism and its vital functions. This toy model was constructed by defining the topology of its genes/proteins and the biological functions associated with it. There are several examples of such toy models that emulate natural processes, enabling analysis of virtual life in order to design the best strategy for understanding real life. The strategy applied in this study combines topological and functional analysis, integrating knowledge about the relative position of a node among the others in the map with the conclusions generated by mathematical models that reproduce functional data of the virtual organism. Our results demonstrate that the combination of both strategies allows a better understanding of our virtual organism even with less input information, and it can therefore be a potential tool to better understand real life.
1101.4265
Kevin E. Cahill
Kevin Cahill
Models of Membrane Electrostatics
Minor changes, 12 pages, 8 figures
Physical Review E 85(5), 051921 (2012)
10.1103/PhysRevE.85.051921
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I derive formulas for the electrostatic potential of a charge in or near a membrane modeled as one or more dielectric slabs lying between two semi-infinite dielectrics. One can use these formulas in Monte Carlo codes to compute the distribution of ions near cell membranes more accurately than by using Poisson-Boltzmann theory or its linearized version. Here I use them to discuss the electric field of a uniformly charged membrane, the image charges of an ion, the distribution of salt ions near a charged membrane, the energy of a zwitterion near a lipid slab, and the effect of including the phosphate head groups as thin layers of high electric permittivity.
[ { "created": "Sat, 22 Jan 2011 06:30:58 GMT", "version": "v1" }, { "created": "Fri, 17 Jun 2011 04:56:35 GMT", "version": "v2" }, { "created": "Tue, 18 Oct 2011 13:50:54 GMT", "version": "v3" }, { "created": "Wed, 19 Oct 2011 02:33:27 GMT", "version": "v4" }, { "c...
2015-05-27
[ [ "Cahill", "Kevin", "" ] ]
I derive formulas for the electrostatic potential of a charge in or near a membrane modeled as one or more dielectric slabs lying between two semi-infinite dielectrics. One can use these formulas in Monte Carlo codes to compute the distribution of ions near cell membranes more accurately than by using Poisson-Boltzmann theory or its linearized version. Here I use them to discuss the electric field of a uniformly charged membrane, the image charges of an ion, the distribution of salt ions near a charged membrane, the energy of a zwitterion near a lipid slab, and the effect of including the phosphate head groups as thin layers of high electric permittivity.
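The single-interface image charge is the standard textbook building block behind such slab formulas; for one or more slabs the method iterates it into an infinite series of images. In SI units, for a point charge $q$ in a dielectric $\epsilon_1$ facing a half-space of dielectric $\epsilon_2$, the potential in the charge's half-space is
\[
V_1(\mathbf{r}) = \frac{1}{4\pi\epsilon_0\,\epsilon_1}\left(\frac{q}{|\mathbf{r}-\mathbf{r}_q|} + \frac{q'}{|\mathbf{r}-\mathbf{r}_{q'}|}\right),
\qquad
q' = \frac{\epsilon_1-\epsilon_2}{\epsilon_1+\epsilon_2}\,q ,
\]
where $\mathbf{r}_{q'}$ is the mirror image of the charge position $\mathbf{r}_q$ through the interface.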
1908.05923
Josefine Bohr Brask
Josefine Bohr Brask and Jonatan Bohr Brask
Connected cooperators and Trojan horses: How correlations between cooperativeness and social connectedness affect the evolution of cooperation
13 pages, 6 figures
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cooperative behaviour constitutes a key aspect of both human society and non-human animal systems, but explaining how cooperation evolves represents a major scientific challenge. It is now well established that social network structure plays a central role for the viability of cooperation. However, not much is known about the importance of the positions of cooperators in the networks for the evolution of cooperation. Here, we investigate how cooperation is affected by correlations between cooperativeness and individual social connectedness. Using simulation models, we find that the effect of correlation between cooperativeness and connectedness (degree) depends on the social network structure, with positive effect in standard scale-free networks and no effect in standard Poisson networks. Furthermore, when degree assortativity is increased such that individuals cluster with others of similar social connectedness, we find that bridge areas between social clusters can act as barriers to the spread of defection, leading to strong enhancement of cooperation in particular in Poisson networks. But this effect is sensitive to the presence of Trojan horses (defectors placed within cooperator clusters). The study provides new knowledge about the conditions under which cooperation may evolve and persist, and the results are also relevant to consider in regard to human cooperation experiments.
[ { "created": "Fri, 16 Aug 2019 10:33:14 GMT", "version": "v1" }, { "created": "Mon, 27 Jan 2020 14:55:07 GMT", "version": "v2" }, { "created": "Mon, 8 Mar 2021 15:43:36 GMT", "version": "v3" } ]
2021-03-09
[ [ "Brask", "Josefine Bohr", "" ], [ "Brask", "Jonatan Bohr", "" ] ]
Cooperative behaviour constitutes a key aspect of both human society and non-human animal systems, but explaining how cooperation evolves represents a major scientific challenge. It is now well established that social network structure plays a central role for the viability of cooperation. However, not much is known about the importance of the positions of cooperators in the networks for the evolution of cooperation. Here, we investigate how cooperation is affected by correlations between cooperativeness and individual social connectedness. Using simulation models, we find that the effect of correlation between cooperativeness and connectedness (degree) depends on the social network structure, with positive effect in standard scale-free networks and no effect in standard Poisson networks. Furthermore, when degree assortativity is increased such that individuals cluster with others of similar social connectedness, we find that bridge areas between social clusters can act as barriers to the spread of defection, leading to strong enhancement of cooperation in particular in Poisson networks. But this effect is sensitive to the presence of Trojan horses (defectors placed within cooperator clusters). The study provides new knowledge about the conditions under which cooperation may evolve and persist, and the results are also relevant to consider in regard to human cooperation experiments.
1910.07263
Min Yan
Min Yan, Wen-Hao Zhang, He Wang, K. Y. Michael Wong
Bimodular continuous attractor neural networks with static and moving stimuli
15 pages, 11 figures, journal paper
Physical Review E, 107(6), 064302 (2023)
10.1103/PhysRevE.107.064302
null
q-bio.NC nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigated the dynamical behaviors of bimodular continuous attractor neural networks, each processing a modality of sensory input and interacting with each other. We found that when bumps coexist in both modules, the position of each bump is shifted towards the other input when the intermodular couplings are excitatory and is shifted away when inhibitory. When one intermodular coupling is excitatory while another is moderately inhibitory, temporally modulated population spikes can be generated. On further increase of the inhibitory coupling, momentary spikes will emerge. In the regime of bump coexistence, bump heights are primarily strengthened by excitatory intermodular couplings, but there is a lesser weakening effect due to a bump being displaced from the direct input. When bimodular networks serve as decoders of multisensory integration, we extend the Bayesian framework to show that excitatory and inhibitory couplings encode attractive and repulsive priors, respectively. At low disparity, the bump positions decode the posterior means in the Bayesian framework, whereas at high disparity, multiple steady states exist. In the regime of multiple steady states, the less stable state can be accessed if the input causing the more stable state arrives after a sufficiently long delay. When one input is moving, the bump in the corresponding module is pinned when the moving stimulus is weak, unpinned at intermediate stimulus strength, and tracks the input at strong stimulus strength, and the stimulus strengths for these transitions increase with the velocity of the moving stimulus. These results are important to understanding multisensory integration of static and dynamic stimuli.
[ { "created": "Wed, 16 Oct 2019 10:13:49 GMT", "version": "v1" }, { "created": "Sun, 16 Jul 2023 08:31:00 GMT", "version": "v2" } ]
2023-07-18
[ [ "Yan", "Min", "" ], [ "Zhang", "Wen-Hao", "" ], [ "Wang", "He", "" ], [ "Wong", "K. Y. Michael", "" ] ]
We investigated the dynamical behaviors of bimodular continuous attractor neural networks, each processing a modality of sensory input and interacting with each other. We found that when bumps coexist in both modules, the position of each bump is shifted towards the other input when the intermodular couplings are excitatory and is shifted away when inhibitory. When one intermodular coupling is excitatory while another is moderately inhibitory, temporally modulated population spikes can be generated. On further increase of the inhibitory coupling, momentary spikes will emerge. In the regime of bump coexistence, bump heights are primarily strengthened by excitatory intermodular couplings, but there is a lesser weakening effect due to a bump being displaced from the direct input. When bimodular networks serve as decoders of multisensory integration, we extend the Bayesian framework to show that excitatory and inhibitory couplings encode attractive and repulsive priors, respectively. At low disparity, the bump positions decode the posterior means in the Bayesian framework, whereas at high disparity, multiple steady states exist. In the regime of multiple steady states, the less stable state can be accessed if the input causing the more stable state arrives after a sufficiently long delay. When one input is moving, the bump in the corresponding module is pinned when the moving stimulus is weak, unpinned at intermediate stimulus strength, and tracks the input at strong stimulus strength, and the stimulus strengths for these transitions increase with the velocity of the moving stimulus. These results are important to understanding multisensory integration of static and dynamic stimuli.
2403.00716
Leandro Okimoto
Leandro Y. S. Okimoto, Rayol Mendonca-Neto, Fab\'iola G. Nakamura, Eduardo F. Nakamura, David Feny\"o and Claudio T. Silva
Few-shot genes selection: subset of PAM50 genes for breast cancer subtypes classification
null
BMC Bioinformatics 25, 92 (2024)
10.1186/s12859-024-05715-8
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Background: In recent years, researchers have made significant strides in understanding the heterogeneity of breast cancer and its various subtypes. However, the wealth of genomic and proteomic data available today necessitates efficient frameworks, instruments, and computational tools for meaningful analysis. Despite its success as a prognostic tool, the PAM50 gene signature's reliance on many genes presents challenges in terms of cost and complexity. Consequently, there is a need for more efficient methods to accurately classify breast cancer subtypes using a reduced gene set. Results: This study explores the potential of achieving precise breast cancer subtype categorization using a reduced gene set derived from the PAM50 gene signature. By employing a "Few-Shot Genes Selection" method, we randomly select smaller subsets from PAM50 and evaluate their performance using metrics and a linear model, specifically the Support Vector Machine (SVM) classifier. In addition, we aim to assess whether a more compact gene set can maintain performance while simplifying the classification process. Our findings demonstrate that certain reduced gene subsets can perform comparably to, or better than, the full PAM50 gene signature. Conclusions: The identified gene subsets, with 36 genes, have the potential to contribute to the development of more cost-effective and streamlined diagnostic tools in breast cancer research and clinical settings.
[ { "created": "Fri, 1 Mar 2024 18:04:54 GMT", "version": "v1" } ]
2024-03-04
[ [ "Okimoto", "Leandro Y. S.", "" ], [ "Mendonca-Neto", "Rayol", "" ], [ "Nakamura", "Fabíola G.", "" ], [ "Nakamura", "Eduardo F.", "" ], [ "Fenyö", "David", "" ], [ "Silva", "Claudio T.", "" ] ]
Background: In recent years, researchers have made significant strides in understanding the heterogeneity of breast cancer and its various subtypes. However, the wealth of genomic and proteomic data available today necessitates efficient frameworks, instruments, and computational tools for meaningful analysis. Despite its success as a prognostic tool, the PAM50 gene signature's reliance on many genes presents challenges in terms of cost and complexity. Consequently, there is a need for more efficient methods to accurately classify breast cancer subtypes using a reduced gene set. Results: This study explores the potential of achieving precise breast cancer subtype categorization using a reduced gene set derived from the PAM50 gene signature. By employing a "Few-Shot Genes Selection" method, we randomly select smaller subsets from PAM50 and evaluate their performance using metrics and a linear model, specifically the Support Vector Machine (SVM) classifier. In addition, we aim to assess whether a more compact gene set can maintain performance while simplifying the classification process. Our findings demonstrate that certain reduced gene subsets can perform comparably to, or better than, the full PAM50 gene signature. Conclusions: The identified gene subsets, with 36 genes, have the potential to contribute to the development of more cost-effective and streamlined diagnostic tools in breast cancer research and clinical settings.
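The selection loop the abstract describes is straightforward to sketch with scikit-learn: draw random PAM50 subsets, score each with a cross-validated linear SVM, and keep the best panel. In this hypothetical sketch, X (samples × 50 PAM50 genes) and y (subtype labels) are assumed given; subset size and draw count are illustrative.

```python
# Random-subset gene selection scored by a cross-validated linear SVM.
# Hypothetical sketch; X (samples x 50 genes) and y are assumed given.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def best_gene_subset(X, y, subset_size=36, n_draws=200, seed=0):
    rng = np.random.default_rng(seed)
    best_score, best_genes = -np.inf, None
    for _ in range(n_draws):
        genes = rng.choice(X.shape[1], size=subset_size, replace=False)
        score = cross_val_score(LinearSVC(max_iter=5000), X[:, genes], y,
                                cv=5).mean()
        if score > best_score:
            best_score, best_genes = score, genes
    return best_genes, best_score

# usage: genes, score = best_gene_subset(X, y)
```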
2003.11668
Marc Howard
Marc W. Howard and Michael E. Hasselmo
Cognitive computation using neural representations of time and space in the Laplace domain
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Memory for the past makes use of a record of what happened when---a function over past time. Time cells in the hippocampus and temporal context cells in the entorhinal cortex both code for events as a function of past time, but with very different receptive fields. Time cells in the hippocampus can be understood as a compressed estimate of events as a function of the past. Temporal context cells in the entorhinal cortex can be understood as the Laplace transform of that function. Other functional cell types in the hippocampus and related regions, including border cells, place cells, trajectory coding, and splitter cells, can be understood as coding for functions over space or past movements, or for their Laplace transforms. More abstract quantities, like distance in an abstract conceptual space or numerosity, could also be mapped onto populations of neurons coding for the Laplace transform of functions over those variables. Quantitative cognitive models of memory and evidence accumulation can also be specified in this framework, allowing constraints from both behavior and neurophysiology. More generally, the computational power of the Laplace domain could be important for efficiently implementing data-independent operators, which could serve as a basis for neural models of a very broad range of cognitive computations.
[ { "created": "Wed, 25 Mar 2020 22:40:49 GMT", "version": "v1" } ]
2020-03-27
[ [ "Howard", "Marc W.", "" ], [ "Hasselmo", "Michael E.", "" ] ]
Memory for the past makes use of a record of what happened when---a function over past time. Time cells in the hippocampus and temporal context cells in the entorhinal cortex both code for events as a function of past time, but with very different receptive fields. Time cells in the hippocampus can be understood as a compressed estimate of events as a function of the past. Temporal context cells in the entorhinal cortex can be understood as the Laplace transform of that function. Other functional cell types in the hippocampus and related regions, including border cells, place cells, trajectory coding, and splitter cells, can be understood as coding for functions over space or past movements, or for their Laplace transforms. More abstract quantities, like distance in an abstract conceptual space or numerosity, could also be mapped onto populations of neurons coding for the Laplace transform of functions over those variables. Quantitative cognitive models of memory and evidence accumulation can also be specified in this framework, allowing constraints from both behavior and neurophysiology. More generally, the computational power of the Laplace domain could be important for efficiently implementing data-independent operators, which could serve as a basis for neural models of a very broad range of cognitive computations.
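The core mathematical object can be stated compactly: a population of leaky integrators, each with its own rate constant $s$, maintains a real-valued Laplace transform of the past through purely local dynamics,
\[
F(s,t) = \int_{-\infty}^{t} f(t')\,e^{-s\,(t-t')}\,dt',
\qquad
\frac{\partial F(s,t)}{\partial t} = -\,s\,F(s,t) + f(t),
\]
so each unit only needs its own decay rate $s$ and the shared input $f(t)$; time cells then correspond to an approximate inverse transform (e.g., via the Post inversion formula).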
2407.19066
Xuesong Bai
Xuesong Bai, Thomas G. Fai
Stochastic Gene Expression Model of Nuclear-to-Cell Ratio Homeostasis
null
null
null
null
q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Cell size varies between different cell types, and between different growth and osmotic conditions. However, the nuclear-to-cell volume ratio (N/C ratio) remains nearly constant. In this paper, we build on existing deterministic models of N/C ratio homeostasis and develop a simplified gene translation model to study the effect of stochasticity on N/C ratio homeostasis. We solve the corresponding chemical master equation and obtain the mean and variance of the N/C ratio. We also use a Taylor expansion approximation to study the effects of the system size on the fluctuations of the N/C ratio. We then combine the translation model with a cell division model to study the effects of extrinsic noise from cell division on the N/C ratio. Our model demonstrates that N/C ratio homeostasis is maintained when the stochasticity in cell growth is taken into account, that the N/C ratio is largely determined by the gene fraction of nuclear proteins, and that the fluctuations in the N/C ratio diminish as the system size increases.
[ { "created": "Fri, 26 Jul 2024 20:17:17 GMT", "version": "v1" } ]
2024-07-30
[ [ "Bai", "Xuesong", "" ], [ "Fai", "Thomas G.", "" ] ]
Cell size varies between different cell types, and between different growth and osmotic conditions. However, the nuclear-to-cell volume ratio (N/C ratio) remains nearly constant. In this paper, we build on existing deterministic models of N/C ratio homeostasis and develop a simplified gene translation model to study the effect of stochasticity on N/C ratio homeostasis. We solve the corresponding chemical master equation and obtain the mean and variance of the N/C ratio. We also use a Taylor expansion approximation to study the effects of the system size on the fluctuations of the N/C ratio. We then combine the translation model with a cell division model to study the effects of extrinsic noise from cell division on the N/C ratio. Our model demonstrates that N/C ratio homeostasis is maintained when the stochasticity in cell growth is taken into account, that the N/C ratio is largely determined by the gene fraction of nuclear proteins, and that the fluctuations in the N/C ratio diminish as the system size increases.
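A toy Gillespie version of the qualitative story is easy to write down. The sketch below (an illustrative caricature, not the paper's model) tracks nuclear and cytoplasmic protein pools whose production is proportional to their gene fractions, and reads off the mean and variance of the N/C ratio.

```python
# Toy Gillespie simulation: nuclear (N) and cytoplasmic (C) protein pools,
# production proportional to gene fractions, common degradation/dilution.
import numpy as np

def simulate(gene_frac_nuclear=0.3, k_prod=50.0, k_deg=1.0,
             t_end=50.0, seed=0):
    rng = np.random.default_rng(seed)
    t, N, C = 0.0, 10, 20
    ratios = []
    while t < t_end:
        rates = np.array([k_prod * gene_frac_nuclear,        # make nuclear
                          k_prod * (1 - gene_frac_nuclear),  # make cytoplasmic
                          k_deg * N, k_deg * C])             # degrade
        total = rates.sum()
        t += rng.exponential(1 / total)
        r = rng.choice(4, p=rates / total)
        N += int(r == 0) - int(r == 2)
        C += int(r == 1) - int(r == 3)
        if C > 0:
            ratios.append(N / C)
    return np.mean(ratios), np.var(ratios)

print(simulate())   # mean N/C tracks the gene-fraction ratio 0.3/0.7 ~ 0.43
```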
2308.11809
Shirui Chen
Shirui Chen, Linxing Preston Jiang, Rajesh P. N. Rao, Eric Shea-Brown
Expressive probabilistic sampling in recurrent neural networks
null
null
null
null
q-bio.NC cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In sampling-based Bayesian models of brain function, neural activities are assumed to be samples from probability distributions that the brain uses for probabilistic computation. However, a comprehensive understanding of how mechanistic models of neural dynamics can sample from arbitrary distributions is still lacking. We use tools from functional analysis and stochastic differential equations to explore the minimum architectural requirements for $\textit{recurrent}$ neural circuits to sample from complex distributions. We first consider the traditional sampling model consisting of a network of neurons whose outputs directly represent the samples (sampler-only network). We argue that synaptic current and firing-rate dynamics in the traditional model have limited capacity to sample from a complex probability distribution. We show that the firing rate dynamics of a recurrent neural circuit with a separate set of output units can sample from an arbitrary probability distribution. We call such circuits reservoir-sampler networks (RSNs). We propose an efficient training procedure based on denoising score matching that finds recurrent and output weights such that the RSN implements Langevin sampling. We empirically demonstrate our model's ability to sample from several complex data distributions using the proposed neural dynamics and discuss its applicability to developing the next generation of sampling-based brain models.
[ { "created": "Tue, 22 Aug 2023 22:20:39 GMT", "version": "v1" }, { "created": "Wed, 1 Nov 2023 20:37:34 GMT", "version": "v2" }, { "created": "Tue, 14 Nov 2023 21:07:33 GMT", "version": "v3" } ]
2023-11-16
[ [ "Chen", "Shirui", "" ], [ "Jiang", "Linxing Preston", "" ], [ "Rao", "Rajesh P. N.", "" ], [ "Shea-Brown", "Eric", "" ] ]
In sampling-based Bayesian models of brain function, neural activities are assumed to be samples from probability distributions that the brain uses for probabilistic computation. However, a comprehensive understanding of how mechanistic models of neural dynamics can sample from arbitrary distributions is still lacking. We use tools from functional analysis and stochastic differential equations to explore the minimum architectural requirements for $\textit{recurrent}$ neural circuits to sample from complex distributions. We first consider the traditional sampling model consisting of a network of neurons whose outputs directly represent the samples (sampler-only network). We argue that synaptic current and firing-rate dynamics in the traditional model have limited capacity to sample from a complex probability distribution. We show that the firing rate dynamics of a recurrent neural circuit with a separate set of output units can sample from an arbitrary probability distribution. We call such circuits reservoir-sampler networks (RSNs). We propose an efficient training procedure based on denoising score matching that finds recurrent and output weights such that the RSN implements Langevin sampling. We empirically demonstrate our model's ability to sample from several complex data distributions using the proposed neural dynamics and discuss its applicability to developing the next generation of sampling-based brain models.
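The sampling dynamics the RSN is trained to implement is unadjusted Langevin dynamics driven by a score function; the sketch below substitutes the analytic score of a toy 1-D Gaussian mixture for a learned (denoising-score-matched) network.

```python
# Unadjusted Langevin sampling from a toy 1-D two-component Gaussian mixture,
# using its analytic score in place of a trained score network.
import numpy as np

def score(x, mus=(-2.0, 2.0), sigma=0.5):
    # grad_x log sum_i N(x; mu_i, sigma^2), via mixture responsibilities
    w = np.array([np.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for m in mus])
    w /= w.sum(axis=0)
    return sum(wi * (m - x) / sigma ** 2 for wi, m in zip(w, mus))

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)           # initial samples
eps = 0.01
for _ in range(2000):                   # Langevin updates
    x = x + 0.5 * eps * score(x) + np.sqrt(eps) * rng.standard_normal(x.shape)
print("sample mean of |x| (should be near 2):", float(np.abs(x).mean()))
```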
2312.16624
Luca Cattelani
Luca Cattelani and Vittorio Fortino
Dual-stage optimizer for systematic overestimation adjustment applied to multi-objective genetic algorithms for biomarker selection
Added link to source code repository
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The challenge in biomarker discovery using machine learning from omics data lies in the abundance of molecular features but scarcity of samples. Most feature selection methods in machine learning require evaluating various sets of features (models) to determine the most effective combination. This process, typically conducted using a validation dataset, involves testing different feature sets to optimize the model's performance. Evaluations have performance estimation error, and when the selection involves many models, the best ones are almost certainly overestimated. Biomarker identification with feature selection methods can be addressed as a multi-objective problem with trade-offs between predictive ability and parsimony in the number of features. Genetic algorithms are a popular tool for multi-objective optimization, but they evolve numerous solutions and are thus prone to overestimation. Methods have been proposed to reduce the overestimation after a model has already been selected in single-objective problems, but no existing algorithm was capable of reducing the overestimation during the optimization, improving model selection, or operating in the more general multi-objective domain. We propose DOSA-MO, a novel multi-objective optimization wrapper algorithm that learns how the original estimation, its variance, and the feature set size of the solutions predict the overestimation. DOSA-MO adjusts the expectation of the performance during the optimization, improving the composition of the solution set. We verify that DOSA-MO improves the performance of a state-of-the-art genetic algorithm on left-out or external sample sets, when predicting cancer subtypes and/or patient overall survival, using three transcriptomics datasets for kidney and breast cancer.
[ { "created": "Wed, 27 Dec 2023 16:13:14 GMT", "version": "v1" }, { "created": "Tue, 6 Feb 2024 14:57:31 GMT", "version": "v2" }, { "created": "Thu, 29 Feb 2024 15:40:34 GMT", "version": "v3" } ]
2024-03-01
[ [ "Cattelani", "Luca", "" ], [ "Fortino", "Vittorio", "" ] ]
The challenge in biomarker discovery using machine learning from omics data lies in the abundance of molecular features but scarcity of samples. Most feature selection methods in machine learning require evaluating various sets of features (models) to determine the most effective combination. This process, typically conducted using a validation dataset, involves testing different feature sets to optimize the model's performance. Evaluations have performance estimation error, and when the selection involves many models, the best ones are almost certainly overestimated. Biomarker identification with feature selection methods can be addressed as a multi-objective problem with trade-offs between predictive ability and parsimony in the number of features. Genetic algorithms are a popular tool for multi-objective optimization, but they evolve numerous solutions and are thus prone to overestimation. Methods have been proposed to reduce the overestimation after a model has already been selected in single-objective problems, but no existing algorithm was capable of reducing the overestimation during the optimization, improving model selection, or operating in the more general multi-objective domain. We propose DOSA-MO, a novel multi-objective optimization wrapper algorithm that learns how the original estimation, its variance, and the feature set size of the solutions predict the overestimation. DOSA-MO adjusts the expectation of the performance during the optimization, improving the composition of the solution set. We verify that DOSA-MO improves the performance of a state-of-the-art genetic algorithm on left-out or external sample sets, when predicting cancer subtypes and/or patient overall survival, using three transcriptomics datasets for kidney and breast cancer.
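The adjustment idea can be caricatured in a few lines: regress the optimism of previously evaluated solutions on their cross-validation estimate, variance, and feature count, then shrink new estimates by the predicted optimism. This is a hypothetical sketch with made-up numbers, not the DOSA-MO algorithm itself.

```python
# Caricature of overestimation adjustment: learn optimism from
# (cv_score, cv_variance, n_features) of past solutions. Toy numbers.
import numpy as np
from sklearn.linear_model import LinearRegression

# history of evaluated solutions: cv_score, cv_variance, n_features, test_score
history = np.array([
    [0.90, 0.004, 40, 0.82],
    [0.85, 0.002, 20, 0.81],
    [0.88, 0.006, 35, 0.80],
    [0.80, 0.001, 10, 0.79],
])
X_hist, optimism = history[:, :3], history[:, 0] - history[:, 3]
adjuster = LinearRegression().fit(X_hist, optimism)

def adjusted_score(cv_score, cv_var, n_features):
    pred = adjuster.predict([[cv_score, cv_var, n_features]])[0]
    return cv_score - max(pred, 0.0)    # shrink, never inflate

print(adjusted_score(0.92, 0.005, 45))
```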
2404.11761
Jinzhi Lei
Yakun Li, Xiyin Liang, Jinzhi Lei
A computational scheme connecting gene regulatory network dynamics with heterogeneous stem cell regeneration
27 pages, 9 figures
null
null
null
q-bio.MN q-bio.CB
http://creativecommons.org/licenses/by-nc-sa/4.0/
Stem cell regeneration is a vital biological process in self-renewing tissues, governing development and tissue homeostasis. Gene regulatory network dynamics are pivotal in controlling stem cell regeneration and cell type transitions. However, integrating the quantitative dynamics of gene regulatory networks at the single-cell level with stem cell regeneration at the population level poses significant challenges. This study presents a computational framework connecting gene regulatory network dynamics with stem cell regeneration through a data-driven formulation of the inheritance function. The inheritance function captures epigenetic state transitions during cell division in heterogeneous stem cell populations. The proposed scheme enables us to derive the inheritance function from a hybrid model of cross-cell-cycle gene regulatory network dynamics. By explicitly incorporating gene regulatory network structure, it replicates cross-cell-cycle gene regulation dynamics through individual-cell-based modeling. The numerical scheme holds the potential for extension to diverse gene regulatory networks, facilitating a deeper understanding of the connection between gene regulation dynamics and stem cell regeneration.
[ { "created": "Wed, 17 Apr 2024 21:38:42 GMT", "version": "v1" } ]
2024-04-19
[ [ "Li", "Yakun", "" ], [ "Liang", "Xiyin", "" ], [ "Lei", "Jinzhi", "" ] ]
Stem cell regeneration is a vital biological process in self-renewing tissues, governing development and tissue homeostasis. Gene regulatory network dynamics are pivotal in controlling stem cell regeneration and cell type transitions. However, integrating the quantitative dynamics of gene regulatory networks at the single-cell level with stem cell regeneration at the population level poses significant challenges. This study presents a computational framework connecting gene regulatory network dynamics with stem cell regeneration through a data-driven formulation of the inheritance function. The inheritance function captures epigenetic state transitions during cell division in heterogeneous stem cell populations. The proposed scheme enables us to derive the inheritance function from a hybrid model of cross-cell-cycle gene regulatory network dynamics. By explicitly incorporating gene regulatory network structure, it replicates cross-cell-cycle gene regulation dynamics through individual-cell-based modeling. The numerical scheme holds the potential for extension to diverse gene regulatory networks, facilitating a deeper understanding of the connection between gene regulation dynamics and stem cell regeneration.
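
To make the notion of an inheritance function concrete, here is a minimal individual-cell-based sketch in which a daughter's epigenetic state is sampled around the mother's state at division; the beta-distributed kernel and all parameters are illustrative assumptions, not the hybrid model of the paper.

```python
# Minimal individual-cell-based sketch of an inheritance function at division.
# The beta-distributed inheritance kernel and all parameters are hypothetical
# illustrations of the concept, not the model from the paper.
import numpy as np

rng = np.random.default_rng(1)

def inherit(x_mother, concentration=20.0):
    """Sample a daughter's epigenetic state in [0, 1] given the mother's.

    Daughters cluster around the mother's state; a larger `concentration`
    means more faithful epigenetic inheritance.
    """
    a = 1.0 + concentration * x_mother
    b = 1.0 + concentration * (1.0 - x_mother)
    return rng.beta(a, b)

# Evolve a heterogeneous population through a few rounds of division,
# keeping the population size fixed (a crude stand-in for homeostasis).
population = rng.uniform(0.0, 1.0, size=1000)
for _ in range(5):
    daughters = np.array([inherit(x) for x in population for _ in range(2)])
    rng.shuffle(daughters)
    population = daughters[:1000]

print(f"mean state: {population.mean():.3f}, heterogeneity: {population.std():.3f}")
```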
2012.06222
Laurent Janniere
Steff Horemans, Matthaios Pitoulias, Alexandria Holland, Panos Soultanas (UON), Laurent Janniere (UMR 8030)
Glycolytic pyruvate kinase moonlighting activities in DNA replication initiation and elongation
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cells have evolved a metabolic control of DNA replication to respond to a wide range of nutritional conditions. Accumulating data suggest that this poorly understood control depends, at least in part, on Central Carbon Metabolism (CCM). In Bacillus subtilis, the glycolytic pyruvate kinase (PykA) is intricately linked to replication. This 585 amino-acid-long enzyme comprises a catalytic (Cat) domain that binds to phosphoenolpyruvate (PEP) and ADP to produce pyruvate and ATP, and a C-terminal domain of unknown function. Interestingly, the C-terminal domain, termed PEPut, interacts with Cat and is homologous to a domain that, in other metabolic enzymes, is phosphorylated at a conserved TSH motif at the expense of PEP and ATP to drive sugar import and catalytic or regulatory activities. To gain insights into the role of PykA in replication, DNA synthesis was analyzed in various Cat and PEPut mutants grown in a medium where the metabolic activity of PykA is dispensable for growth. Measurements of replication parameters (ori/ter ratio, C period and fork speed) and of the pyruvate kinase activity showed that PykA mutants exhibit replication defects resulting from side chain modifications in the PykA protein rather than from a reduction of its metabolic activity. Interestingly, Cat and PEPut have distinct commitments in replication: while Cat impacts replication fork speed both positively and negatively, PEPut stimulates initiation through a process depending on the Cat-PEPut interaction and growth conditions. Residues binding to PEP and ADP in Cat, stabilizing the Cat-PEPut interaction and belonging to the TSH motif of PEPut were found important for the commitment of PykA in replication. In vitro, PykA affects the activities of replication enzymes (the polymerase DnaE, helicase DnaC and primase DnaG) essential for initiation and elongation and genetically linked to pykA. Our results thus connect replication initiation and elongation to CCM metabolites (PEP, ATP and ADP), to critical Cat and PEPut residues and to multiple links between PykA and the replication enzymes DnaE, DnaC and DnaG. We propose that PykA is endowed with a moonlighting activity that senses the concentration of signaling metabolites and interacts with replication enzymes to convey information on the cellular metabolic state to the replication machinery, adjusting replication initiation and elongation to metabolism. This defines a new type of replication regulator proposed to be part of the metabolic control that gates replication in the cell cycle.
[ { "created": "Fri, 11 Dec 2020 10:10:33 GMT", "version": "v1" } ]
2020-12-14
[ [ "Horemans", "Steff", "", "UON" ], [ "Pitoulias", "Matthaios", "", "UON" ], [ "Holland", "Alexandria", "", "UON" ], [ "Soultanas", "Panos", "", "UON" ], [ "Janniere", "Laurent", "", "UMR 8030" ] ]
Cells have evolved a metabolic control of DNA replication to respond to a wide range of nutritional conditions. Accumulating data suggest that this poorly understood control depends, at least in part, on Central Carbon Metabolism (CCM). In Bacillus subtilis, the glycolytic pyruvate kinase (PykA) is intricately linked to replication. This 585 amino-acid-long enzyme comprises a catalytic (Cat) domain that binds to phosphoenolpyruvate (PEP) and ADP to produce pyruvate and ATP, and a C-terminal domain of unknown function. Interestingly, the C-terminal domain, termed PEPut, interacts with Cat and is homologous to a domain that, in other metabolic enzymes, is phosphorylated at a conserved TSH motif at the expense of PEP and ATP to drive sugar import and catalytic or regulatory activities. To gain insights into the role of PykA in replication, DNA synthesis was analyzed in various Cat and PEPut mutants grown in a medium where the metabolic activity of PykA is dispensable for growth. Measurements of replication parameters (ori/ter ratio, C period and fork speed) and of the pyruvate kinase activity showed that PykA mutants exhibit replication defects resulting from side chain modifications in the PykA protein rather than from a reduction of its metabolic activity. Interestingly, Cat and PEPut have distinct commitments in replication: while Cat impacts replication fork speed both positively and negatively, PEPut stimulates initiation through a process depending on the Cat-PEPut interaction and growth conditions. Residues binding to PEP and ADP in Cat, stabilizing the Cat-PEPut interaction and belonging to the TSH motif of PEPut were found important for the commitment of PykA in replication. In vitro, PykA affects the activities of replication enzymes (the polymerase DnaE, helicase DnaC and primase DnaG) essential for initiation and elongation and genetically linked to pykA. Our results thus connect replication initiation and elongation to CCM metabolites (PEP, ATP and ADP), to critical Cat and PEPut residues and to multiple links between PykA and the replication enzymes DnaE, DnaC and DnaG. We propose that PykA is endowed with a moonlighting activity that senses the concentration of signaling metabolites and interacts with replication enzymes to convey information on the cellular metabolic state to the replication machinery, adjusting replication initiation and elongation to metabolism. This defines a new type of replication regulator proposed to be part of the metabolic control that gates replication in the cell cycle.
2402.03823
Alexander Yermanos
Andreas Dounas, Tudor-Stefan Cotet, Alexander Yermanos
Learning immune receptor representations with protein language models
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Protein language models (PLMs) learn contextual representations from protein sequences and are profoundly impacting various scientific disciplines spanning protein design, drug discovery, and structural predictions. One particular research area where PLMs have gained considerable attention is adaptive immune receptors, whose tremendous sequence diversity dictates the functional recognition of the adaptive immune system. The self-supervised nature underlying the training of PLMs has been recently leveraged to implement a variety of immune receptor-specific PLMs. These models have demonstrated promise in tasks such as predicting antigen-specificity and structure, computationally engineering therapeutic antibodies, and diagnostics. However, challenges including insufficient training data and considerations related to model architecture, training strategies, and data and model availability must be addressed before fully unlocking the potential of PLMs in understanding, translating, and engineering immune receptors.
[ { "created": "Tue, 6 Feb 2024 09:10:44 GMT", "version": "v1" } ]
2024-02-07
[ [ "Dounas", "Andreas", "" ], [ "Cotet", "Tudor-Stefan", "" ], [ "Yermanos", "Alexander", "" ] ]
Protein language models (PLMs) learn contextual representations from protein sequences and are profoundly impacting various scientific disciplines spanning protein design, drug discovery, and structural predictions. One particular research area where PLMs have gained considerable attention is adaptive immune receptors, whose tremendous sequence diversity dictates the functional recognition of the adaptive immune system. The self-supervised nature underlying the training of PLMs has been recently leveraged to implement a variety of immune receptor-specific PLMs. These models have demonstrated promise in tasks such as predicting antigen-specificity and structure, computationally engineering therapeutic antibodies, and diagnostics. However, challenges including insufficient training data and considerations related to model architecture, training strategies, and data and model availability must be addressed before fully unlocking the potential of PLMs in understanding, translating, and engineering immune receptors.
1603.00397
Carlos Eduardo Cardoso Galhardo
C.E.C. Galhardo, B. C. Coutinho, T.J.P.Penna, M.A. de Menezes and P.P.S. Soares
A Langevin model for complex cardiological time series
null
null
null
null
q-bio.NC cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There have been considerable efforts to understand the underlying complex dynamics in physiological time series. Methods originating from statistical physics have revealed non-Gaussian statistics and long-range correlations in those signals. This suggests that the regulatory system operates out of equilibrium. Herein, the complex fluctuations in blood pressure time series were successfully described by a physiologically motivated Langevin equation with a sigmoid restoring force and multiplicative noise.
[ { "created": "Mon, 29 Feb 2016 18:44:23 GMT", "version": "v1" } ]
2016-03-02
[ [ "Galhardo", "C. E. C.", "" ], [ "Coutinho", "B. C.", "" ], [ "Penna", "T. J. P.", "" ], [ "de Menezes", "M. A.", "" ], [ "Soares", "P. P. S.", "" ] ]
There have been considerable efforts to understand the underlying complex dynamics in physiological time series. Methods originating from statistical physics have revealed non-Gaussian statistics and long-range correlations in those signals. This suggests that the regulatory system operates out of equilibrium. Herein, the complex fluctuations in blood pressure time series were successfully described by a physiologically motivated Langevin equation with a sigmoid restoring force and multiplicative noise.
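
As an illustration of the kind of model described, the following Euler-Maruyama sketch integrates a Langevin equation with a saturating (sigmoid) restoring force and state-dependent noise; the tanh force, the noise law g(x), and the parameters are assumptions for demonstration, not the fitted model of the paper.

```python
# Euler-Maruyama sketch of a Langevin equation with a sigmoid restoring force
# and multiplicative noise. The tanh force, the noise amplitude g(x), and all
# parameters are illustrative assumptions, not the fitted model from the paper.
import numpy as np

rng = np.random.default_rng(2)

a, b = 1.0, 2.0            # strength and steepness of the sigmoid restoring force
sigma = 0.3                # baseline noise amplitude
dt, n_steps = 1e-3, 50_000

def force(x):
    return -a * np.tanh(b * x)                 # sigmoid (saturating) restoring force

def g(x):
    return sigma * (1.0 + 0.5 * np.abs(x))     # state-dependent (multiplicative) noise

x = np.empty(n_steps)
x[0] = 0.0
for t in range(n_steps - 1):
    dW = rng.normal(0.0, np.sqrt(dt))
    x[t + 1] = x[t] + force(x[t]) * dt + g(x[t]) * dW

# The stationary histogram of x can then be compared with the heavy-tailed,
# non-Gaussian fluctuations seen in blood pressure recordings.
print(x.mean(), x.std())
```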
q-bio/0612016
Michael Desai
Michael M. Desai, Daniel S. Fisher
Beneficial mutation-selection balance and the effect of linkage on positive selection
7 Figures, submitted to Genetics
null
null
null
q-bio.PE q-bio.GN
null
When beneficial mutations are rare, they accumulate by a series of selective sweeps. But when they are common, many beneficial mutations will occur before any can fix, so there will be many different mutant lineages in the population concurrently. In an asexual population, these different mutant lineages interfere and not all can fix simultaneously. In addition, further beneficial mutations can accumulate in mutant lineages while these are still a minority of the population. In this paper, we analyze the dynamics of such multiple mutations and the interplay between multiple mutations and interference between clones. These result in substantial variation in fitness accumulating within a single asexual population. The amount of variation is determined by a balance between selection, which destroys variation, and beneficial mutations, which create more. The behavior depends in a subtle way on the population parameters: the population size, the beneficial mutation rate, and the distribution of the fitness increments of the potential beneficial mutations. The mutation-selection balance leads to a continually evolving population with a steady-state fitness variation. This variation increases logarithmically with both population size and mutation rate and sets the rate at which the population accumulates beneficial mutations, which thus also grows only logarithmically with population size and mutation rate. These results imply that mutator phenotypes are less effective in larger asexual populations. They also have consequences for the advantages (or disadvantages) of sex via the Fisher-Muller effect; these are discussed briefly.
[ { "created": "Mon, 11 Dec 2006 00:01:40 GMT", "version": "v1" } ]
2007-05-23
[ [ "Desai", "Michael M.", "" ], [ "Fisher", "Daniel S.", "" ] ]
When beneficial mutations are rare, they accumulate by a series of selective sweeps. But when they are common, many beneficial mutations will occur before any can fix, so there will be many different mutant lineages in the population concurrently. In an asexual population, these different mutant lineages interfere and not all can fix simultaneously. In addition, further beneficial mutations can accumulate in mutant lineages while these are still a minority of the population. In this paper, we analyze the dynamics of such multiple mutations and the interplay between multiple mutations and interference between clones. These result in substantial variation in fitness accumulating within a single asexual population. The amount of variation is determined by a balance between selection, which destroys variation, and beneficial mutations, which create more. The behavior depends in a subtle way on the population parameters: the population size, the beneficial mutation rate, and the distribution of the fitness increments of the potential beneficial mutations. The mutation-selection balance leads to a continually evolving population with a steady-state fitness variation. This variation increases logarithmically with both population size and mutation rate and sets the rate at which the population accumulates beneficial mutations, which thus also grows only logarithmically with population size and mutation rate. These results imply that mutator phenotypes are less effective in larger asexual populations. They also have consequences for the advantages (or disadvantages) of sex via the Fisher-Muller effect; these are discussed briefly.
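
A minimal Wright-Fisher simulation can make the multiple-mutations regime tangible: with a high enough supply of beneficial mutations, several mutant classes coexist and a steady fitness variance builds up. The sketch below is an illustration under assumed parameters, not the analysis of the paper.

```python
# Minimal Wright-Fisher sketch of an asexual population accumulating
# beneficial mutations, to illustrate the steady-state fitness variation
# discussed in the text. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

N, Ub, s, generations = 10_000, 1e-4, 0.02, 1_000
k = np.zeros(N, dtype=int)   # number of beneficial mutations per individual

for _ in range(generations):
    w = (1.0 + s) ** k                      # multiplicative fitness
    p = w / w.sum()
    k = k[rng.choice(N, size=N, p=p)]       # selection + drift (resampling)
    k += rng.poisson(Ub, size=N)            # new beneficial mutations

# At balance, several mutant classes coexist (the "multiple mutations" regime),
# and the variance in k sets the rate of adaptation.
print("mean mutations:", k.mean(), "variance:", k.var())
```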
2001.06718
Christoph Leitner
Christoph Leitner, Christian Baumgartner, Christian Peham and Markus Tilp
Ultrasound in Locomotion Research -- The Quest for Wider Views
Accepted for publication to CAMS-Knee OpenSim 2020, (4 pages, 2 figures, 1 table)
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a systematic review, we investigate current applications of ultrasound in locomotion research. Shortcomings in the range of view of ultrasound systems affect the direct validation of musculoskeletal simulations, so inverse approaches have to be applied. We present currently used methods to estimate muscle and tendon length in human plantarflexors.
[ { "created": "Sat, 18 Jan 2020 19:51:54 GMT", "version": "v1" } ]
2020-01-22
[ [ "Leitner", "Christoph", "" ], [ "Baumgartner", "Christian", "" ], [ "Peham", "Christian", "" ], [ "Tilp", "Markus", "" ] ]
In a systematic review, we investigate current applications of ultrasound in locomotion research. Shortcomings in the range of view of ultrasound systems affect the direct validation of musculoskeletal simulations, so inverse approaches have to be applied. We present currently used methods to estimate muscle and tendon length in human plantarflexors.
1705.08217
Juan Seoane-Sepulveda
Per H. Enflo, Gustavo A. Mu\~noz-Fern\'andez, Juan B. Seoane-Sep\'ulveda
A simple unified explanation of several genetic issues on today's human population and on archaic humans
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give a simple, unified, possible explanation of several debated genetic issues concerning today's humans, Neandertals and Denisovans. In particular, it is shown by means of a simple mathematical model why there is little genetic variation in today's human population or in the Western Neandertal population, why all mtDNA and Y-chromosomes in today's humans seem to have African origin with no trace of Neandertal or Denisovan mtDNA or Y-chromosomes, why a big part of the European gene pool is young (from Neolithic times), and why today's East Asians have more Neandertal genes than today's Europeans.
[ { "created": "Tue, 23 May 2017 12:49:18 GMT", "version": "v1" }, { "created": "Mon, 22 Mar 2021 15:09:58 GMT", "version": "v2" }, { "created": "Sun, 25 Jul 2021 16:36:17 GMT", "version": "v3" } ]
2021-07-27
[ [ "Enflo", "Per H.", "" ], [ "Muñoz-Fernández", "Gustavo A.", "" ], [ "Seoane-Sepúlveda", "Juan B.", "" ] ]
We give a simple, unified, possible explanation of several debated genetic issues concerning today's humans, Neandertals and Denisovans. In particular, it is shown by means of a simple mathematical model why there is little genetic variation in today's human population or in the Western Neandertal population, why all mtDNA and Y-chromosomes in today's humans seem to have African origin with no trace of Neandertal or Denisovan mtDNA or Y-chromosomes, why a big part of the European gene pool is young (from Neolithic times), and why today's East Asians have more Neandertal genes than today's Europeans.
2312.12525
Josiah Couch
Josiah Couch, Rohit Arora, Jasper Braun, Joesph Kaplinsky, Elliot Hill, Anthony Li, Brett Altschul, Ramy Arnaout
Scaling Monte-Carlo-Based Inference on Antibody and TCR Repertoires
11 pages, 2 tables, 4 figures
null
null
null
q-bio.QM q-bio.BM q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Previously, it has been shown that maximum-entropy models of immune-repertoire sequence can be used to determine a person's vaccination status. However, this approach has the drawback of requiring a computationally intensive method to compute each model's partition function ($Z$), the normalization constant required for calculating the probability that the model will generate a given sequence. Specifically, the method required generating approximately $10^{10}$ sequences via Monte-Carlo simulations for each model. This is impractical for large numbers of models. Here we propose an alternative method that requires estimating $Z$ this way for only a few models: it then uses these expensive estimates to estimate $Z$ more efficiently for the remaining models. We demonstrate that this new method enables the generation of accurate estimates for 27 models using only three expensive estimates, thereby reducing the computational cost by an order of magnitude. Importantly, this gain in efficiency is achieved with only minimal impact on classification accuracy. Thus, this new method enables larger-scale investigations in computational immunology and represents a useful contribution to energy-based modeling more generally.
[ { "created": "Tue, 19 Dec 2023 19:01:27 GMT", "version": "v1" } ]
2023-12-21
[ [ "Couch", "Josiah", "" ], [ "Arora", "Rohit", "" ], [ "Braun", "Jasper", "" ], [ "Kaplinsky", "Joesph", "" ], [ "Hill", "Elliot", "" ], [ "Li", "Anthony", "" ], [ "Altschul", "Brett", "" ], [ "Arnaou...
Previously, it has been shown that maximum-entropy models of immune-repertoire sequence can be used to determine a person's vaccination status. However, this approach has the drawback of requiring a computationally intensive method to compute each model's partition function ($Z$), the normalization constant required for calculating the probability that the model will generate a given sequence. Specifically, the method required generating approximately $10^{10}$ sequences via Monte-Carlo simulations for each model. This is impractical for large numbers of models. Here we propose an alternative method that requires estimating $Z$ this way for only a few models: it then uses these expensive estimates to estimate $Z$ more efficiently for the remaining models. We demonstrate that this new method enables the generation of accurate estimates for 27 models using only three expensive estimates, thereby reducing the computational cost by an order of magnitude. Importantly, this gain in efficiency is achieved with only minimal impact on classification accuracy. Thus, this new method enables larger-scale investigations in computational immunology and represents a useful contribution to energy-based modeling more generally.
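
The abstract does not spell out the cheap estimator, so the following sketch only illustrates the general amortization idea: compute an inexpensive per-model statistic, anchor it to a few expensive Monte-Carlo log-Z estimates, and extrapolate to the remaining models. The linear fit and the "mean energy" proxy are guesses for illustration; the paper's actual estimator may differ.

```python
# Sketch of amortizing expensive partition-function estimates across a model
# family: fit log Z against a cheap per-model statistic using a few expensive
# Monte-Carlo anchors, then predict log Z for the remaining models.
import numpy as np

# Hypothetical data: 27 models, each with a cheaply computed summary statistic
# (e.g., mean energy on a fixed set of held-out sequences).
cheap_stat = np.linspace(0.5, 3.0, 27)

# Expensive Monte-Carlo log-Z estimates for only three anchor models.
anchors = [0, 13, 26]
logZ_anchor = np.array([10.2, 17.8, 25.1])        # illustrative values

# Fit a low-order relationship on the anchors and extrapolate to all models.
coeffs = np.polyfit(cheap_stat[anchors], logZ_anchor, deg=1)
logZ_all = np.polyval(coeffs, cheap_stat)

print(logZ_all[:5])   # estimated normalizers for the first few models
```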
1309.5840
Huilin Zhu
Huilin Zhu, Yuebo Fan, Huan Guo, Dan Huang and Sailing He
Reduced interhemispheric functional connectivity of children with autism: evidence from functional near infrared spectroscopy studies
7 pages, 3 figures, fields: autism spectrum disorder, near infrared spectroscopy, functional connectivity
null
null
null
q-bio.NC physics.med-ph physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autism spectrum disorder is a neurodevelopmental disorder characterized by abnormalities of neural synchronization. In this study, functional near-infrared spectroscopy (fNIRS) is used to study the difference in functional connectivity in the left and right inferior frontal cortices (IFC) and temporal cortices (TC) between autistic and typically developing children between 8 and 11 years of age. Ten autistic children and ten typically developing children were recruited in our study for an 8-min resting-state measurement. Results show that the overall interhemispheric correlation of HbO was significantly lower in autistic children than in the controls. In particular, reduced connectivity was found to be most significant in the TC area in autism. Autistic children lose the symmetry in the patterns of correlation maps. These results suggest the feasibility of using the fNIRS method to assess abnormal functional connectivity of the autistic brain and its potential application in autism diagnosis.
[ { "created": "Mon, 23 Sep 2013 15:21:59 GMT", "version": "v1" } ]
2013-09-24
[ [ "Zhu", "Huilin", "" ], [ "Fan", "Yuebo", "" ], [ "Guo", "Huan", "" ], [ "Huang", "Dan", "" ], [ "He", "Sailing", "" ] ]
Autism spectrum disorder is a neurodevelopmental disorder characterized by abnormalities of neural synchronization. In this study, functional near-infrared spectroscopy (fNIRS) is used to study the difference in functional connectivity in the left and right inferior frontal cortices (IFC) and temporal cortices (TC) between autistic and typically developing children between 8 and 11 years of age. Ten autistic children and ten typically developing children were recruited in our study for an 8-min resting-state measurement. Results show that the overall interhemispheric correlation of HbO was significantly lower in autistic children than in the controls. In particular, reduced connectivity was found to be most significant in the TC area in autism. Autistic children lose the symmetry in the patterns of correlation maps. These results suggest the feasibility of using the fNIRS method to assess abnormal functional connectivity of the autistic brain and its potential application in autism diagnosis.
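
The connectivity measure at the heart of such analyses can be sketched in a few lines: the Pearson correlation between the HbO time series of homologous left and right channels. The synthetic signals and the sampling rate below are assumptions for illustration.

```python
# Sketch of the interhemispheric functional-connectivity measure: Pearson
# correlation between HbO time series of homologous left/right channels.
# The synthetic signals and channel pairing are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)

n_samples = 8 * 60 * 10          # 8-min recording at a hypothetical 10 Hz
shared = rng.normal(size=n_samples)

# Two homologous channels sharing a common component (stronger sharing, as in
# controls, would yield a higher interhemispheric correlation).
left_hbo = 0.8 * shared + 0.6 * rng.normal(size=n_samples)
right_hbo = 0.8 * shared + 0.6 * rng.normal(size=n_samples)

r = np.corrcoef(left_hbo, right_hbo)[0, 1]
print(f"interhemispheric HbO correlation: {r:.2f}")
```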
2202.02520
Maik Sch\"unemann
Maik Sch\"unemann, Udo Ernst and Marc Kesseb\"ohmer
A rigorous stochastic theory for spike pattern formation in recurrent neural networks with arbitrary connection topologies
88 pages, 7 figures
null
null
null
q-bio.NC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cortical networks exhibit synchronized activity which often occurs in spontaneous events in the form of spike avalanches. Since synchronization has been causally linked to central aspects of brain function such as selective signal processing and integration of stimulus information, participating in an avalanche is a form of transient synchrony which temporarily creates neural assemblies and hence might be especially useful for implementing flexible information processing. For understanding how assembly formation supports neural computation, it is therefore essential to establish a comprehensive theory of how network structure and dynamics interact to generate specific avalanche patterns and sequences. Here we derive exact avalanche distributions for a finite network of recurrently coupled spiking neurons with arbitrary non-negative interaction weights, which is made possible by formally mapping the model dynamics to a linear, random dynamical system on the $N$-torus and by exploiting self-similarities inherent in the phase space. We introduce the notion of relative unique ergodicity and show that this property is guaranteed if the system is driven by a time-invariant Bernoulli process. This approach allows us not only to provide closed-form analytical expressions for avalanche size, but also to determine the detailed set(s) of units firing in an avalanche (i.e., the avalanche assembly). The underlying dependence between network structure and dynamics is made transparent by expressing the distribution of avalanche assemblies in terms of the induced graph Laplacian. We explore analytical consequences of this dependence and provide illustrating examples.
[ { "created": "Sat, 5 Feb 2022 09:28:40 GMT", "version": "v1" } ]
2022-02-08
[ [ "Schünemann", "Maik", "" ], [ "Ernst", "Udo", "" ], [ "Kesseböhmer", "Marc", "" ] ]
Cortical networks exhibit synchronized activity which often occurs in spontaneous events in the form of spike avalanches. Since synchronization has been causally linked to central aspects of brain function such as selective signal processing and integration of stimulus information, participating in an avalanche is a form of transient synchrony which temporarily creates neural assemblies and hence might be especially useful for implementing flexible information processing. For understanding how assembly formation supports neural computation, it is therefore essential to establish a comprehensive theory of how network structure and dynamics interact to generate specific avalanche patterns and sequences. Here we derive exact avalanche distributions for a finite network of recurrently coupled spiking neurons with arbitrary non-negative interaction weights, which is made possible by formally mapping the model dynamics to a linear, random dynamical system on the $N$-torus and by exploiting self-similarities inherent in the phase space. We introduce the notion of relative unique ergodicity and show that this property is guaranteed if the system is driven by a time-invariant Bernoulli process. This approach allows us not only to provide closed-form analytical expressions for avalanche size, but also to determine the detailed set(s) of units firing in an avalanche (i.e., the avalanche assembly). The underlying dependence between network structure and dynamics is made transparent by expressing the distribution of avalanche assemblies in terms of the induced graph Laplacian. We explore analytical consequences of this dependence and provide illustrating examples.
1705.05570
Damien Chablat
Jing Chang (IRCCyN), Damien Chablat (IRCCyN), Fouad Bennis (IRCCyN), Liang Ma
Muscle Fatigue Analysis Using OpenSim
null
19th International Conference on Human-Computer Interaction, Jul 2017, Vancouver, Canada. 2017
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this research, attempts are made to conduct a concrete muscle fatigue analysis of arbitrary motions in OpenSim, a digital human modeling platform. A plug-in is written on the basis of a muscle fatigue model, which makes it possible to calculate the decline of the force-output capability of each muscle over time. The plug-in is tested on a three-dimensional, 29 degree-of-freedom human model. Motion data are obtained by motion capture during arbitrary running at a speed of 3.96 m/s. Ten muscles are selected for concrete analysis. As a result, the force-output capability of these muscles reduced to 60%-70% after 10 minutes of running, on a general basis. The erector spinae, which loses 39.2% of its maximal capability, is found to be more fatigue-exposed than the others. The influence of subject attributes (fatigability) is evaluated and discussed.
[ { "created": "Tue, 16 May 2017 08:01:16 GMT", "version": "v1" } ]
2017-05-17
[ [ "Chang", "Jing", "", "IRCCyN" ], [ "Chablat", "Damien", "", "IRCCyN" ], [ "Bennis", "Fouad", "", "IRCCyN" ], [ "Ma", "Liang", "" ] ]
In this research, attempts are made to conduct a concrete muscle fatigue analysis of arbitrary motions in OpenSim, a digital human modeling platform. A plug-in is written on the basis of a muscle fatigue model, which makes it possible to calculate the decline of the force-output capability of each muscle over time. The plug-in is tested on a three-dimensional, 29 degree-of-freedom human model. Motion data are obtained by motion capture during arbitrary running at a speed of 3.96 m/s. Ten muscles are selected for concrete analysis. As a result, the force-output capability of these muscles reduced to 60%-70% after 10 minutes of running, on a general basis. The erector spinae, which loses 39.2% of its maximal capability, is found to be more fatigue-exposed than the others. The influence of subject attributes (fatigability) is evaluated and discussed.
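
A simple fatigue law of the kind such a plug-in could implement is sketched below: the remaining force capacity decays in proportion to the current capacity and the relative load, which for a lightly loaded muscle reproduces a 60%-70% residual capacity after 10 minutes. The exact equation and rate constant here are illustrative assumptions, not the plug-in's verified model.

```python
# Sketch of a simple muscle-fatigue law: the remaining force capacity decays
# in proportion to the current capacity and the relative load. The model form
# and parameters are illustrative assumptions.

k = 1.0          # fatigability rate (1/min); subject-dependent
dt = 1.0 / 60.0  # time step (min)
duration = 10.0  # running bout (min)

def simulate_capacity(rel_load):
    """Fraction of maximal force still available after `duration` minutes,
    for a muscle loaded at `rel_load` (load / maximal voluntary contraction)."""
    capacity = 1.0
    for _ in range(int(duration / dt)):
        capacity += -k * rel_load * capacity * dt   # dC/dt = -k * (F/MVC) * C
    return capacity

# A muscle working at ~4% MVC on average ends near 60-70% capacity after 10 min.
print(f"remaining capacity: {simulate_capacity(0.04):.2f}")
```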
2210.06804
Leonardo Trujillo
Leonardo Trujillo, Paul Banse, Guillaume Beslon
Getting higher on rugged landscapes: Inversion mutations open access to fitter adaptive peaks in NK fitness landscapes
35 pages, 10 figures, accepted for publication in PLoS Computational Biology
PLoS Computational Biology October 31, 2022
10.1371/journal.pcbi.1010647
null
q-bio.PE nlin.AO physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Molecular evolution is often conceptualised as adaptive walks on rugged fitness landscapes, driven by mutations and constrained by incremental fitness selection. It is well known that epistasis shapes the ruggedness of the landscape's surface, outlining their topography (with high-fitness peaks separated by valleys of lower fitness genotypes). However, within the strong selection weak mutation (SSWM) limit, once an adaptive walk reaches a local peak, natural selection restricts passage through downstream paths and hampers any possibility of reaching higher fitness values. Here, in addition to the widely used point mutations, we introduce a minimal model of sequence inversions to simulate adaptive walks. We use the well known NK model to instantiate rugged landscapes. We show that adaptive walks can reach higher fitness values through inversion mutations, which, compared to point mutations, allows the evolutionary process to escape local fitness peaks. To elucidate the effects of this chromosomal rearrangement, we use a graph-theoretical representation of accessible mutants and show how new evolutionary paths are uncovered. The present model suggests a simple mechanistic rationale to analyse escapes from local fitness peaks in molecular evolution driven by (intragenic) structural inversions and reveals some consequences of the limits of point mutations for simulations of molecular evolution.
[ { "created": "Thu, 13 Oct 2022 07:34:36 GMT", "version": "v1" } ]
2022-11-03
[ [ "Trujillo", "Leonardo", "" ], [ "Banse", "Paul", "" ], [ "Beslon", "Guillaume", "" ] ]
Molecular evolution is often conceptualised as adaptive walks on rugged fitness landscapes, driven by mutations and constrained by incremental fitness selection. It is well known that epistasis shapes the ruggedness of the landscape's surface, outlining their topography (with high-fitness peaks separated by valleys of lower fitness genotypes). However, within the strong selection weak mutation (SSWM) limit, once an adaptive walk reaches a local peak, natural selection restricts passage through downstream paths and hampers any possibility of reaching higher fitness values. Here, in addition to the widely used point mutations, we introduce a minimal model of sequence inversions to simulate adaptive walks. We use the well known NK model to instantiate rugged landscapes. We show that adaptive walks can reach higher fitness values through inversion mutations, which, compared to point mutations, allows the evolutionary process to escape local fitness peaks. To elucidate the effects of this chromosomal rearrangement, we use a graph-theoretical representation of accessible mutants and show how new evolutionary paths are uncovered. The present model suggests a simple mechanistic rationale to analyse escapes from local fitness peaks in molecular evolution driven by (intragenic) structural inversions and reveals some consequences of the limits of point mutations for simulations of molecular evolution.
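
The two mutation operators compared in the paper are easy to sketch on a binary NK genotype: a point mutation flips one locus, while an inversion reverses a contiguous segment. The minimal implementation below uses a circular K-neighborhood; the paper's exact NK instantiation may differ.

```python
# Sketch of an NK fitness evaluation plus point-mutation and inversion
# operators on a binary genotype. Neighborhood choice and parameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
N, K = 12, 3

# Random fitness contribution tables: one per locus, indexed by the state of
# the locus and its K (circular) right neighbors.
tables = rng.random((N, 2 ** (K + 1)))

def nk_fitness(genome):
    total = 0.0
    for i in range(N):
        idx = 0
        for j in range(K + 1):                  # pack locus i and neighbors
            idx = (idx << 1) | int(genome[(i + j) % N])
        total += tables[i, idx]
    return total / N

def point_mutation(genome):
    g = genome.copy()
    g[rng.integers(N)] ^= 1                     # flip one locus
    return g

def inversion(genome):
    """Reverse a randomly chosen contiguous segment of the genome."""
    g = genome.copy()
    i, j = sorted(rng.integers(N, size=2))
    g[i:j + 1] = g[i:j + 1][::-1].copy()
    return g

genome = rng.integers(0, 2, size=N)
print(nk_fitness(genome), nk_fitness(point_mutation(genome)), nk_fitness(inversion(genome)))
```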
2004.05635
Fred Vermolen
Fred Vermolen
A Spatial Markov Chain Cellular Automata Model for the Spread of Viruses
13 pages, 10 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a Spatial Markov Chain model for the spread of viruses. The model represents humans as the nodes of a graph, with edges between nodes representing relations between humans. In this way, a graph is constructed in which the likelihood of infectious spread from person to person is determined by the intensity of interpersonal contact. Infectious transfer is determined by chance. The model is extended to incorporate various lockdown scenarios.
[ { "created": "Sun, 12 Apr 2020 15:43:43 GMT", "version": "v1" } ]
2020-04-14
[ [ "Vermolen", "Fred", "" ] ]
We consider a Spatial Markov Chain model for the spread of viruses. The model represents humans as the nodes of a graph, with edges between nodes representing relations between humans. In this way, a graph is constructed in which the likelihood of infectious spread from person to person is determined by the intensity of interpersonal contact. Infectious transfer is determined by chance. The model is extended to incorporate various lockdown scenarios.
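
A single transition of such a model can be sketched as follows: each susceptible node escapes infection from each infected neighbor with a probability set by the contact intensity on the connecting edge. The toy graph, weights, and transmission parameter below are illustrative assumptions.

```python
# Sketch of a spatial Markov-chain infection step on a contact graph: each
# edge from an infected to a susceptible person transmits with a probability
# set by the contact intensity. Graph, weights, and states are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n = 6

# Symmetric contact-intensity matrix (0 = no relation); a small toy graph.
W = np.zeros((n, n))
edges = [(0, 1, 0.9), (1, 2, 0.4), (2, 3, 0.7), (3, 4, 0.2), (4, 5, 0.5)]
for i, j, w in edges:
    W[i, j] = W[j, i] = w

state = np.zeros(n, dtype=int)   # 0 = susceptible, 1 = infected
state[0] = 1

def step(state, beta=0.3):
    """One Markov transition: infection passes along edges by chance."""
    new_state = state.copy()
    for i in np.flatnonzero(state == 0):
        # Probability of escaping infection from every infected neighbor.
        p_escape = np.prod(1.0 - beta * W[i] * (state == 1))
        if rng.random() > p_escape:
            new_state[i] = 1
    return new_state

for t in range(10):
    state = step(state)
print("infected:", np.flatnonzero(state == 1))
```

A lockdown scenario could then be represented by scaling down selected entries of the contact matrix W.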
2305.00296
Xerxes D. Arsiwalla
Xerxes D. Arsiwalla
A Cognitive Account of the Puzzle of Ideography
4 pages. Invited commentary. Accepted for publication in Behavioral and Brain Sciences, Cambridge University Press
null
null
null
q-bio.NC cs.CL
http://creativecommons.org/licenses/by/4.0/
In this commentary article to 'The Puzzle of Ideography' by Morin, we put forth a new cognitive account of the puzzle of ideography, that complements the standardization account of Morin. Efficient standardization of spoken language is phenomenologically attributed to a modality effect coupled with chunking of cognitive representations, further aided by multi-sensory integration and the serialized nature of attention. These cognitive mechanisms are crucial for explaining why languages dominate graphic codes for general-purpose human communication.
[ { "created": "Sat, 29 Apr 2023 16:13:13 GMT", "version": "v1" } ]
2023-05-02
[ [ "Arsiwalla", "Xerxes D.", "" ] ]
In this commentary article to 'The Puzzle of Ideography' by Morin, we put forth a new cognitive account of the puzzle of ideography, that complements the standardization account of Morin. Efficient standardization of spoken language is phenomenologically attributed to a modality effect coupled with chunking of cognitive representations, further aided by multi-sensory integration and the serialized nature of attention. These cognitive mechanisms are crucial for explaining why languages dominate graphic codes for general-purpose human communication.
1207.1478
Aaron Clauset
Aaron Clauset
How large should whales be?
7 pages, 3 figures, 2 data tables
PLOS ONE 8(1), e53967 (2013)
10.1371/journal.pone.0053967
null
q-bio.PE physics.bio-ph physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolution and distribution of species body sizes for terrestrial mammals is well-explained by a macroevolutionary tradeoff between short-term selective advantages and long-term extinction risks from increased species body size, unfolding above the 2g minimum size induced by thermoregulation in air. Here, we consider whether this same tradeoff, formalized as a constrained convection-reaction-diffusion system, can also explain the sizes of fully aquatic mammals, which have not previously been considered. By replacing the terrestrial minimum with a pelagic one, at roughly 7000g, the terrestrial mammal tradeoff model accurately predicts, with no tunable parameters, the observed body masses of all extant cetacean species, including the 175,000,000g Blue Whale. This strong agreement between theory and data suggests that a universal macroevolutionary tradeoff governs body size evolution for all mammals, regardless of their habitat. The dramatic sizes of cetaceans can thus be attributed mainly to the increased convective heat loss in water, which shifts the species size distribution upward and pushes its right tail into ranges inaccessible to terrestrial mammals. Under this macroevolutionary tradeoff, the largest expected species occurs where the rate at which smaller-bodied species move up into large-bodied niches approximately equals the rate at which extinction removes them.
[ { "created": "Thu, 5 Jul 2012 22:30:08 GMT", "version": "v1" }, { "created": "Thu, 10 Jan 2013 23:51:23 GMT", "version": "v2" } ]
2013-01-14
[ [ "Clauset", "Aaron", "" ] ]
The evolution and distribution of species body sizes for terrestrial mammals is well-explained by a macroevolutionary tradeoff between short-term selective advantages and long-term extinction risks from increased species body size, unfolding above the 2g minimum size induced by thermoregulation in air. Here, we consider whether this same tradeoff, formalized as a constrained convection-reaction-diffusion system, can also explain the sizes of fully aquatic mammals, which have not previously been considered. By replacing the terrestrial minimum with a pelagic one, at roughly 7000g, the terrestrial mammal tradeoff model accurately predicts, with no tunable parameters, the observed body masses of all extant cetacean species, including the 175,000,000g Blue Whale. This strong agreement between theory and data suggests that a universal macroevolutionary tradeoff governs body size evolution for all mammals, regardless of their habitat. The dramatic sizes of cetaceans can thus be attributed mainly to the increased convective heat loss in water, which shifts the species size distribution upward and pushes its right tail into ranges inaccessible to terrestrial mammals. Under this macroevolutionary tradeoff, the largest expected species occurs where the rate at which smaller-bodied species move up into large-bodied niches approximately equals the rate at which extinction removes them.
1203.6372
Heng Li
Heng Li
A statistical framework for SNP calling, mutation discovery, association mapping and population genetical parameter estimation from sequencing data
The published version (open access now) with addition of a test for multi-allelic sites
Bioinformatics (2011) 27:2987-93
10.1093/bioinformatics/btr509
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Most existing methods for DNA sequence analysis rely on accurate sequences or genotypes. However, in applications of the next-generation sequencing (NGS), accurate genotypes may not be easily obtained (e.g. multi-sample low-coverage sequencing or somatic mutation discovery). These applications press for the development of new methods for analyzing sequence data with uncertainty. Results: We present a statistical framework for calling SNPs, discovering somatic mutations, inferring population genetical parameters and performing association tests directly based on sequencing data without explicit genotyping or linkage-based imputation. On real data, we demonstrate that our method achieves comparable accuracy to alternative methods for estimating site allele count, for inferring allele frequency spectrum and for association mapping. We also highlight the necessity of using symmetric datasets for finding somatic mutations and confirm that for discovering rare events, mismapping is frequently the leading source of errors. Availability: http://samtools.sourceforge.net. Contact: hengli@broadinstitute.org.
[ { "created": "Wed, 28 Mar 2012 20:36:28 GMT", "version": "v1" }, { "created": "Tue, 3 Apr 2012 14:21:44 GMT", "version": "v2" }, { "created": "Sat, 16 Mar 2013 14:58:31 GMT", "version": "v3" } ]
2013-03-19
[ [ "Li", "Heng", "" ] ]
Motivation: Most existing methods for DNA sequence analysis rely on accurate sequences or genotypes. However, in applications of the next-generation sequencing (NGS), accurate genotypes may not be easily obtained (e.g. multi-sample low-coverage sequencing or somatic mutation discovery). These applications press for the development of new methods for analyzing sequence data with uncertainty. Results: We present a statistical framework for calling SNPs, discovering somatic mutations, inferring population genetical parameters and performing association tests directly based on sequencing data without explicit genotyping or linkage-based imputation. On real data, we demonstrate that our method achieves comparable accuracy to alternative methods for estimating site allele count, for inferring allele frequency spectrum and for association mapping. We also highlight the necessity of using symmetric datasets for finding somatic mutations and confirm that for discovering rare events, mismapping is frequently the leading source of errors. Availability: http://samtools.sourceforge.net. Contact: hengli@broadinstitute.org.
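
The genotype-likelihood computation that underlies such genotype-free inference can be illustrated with the standard textbook formulation below: per-base error probabilities are combined into P(reads | genotype) for each diploid genotype, without ever committing to a hard call. This is a generic sketch, not the paper's full framework.

```python
# Sketch of diploid genotype likelihoods from per-base error probabilities,
# the building block of genotype-free sequence analysis. This is the standard
# textbook formulation, offered as an illustration only.
import numpy as np

def genotype_likelihoods(bases, quals, ref="A", alt="C"):
    """Return likelihoods for genotypes with g = 0, 1, 2 copies of `alt`.

    bases: observed bases at the site; quals: Phred qualities per base.
    """
    errs = 10.0 ** (-np.asarray(quals) / 10.0)
    lks = []
    for g in (0, 1, 2):
        p_alt = g / 2.0                      # chance a read samples the alt allele
        lk = 1.0
        for b, e in zip(bases, errs):
            p_b_alt = (1 - e) if b == alt else e / 3.0
            p_b_ref = (1 - e) if b == ref else e / 3.0
            lk *= p_alt * p_b_alt + (1 - p_alt) * p_b_ref
        lks.append(lk)
    return np.array(lks)

# Five reads: three reference, two alternate, all Q20 (1% error rate).
print(genotype_likelihoods(list("AACCA"), [20] * 5))
```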
1210.2944
Tomasz Rutkowski
Zhenyu Cai, Shoji Makino, Takeshi Yamada, and Tomasz M. Rutkowski
Spatial Auditory BCI Paradigm Utilizing N200 and P300 Responses
APSIPA ASC 2012
null
null
null
q-bio.NC cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper presents our recent results obtained with a new auditory spatial localization based BCI paradigm in which the ERP shape differences at early latencies are employed to enhance the traditional P300 responses in an oddball experimental setting. The concept relies on recent results in auditory neuroscience showing the possibility of differentiating early anterior contralateral responses to attended spatial sources. Contemporary stimuli-driven BCI paradigms benefit mostly from the P300 ERP latencies in so-called "aha-response" settings. We show a further enhancement of the classification results in spatial auditory paradigms by incorporating the N200 latencies, which differentiate the brain responses to lateral, in relation to the subject's head, sound locations in the auditory space. The results reveal that these early spatial auditory ERPs boost the online classification results of the BCI application. Online BCI experiments with the multi-command BCI prototype support our research hypothesis with higher classification results and improved information transfer rates.
[ { "created": "Wed, 10 Oct 2012 14:59:26 GMT", "version": "v1" } ]
2012-10-11
[ [ "Cai", "Zhenyu", "" ], [ "Makino", "Shoji", "" ], [ "Yamada", "Takeshi", "" ], [ "Rutkowski", "Tomasz M.", "" ] ]
The paper presents our recent results obtained with a new auditory spatial localization based BCI paradigm in which the ERP shape differences at early latencies are employed to enhance the traditional P300 responses in an oddball experimental setting. The concept relies on recent results in auditory neuroscience showing the possibility of differentiating early anterior contralateral responses to attended spatial sources. Contemporary stimuli-driven BCI paradigms benefit mostly from the P300 ERP latencies in so-called "aha-response" settings. We show a further enhancement of the classification results in spatial auditory paradigms by incorporating the N200 latencies, which differentiate the brain responses to lateral, in relation to the subject's head, sound locations in the auditory space. The results reveal that these early spatial auditory ERPs boost the online classification results of the BCI application. Online BCI experiments with the multi-command BCI prototype support our research hypothesis with higher classification results and improved information transfer rates.
2004.02789
Thomas Larsen Dr.
Jamileh Javidpour, Eduardo Ramirez-Romero, and Thomas Larsen
The effect of temperature on basal metabolism of Mnemiopsis leidyi
6 pages, 3 figures
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To evaluate the influence of temperature on the metabolic performance of the invasive ctenophore Mnemiopsis leidyi, we exposed fully acclimatized adults to conditions typical of the annual variability of the Western Baltic Sea region. We derived basal metabolic rates from the oxygen consumption rates of adult M. leidyi specimens exposed to temperatures between 3.5{\deg}C and 20.5{\deg}C at a salinity of 22. We found a Q10 value of 3.67, which means that carbon-specific respiration rates are about 9 times greater at 20{\deg}C than at 3{\deg}C. At this rate, a small individual of 20 mm oral-aboral length would, without feeding, have enough nutrient reserves to survive 80 days at 3{\deg}C, but only 9 days at 20{\deg}C. Thus, prey availability during late summer is critical for the survival of M. leidyi populations.
[ { "created": "Mon, 6 Apr 2020 16:31:21 GMT", "version": "v1" } ]
2020-04-07
[ [ "Javidpour", "Jamileh", "" ], [ "Ramirez-Romero", "Eduardo", "" ], [ "Larsen", "Thomas", "" ] ]
To evaluate the influence of temperature on the metabolic performance of the invasive ctenophore Mnemiopsis leidyi, we exposed fully acclimatized adults to conditions typical of the annual variability of the Western Baltic Sea region. We derived basal metabolic rates from the oxygen consumption rates of adult M. leidyi specimens exposed to temperatures between 3.5{\deg}C and 20.5{\deg}C at a salinity of 22. We found a Q10 value of 3.67, which means that carbon-specific respiration rates are about 9 times greater at 20{\deg}C than at 3{\deg}C. At this rate, a small individual of 20 mm oral-aboral length would, without feeding, have enough nutrient reserves to survive 80 days at 3{\deg}C, but only 9 days at 20{\deg}C. Thus, prey availability during late summer is critical for the survival of M. leidyi populations.
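
The quoted factor of about nine follows directly from the Q10 relationship, rate ratio = Q10^((T2 - T1) / 10); a one-line check:

```python
# Worked check of the Q10 relationship quoted above.
q10 = 3.67
t1, t2 = 3.0, 20.0
ratio = q10 ** ((t2 - t1) / 10.0)
print(f"respiration at {t2:.0f} C is {ratio:.1f}x that at {t1:.0f} C")  # ~9.1x
```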
q-bio/0508014
Emmanuel Tannenbaum
Emmanuel Tannenbaum
Selective advantage for multicellular replicative strategies: A two-cell example
4 pages, 2 figures, to be submitted to Physical Review Letters
null
10.1103/PhysRevE.73.010904
null
q-bio.PE q-bio.CB
null
This paper develops a quasispecies model where cells can adopt a two-cell survival strategy. Within this strategy, pairs of cells join together, at which point one of the cells sacrifices its own replicative ability for the sake of the other cell. We develop a simplified model for the evolutionary dynamics of this process, allowing us to solve for the steady-state using standard approaches from quasispecies theory. We find that our model exhibits two distinct regimes of behavior: At low concentrations of limiting resource, the two-cell strategy outcompetes the single-cell survival strategy, while at high concentrations of limiting resource, the single-cell survival strategy dominates. Associated with the two solution regimes of our model is a localization to delocalization transition over the portion of the genome coding for the multicell strategy, analogous to the error catastrophe in standard quasispecies models. The existence of such a transition indicates that multicellularity can emerge because natural selection does not act on specific cells, but rather on replicative strategies. Within this framework, individual cells become the means by which replicative strategies are propagated. Such a framework is therefore consistent with the concept that natural selection does not act on individuals, but rather on populations.
[ { "created": "Sat, 13 Aug 2005 17:28:44 GMT", "version": "v1" } ]
2009-11-11
[ [ "Tannenbaum", "Emmanuel", "" ] ]
This paper develops a quasispecies model where cells can adopt a two-cell survival strategy. Within this strategy, pairs of cells join together, at which point one of the cells sacrifices its own replicative ability for the sake of the other cell. We develop a simplified model for the evolutionary dynamics of this process, allowing us to solve for the steady-state using standard approaches from quasispecies theory. We find that our model exhibits two distinct regimes of behavior: At low concentrations of limiting resource, the two-cell strategy outcompetes the single-cell survival strategy, while at high concentrations of limiting resource, the single-cell survival strategy dominates. Associated with the two solution regimes of our model is a localization to delocalization transition over the portion of the genome coding for the multicell strategy, analogous to the error catastrophe in standard quasispecies models. The existence of such a transition indicates that multicellularity can emerge because natural selection does not act on specific cells, but rather on replicative strategies. Within this framework, individual cells become the means by which replicative strategies are propagated. Such a framework is therefore consistent with the concept that natural selection does not act on individuals, but rather on populations.
1709.03000
Ari Kahn
Ari E. Kahn, Elisabeth A. Karuza, Jean M. Vettel, Danielle S. Bassett
Network constraints on learnability of probabilistic motor sequences
29 pages, 4 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human learners are adept at grasping the complex relationships underlying incoming sequential input. In the present work, we formalize complex relationships as graph structures derived from temporal associations in motor sequences. Next, we explore the extent to which learners are sensitive to key variations in the topological properties inherent to those graph structures. Participants performed a probabilistic motor sequence task in which the order of button presses was determined by the traversal of graphs with modular, lattice-like, or random organization. Graph nodes each represented a unique button press and edges represented a transition between button presses. Results indicate that learning, indexed here by participants' response times, was strongly mediated by the graph's meso-scale organization, with modular graphs being associated with shorter response times than random and lattice graphs. Moreover, variations in a node's number of connections (degree) and a node's role in mediating long-distance communication (betweenness centrality) impacted graph learning, even after accounting for level of practice on that node. These results demonstrate that the graph architecture underlying temporal sequences of stimuli fundamentally constrains learning, and moreover that tools from network science provide a valuable framework for assessing how learners encode complex, temporally structured information.
[ { "created": "Sat, 9 Sep 2017 20:13:18 GMT", "version": "v1" }, { "created": "Wed, 25 Apr 2018 20:51:44 GMT", "version": "v2" }, { "created": "Tue, 30 Oct 2018 15:34:12 GMT", "version": "v3" } ]
2018-10-31
[ [ "Kahn", "Ari E.", "" ], [ "Karuza", "Elisabeth A.", "" ], [ "Vettel", "Jean M.", "" ], [ "Bassett", "Danielle S.", "" ] ]
Human learners are adept at grasping the complex relationships underlying incoming sequential input. In the present work, we formalize complex relationships as graph structures derived from temporal associations in motor sequences. Next, we explore the extent to which learners are sensitive to key variations in the topological properties inherent to those graph structures. Participants performed a probabilistic motor sequence task in which the order of button presses was determined by the traversal of graphs with modular, lattice-like, or random organization. Graph nodes each represented a unique button press and edges represented a transition between button presses. Results indicate that learning, indexed here by participants' response times, was strongly mediated by the graph's meso-scale organization, with modular graphs being associated with shorter response times than random and lattice graphs. Moreover, variations in a node's number of connections (degree) and a node's role in mediating long-distance communication (betweenness centrality) impacted graph learning, even after accounting for level of practice on that node. These results demonstrate that the graph architecture underlying temporal sequences of stimuli fundamentally constrains learning, and moreover that tools from network science provide a valuable framework for assessing how learners encode complex, temporally structured information.
2306.01403
Tuan Minh Pham
Tuan Minh Pham and Kunihiko Kaneko
Dynamical Theory for Adaptive Systems
30 pages and 2 figures
null
null
null
q-bio.PE cond-mat.dis-nn nlin.AO physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
The study of adaptive dynamics, involving many degrees of freedom on two separated timescales (one for fast changes of state variables and another for the slow adaptation of parameters controlling the former's dynamics), is crucial for understanding feedback mechanisms underlying evolution and learning. We present a path-integral approach \`a la Martin-Siggia-Rose-De Dominicis-Janssen (MSRDJ) to analyse nonequilibrium phase transitions in such dynamical systems. As an illustration, we apply our framework to the adaptation of gene-regulatory networks under a dynamic genotype-phenotype map: phenotypic variations are shaped by the fast stochastic gene-expression dynamics and are coupled to the slowly evolving distribution of genotypes, each encoded by a network structure. We establish that under this map, genotypes corresponding to reciprocal networks of coherent feedback loops are selected within an intermediate range of environmental noise, leading to phenotypic robustness.
[ { "created": "Fri, 2 Jun 2023 09:49:20 GMT", "version": "v1" }, { "created": "Thu, 21 Sep 2023 15:52:58 GMT", "version": "v2" }, { "created": "Fri, 5 Jan 2024 15:33:34 GMT", "version": "v3" }, { "created": "Fri, 24 May 2024 22:01:54 GMT", "version": "v4" }, { "cre...
2024-08-06
[ [ "Pham", "Tuan Minh", "" ], [ "Kaneko", "Kunihiko", "" ] ]
The study of adaptive dynamics, involving many degrees of freedom on two separated timescales (one for fast changes of state variables and another for the slow adaptation of parameters controlling the former's dynamics), is crucial for understanding feedback mechanisms underlying evolution and learning. We present a path-integral approach \`a la Martin-Siggia-Rose-De Dominicis-Janssen (MSRDJ) to analyse nonequilibrium phase transitions in such dynamical systems. As an illustration, we apply our framework to the adaptation of gene-regulatory networks under a dynamic genotype-phenotype map: phenotypic variations are shaped by the fast stochastic gene-expression dynamics and are coupled to the slowly evolving distribution of genotypes, each encoded by a network structure. We establish that under this map, genotypes corresponding to reciprocal networks of coherent feedback loops are selected within an intermediate range of environmental noise, leading to phenotypic robustness.
1908.03348
Giulio Biroli
Felix Roy, Matthieu Barbier, Giulio Biroli, Guy Bunin
Can endogenous fluctuations persist in high-diversity ecosystems?
null
null
null
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When can complex ecological interactions drive an entire ecosystem into a persistent non-equilibrium state, where species abundances keep fluctuating without going to extinction? We show that high-diversity spatially-extended systems, in which conditions vary somewhat between spatial locations, can exhibit chaotic dynamics which persist for extremely long times. We develop a theoretical framework, based on dynamical mean-field theory, to quantify the conditions under which these fluctuating states exist, and predict their properties. We uncover parallels with the persistence of externally-perturbed ecosystems, such as the role of perturbation strength, synchrony and correlation time. But uniquely to endogenous fluctuations, these properties arise from the species dynamics themselves, creating feedback loops between perturbation and response. A key result is that the fluctuation amplitude and species diversity are tightly linked; in particular, fluctuations enable dramatically more species to coexist than at equilibrium in the very same system. Our findings highlight crucial differences between well-mixed and spatially-extended systems, with implications for experiments and their ability to reproduce natural dynamics. They shed light on the maintenance of biodiversity, and the strength and synchrony of fluctuations observed in natural systems.
[ { "created": "Fri, 9 Aug 2019 07:36:19 GMT", "version": "v1" }, { "created": "Mon, 26 Aug 2019 08:54:40 GMT", "version": "v2" } ]
2019-08-27
[ [ "Roy", "Felix", "" ], [ "Barbier", "Matthieu", "" ], [ "Biroli", "Giulio", "" ], [ "Bunin", "Guy", "" ] ]
When can complex ecological interactions drive an entire ecosystem into a persistent non-equilibrium state, where species abundances keep fluctuating without going to extinction? We show that high-diversity spatially-extended systems, in which conditions vary somewhat between spatial locations, can exhibit chaotic dynamics which persist for extremely long times. We develop a theoretical framework, based on dynamical mean-field theory, to quantify the conditions under which these fluctuating states exist, and predict their properties. We uncover parallels with the persistence of externally-perturbed ecosystems, such as the role of perturbation strength, synchrony and correlation time. But uniquely to endogenous fluctuations, these properties arise from the species dynamics themselves, creating feedback loops between perturbation and response. A key result is that the fluctuation amplitude and species diversity are tightly linked; in particular, fluctuations enable dramatically more species to coexist than at equilibrium in the very same system. Our findings highlight crucial differences between well-mixed and spatially-extended systems, with implications for experiments and their ability to reproduce natural dynamics. They shed light on the maintenance of biodiversity, and the strength and synchrony of fluctuations observed in natural systems.
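A minimal numerical sketch of the kind of many-species dynamics this DMFT describes is the disordered generalized Lotka-Volterra model. The well-mixed version below (all parameter values are illustrative) fluctuates persistently when the interaction heterogeneity sigma is large, while the paper's spatially extended setting couples many such communities.

```python
import numpy as np

# Disordered generalized Lotka-Volterra dynamics (well-mixed caricature;
# the paper's spatially extended setting couples many such communities):
# dx_i/dt = x_i (1 - x_i + sum_j a_ij x_j), a_ij ~ N(mu/S, sigma^2/S).
rng = np.random.default_rng(0)
S, mu, sigma = 200, -4.0, 1.5                  # illustrative values
a = rng.normal(mu / S, sigma / np.sqrt(S), size=(S, S))
np.fill_diagonal(a, 0.0)

x = rng.uniform(0.1, 1.0, S)
dt, steps = 0.01, 50_000
snapshots = []
for t in range(steps):
    x += dt * x * (1.0 - x + a @ x)
    x = np.maximum(x, 1e-10)                   # abundance floor, no extinction
    if t % 500 == 0:
        snapshots.append(x.copy())

# Large sigma tends to give persistent endogenous fluctuations; small
# sigma relaxes to a fixed point.
late = np.log(np.array(snapshots[-20:]))
print("late-time spread of log-abundances:", late.std())
```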
0802.3926
Amy Bauer
Amy L. Bauer, Trachette L. Jackson, Yi Jiang, Thimo Rohlf
Stochastic Network Model of Receptor Cross-Talk Predicts Anti-Angiogenic Effects
17 pages, 4 figures, 1 table
null
null
LA-UR 08-0706
q-bio.MN cond-mat.dis-nn nlin.CG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cancer invasion and metastasis depend on angiogenesis. The cellular processes (growth, migration, and apoptosis) that occur during angiogenesis are tightly regulated by signaling molecules. Thus, understanding how cells synthesize multiple biochemical signals initiated by key external stimuli can lead to the development of novel therapeutic strategies to combat cancer. In the face of large amounts of disjoint experimental data generated from multitudes of laboratories using various assays, theoretical signal transduction models provide a framework to distill this vast amount of data. Such models offer an opportunity to formulate and test new hypotheses, and can be used to make experimentally verifiable predictions. This study is the first to propose a network model that highlights the cross-talk between the key receptors involved in angiogenesis, namely growth factor, integrin, and cadherin receptors. From available experimental data, we construct a stochastic Boolean network model of receptor cross-talk, and systematically analyze the dynamical stability of the network under continuous-time Boolean dynamics with a noisy production function. We find that the signal transduction network exhibits a robust and fast response to external signals, independent of the internal cell state. We derive an input-output table that maps external stimuli to cell phenotypes, which is extraordinarily stable against molecular noise with one important exception: an oscillatory feedback loop between the key signaling molecules RhoA and Rac1 is unstable under arbitrarily low noise, leading to erratic, dysfunctional cell motion. Finally, we show that the network exhibits an apoptotic response rate that increases with noise, suggesting that the probability of programmed cell death depends on cell health.
[ { "created": "Tue, 26 Feb 2008 22:23:06 GMT", "version": "v1" } ]
2008-03-05
[ [ "Bauer", "Amy L.", "" ], [ "Jackson", "Trachette L.", "" ], [ "Jiang", "Yi", "" ], [ "Rohlf", "Thimo", "" ] ]
Cancer invasion and metastasis depend on angiogenesis. The cellular processes (growth, migration, and apoptosis) that occur during angiogenesis are tightly regulated by signaling molecules. Thus, understanding how cells synthesize multiple biochemical signals initiated by key external stimuli can lead to the development of novel therapeutic strategies to combat cancer. In the face of large amounts of disjoint experimental data generated from multitudes of laboratories using various assays, theoretical signal transduction models provide a framework to distill this vast amount of data. Such models offer an opportunity to formulate and test new hypotheses, and can be used to make experimentally verifiable predictions. This study is the first to propose a network model that highlights the cross-talk between the key receptors involved in angiogenesis, namely growth factor, integrin, and cadherin receptors. From available experimental data, we construct a stochastic Boolean network model of receptor cross-talk, and systematically analyze the dynamical stability of the network under continuous-time Boolean dynamics with a noisy production function. We find that the signal transduction network exhibits a robust and fast response to external signals, independent of the internal cell state. We derive an input-output table that maps external stimuli to cell phenotypes, which is extraordinarily stable against molecular noise with one important exception: an oscillatory feedback loop between the key signaling molecules RhoA and Rac1 is unstable under arbitrarily low noise, leading to erratic, dysfunctional cell motion. Finally, we show that the network exhibits an apoptotic response rate that increases with noise, suggesting that the probability of programmed cell death depends on cell health.
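A discrete-time caricature of a noisy Boolean network, shown below, conveys the basic update scheme: each node computes its Boolean rule and the outcome is flipped with a small probability representing molecular noise. The three-node rule set (a toy RhoA-Rac1 antagonism driven by a growth-factor input) is invented for illustration and is not the paper's continuous-time dynamics with a noisy production function.

```python
import random

# Toy noisy Boolean network: every node applies its rule synchronously and
# the outcome is flipped with probability eta (molecular noise). The
# three-node rule set is invented for illustration.
rules = {
    "Rac1": lambda s: s["GF"] and not s["RhoA"],
    "RhoA": lambda s: not s["Rac1"],           # mutual antagonism loop
    "GF":   lambda s: s["GF"],                 # external input held fixed
}

def step(state, eta, rng):
    new = {}
    for node, rule in rules.items():
        value = rule(state)
        if rng.random() < eta:                 # noisy update
            value = not value
        new[node] = value
    return new

rng = random.Random(1)
state = {"GF": True, "Rac1": False, "RhoA": True}
for _ in range(10):
    state = step(state, eta=0.05, rng=rng)
    print(state)                               # the loop oscillates under noise
```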
2003.12667
Silvia Licciardi
Giuseppe Dattoli, Emanuele Di Palma, Silvia Licciardi, Elio Sabia
On the Evolution of Covid-19 in Italy: a Follow up Note
10 pages, 28 figures
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a previous note we analysed the spreading of the COVID-19 disease in Italy, using a model based on the logistic and Hubbert functions. That analysis proved of limited usefulness in terms of predictions and failed to pin down fundamental indicators such as the inflection point of the disease growth. In this note we elaborate on the previous model, using multi-logistic models, and attempt a more realistic analysis.
[ { "created": "Fri, 27 Mar 2020 23:58:22 GMT", "version": "v1" } ]
2020-03-31
[ [ "Dattoli", "Giuseppe", "" ], [ "Di Palma", "Emanuele", "" ], [ "Licciardi", "Silvia", "" ], [ "Sabia", "Elio", "" ] ]
In a previous note we analysed the spreading of the COVID-19 disease in Italy, using a model based on the logistic and Hubbert functions. That analysis proved of limited usefulness in terms of predictions and failed to pin down fundamental indicators such as the inflection point of the disease growth. In this note we elaborate on the previous model, using multi-logistic models, and attempt a more realistic analysis.
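The multi-logistic idea amounts to fitting case data with a sum of logistic waves, each with its own plateau, growth rate and midpoint. A minimal sketch with synthetic data follows; the two-wave form and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sum of two logistic waves, a minimal "multi-logistic" form. Data and all
# parameter values are synthetic; real case counts would replace y.
def bilogistic(t, n1, k1, t1, n2, k2, t2):
    return (n1 / (1 + np.exp(-k1 * (t - t1)))
            + n2 / (1 + np.exp(-k2 * (t - t2))))

t = np.arange(60, dtype=float)
y = bilogistic(t, 5e4, 0.25, 20.0, 3e4, 0.20, 42.0)
y += np.random.default_rng(0).normal(0.0, 500.0, t.size)

p0 = (4e4, 0.2, 18.0, 2e4, 0.2, 40.0)          # rough initial guess
popt, _ = curve_fit(bilogistic, t, y, p0=p0, maxfev=20_000)
print(popt)  # recovered plateaus, growth rates and inflection points
```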
2207.08456
Sara Parmigiani PhD
Sara Parmigiani, Jessica M. Ross, Christopher Cline, Christopher B. Minasi, Juha Gogulski, Corey J Keller
Reliability and validity of TMS-EEG biomarkers
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Noninvasive brain stimulation and neuroimaging have revolutionized human neuroscience, with a multitude of applications including diagnostic subtyping, treatment optimization, and relapse prediction. It is therefore particularly relevant to identify robust and clinically valuable brain biomarkers linking symptoms to their underlying neural mechanisms. Brain biomarkers must be reproducible (i.e., have internal reliability) across similar experiments within a laboratory and be generalizable (i.e., have external reliability) across experimental setups, laboratories, brain regions, and disease states. However, reliability (internal and external) is not alone sufficient; biomarkers also must have validity. Validity describes closeness to a true measure of the underlying neural signal or disease state. We propose that these two metrics, reliability and validity, should be evaluated and optimized before any biomarker is used to inform treatment decisions. Here, we discuss these metrics with respect to causal brain connectivity biomarkers from coupling transcranial magnetic stimulation (TMS) with electroencephalography (EEG). We discuss controversies around TMS-EEG stemming from the multiple large off-target components (noise) and relatively weak genuine brain responses (signal), as is unfortunately often the case with human neuroscience. We review the current state of TMS-EEG recordings, which consist of a mix of reliable noise and unreliable signal. We describe methods for evaluating TMS-EEG biomarkers, including how to assess internal and external reliability across facilities, cognitive states, brain networks, and disorders, and how to validate these biomarkers using invasive neural recordings or treatment response. We provide recommendations to increase reliability and validity, discuss lessons learned, and suggest future directions for the field.
[ { "created": "Mon, 18 Jul 2022 09:23:19 GMT", "version": "v1" } ]
2022-07-19
[ [ "Parmigiani", "Sara", "" ], [ "Ross", "Jessica M.", "" ], [ "Cline", "Christopher", "" ], [ "Minasi", "Christopher B.", "" ], [ "Gogulski", "Juha", "" ], [ "Keller", "Corey J", "" ] ]
Noninvasive brain stimulation and neuroimaging have revolutionized human neuroscience, with a multitude of applications including diagnostic subtyping, treatment optimization, and relapse prediction. It is therefore particularly relevant to identify robust and clinically valuable brain biomarkers linking symptoms to their underlying neural mechanisms. Brain biomarkers must be reproducible (i.e., have internal reliability) across similar experiments within a laboratory and be generalizable (i.e., have external reliability) across experimental setups, laboratories, brain regions, and disease states. However, reliability (internal and external) is not alone sufficient; biomarkers also must have validity. Validity describes closeness to a true measure of the underlying neural signal or disease state. We propose that these two metrics, reliability and validity, should be evaluated and optimized before any biomarker is used to inform treatment decisions. Here, we discuss these metrics with respect to causal brain connectivity biomarkers from coupling transcranial magnetic stimulation (TMS) with electroencephalography (EEG). We discuss controversies around TMS-EEG stemming from the multiple large off-target components (noise) and relatively weak genuine brain responses (signal), as is unfortunately often the case with human neuroscience. We review the current state of TMS-EEG recordings, which consist of a mix of reliable noise and unreliable signal. We describe methods for evaluating TMS-EEG biomarkers, including how to assess internal and external reliability across facilities, cognitive states, brain networks, and disorders, and how to validate these biomarkers using invasive neural recordings or treatment response. We provide recommendations to increase reliability and validity, discuss lessons learned, and suggest future directions for the field.
1411.1917
Vladimir Privman
Vladimir Privman, Evgeny Katz
Can bio-inspired information processing steps be realized as synthetic biochemical processes?
null
Physica Status Solidi A 212, 219-228 (2015)
10.1002/pssa.201400131
VP-260
q-bio.MN cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider possible designs and experimental realizations in synthesized rather than naturally occurring biochemical systems of a selection of basic bio-inspired information processing steps. These include feed-forward loops, which have been identified as the most common information processing motifs in many natural pathways in cellular functioning, and memory-involving processes, specifically, associative memory. Such systems should not be designed to literally mimic nature. Rather, we can be guided by nature's mechanisms for experimenting with new information/signal processing steps which are based on coupled biochemical reactions, but are vastly simpler than natural processes, and which will provide tools for the long-term goal of understanding and harnessing nature's information processing paradigm. Our biochemical processes of choice are enzymatic cascades because of their compatibility with physiological processes in vivo and with electronics (e.g., electrodes) in vitro allowing for networking and interfacing of enzyme-catalyzed processes with other chemical and biochemical reactions. In addition to designing and realizing feed-forward loops and other processes, one has to develop approaches to probe their response to external control of the time-dependence of the input(s), by measuring the resulting time-dependence of the output. The goal will be to demonstrate the expected features, for example, the delayed response and stabilizing effect of the feed-forward loops.
[ { "created": "Fri, 7 Nov 2014 13:58:31 GMT", "version": "v1" } ]
2015-02-09
[ [ "Privman", "Vladimir", "" ], [ "Katz", "Evgeny", "" ] ]
We consider possible designs and experimental realizations in synthesized rather than naturally occurring biochemical systems of a selection of basic bio-inspired information processing steps. These include feed-forward loops, which have been identified as the most common information processing motifs in many natural pathways in cellular functioning, and memory-involving processes, specifically, associative memory. Such systems should not be designed to literally mimic nature. Rather, we can be guided by nature's mechanisms for experimenting with new information/signal processing steps which are based on coupled biochemical reactions, but are vastly simpler than natural processes, and which will provide tools for the long-term goal of understanding and harnessing nature's information processing paradigm. Our biochemical processes of choice are enzymatic cascades because of their compatibility with physiological processes in vivo and with electronics (e.g., electrodes) in vitro allowing for networking and interfacing of enzyme-catalyzed processes with other chemical and biochemical reactions. In addition to designing and realizing feed-forward loops and other processes, one has to develop approaches to probe their response to external control of the time-dependence of the input(s), by measuring the resulting time-dependence of the output. The goal will be to demonstrate the expected features, for example, the delayed response and stabilizing effect of the feed-forward loops.
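As a reference point for the delayed response the authors aim to demonstrate, the sketch below integrates a textbook type-1 coherent feed-forward loop with AND logic (a standard ODE caricature, not the enzymatic cascade implementation discussed here): the output turns on only after the intermediate accumulates, but turns off promptly.

```python
import numpy as np

# Textbook type-1 coherent feed-forward loop with AND logic:
# X activates Y; Z requires both X and Y. Euler integration.
def hill(u, K, n=4):
    return u**n / (K**n + u**n)

dt = 0.01
Y = Z = 0.0
trace = []
for t in np.arange(0.0, 20.0, dt):
    X = 1.0 if 2.0 < t < 12.0 else 0.0           # input pulse
    Y += dt * (hill(X, 0.5) - Y)                 # X -> Y
    Z += dt * (hill(X, 0.5) * hill(Y, 0.5) - Z)  # AND(X, Y) -> Z
    trace.append((t, Z))

# Z rises only after Y accumulates (delayed ON response) but decays
# promptly once X is removed: the sign-sensitive delay of this motif.
print(max(trace, key=lambda p: p[1]))
```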
0708.0342
Tomoshiro Ochiai
J.C. Nacher and T. Ochiai
Transcription and noise in negative feedback loops
Latex, 13 pages, 4 figures
null
null
null
q-bio.MN
null
Recently, several studies have investigated the transcription process associated with specific genetic regulatory networks. In this work, we present a stochastic approach for analyzing the dynamics and effect of negative feedback loops (FBLs) on the transcriptional noise. First, our analysis allows us to identify a bimodal activity depending on the strength of the self-repression coupling D. In the strong coupling region D>>1, the variance of the transcriptional noise is found to be reduced by 28% more than described earlier. Secondly, the contribution of the noise effect to the abundance of the regulating protein becomes manifest when the coefficient of variation is computed. In the strong coupling region, this coefficient is found to be independent of all parameters and in fair agreement with the experimentally observed values. Finally, our analysis reveals that the regulating protein is significantly induced by the intrinsic and external noise in the strong coupling region. In short, it indicates that the existence of inherent noise in FBLs makes it possible to produce a basal amount of proteins even though the repression level D is very strong.
[ { "created": "Thu, 2 Aug 2007 13:25:21 GMT", "version": "v1" } ]
2007-08-20
[ [ "Nacher", "J. C.", "" ], [ "Ochiai", "T.", "" ] ]
Recently, several studies have investigated the transcription process associated with specific genetic regulatory networks. In this work, we present a stochastic approach for analyzing the dynamics and effect of negative feedback loops (FBLs) on the transcriptional noise. First, our analysis allows us to identify a bimodal activity depending on the strength of the self-repression coupling D. In the strong coupling region D>>1, the variance of the transcriptional noise is found to be reduced by 28% more than described earlier. Secondly, the contribution of the noise effect to the abundance of the regulating protein becomes manifest when the coefficient of variation is computed. In the strong coupling region, this coefficient is found to be independent of all parameters and in fair agreement with the experimentally observed values. Finally, our analysis reveals that the regulating protein is significantly induced by the intrinsic and external noise in the strong coupling region. In short, it indicates that the existence of inherent noise in FBLs makes it possible to produce a basal amount of proteins even though the repression level D is very strong.
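A Langevin caricature of the self-repressing gene makes the qualitative claims easy to probe numerically: production suppressed by the protein level with coupling D, linear degradation, and additive noise. The functional form and all parameter values below are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

# Langevin caricature of a self-repressing gene:
# dp/dt = b / (1 + D p) - g p + s xi(t). All values are illustrative.
rng = np.random.default_rng(2)
b, g, s = 10.0, 1.0, 0.5
dt, steps = 0.001, 200_000

def stats(D):
    p, samples = 1.0, []
    for i in range(steps):
        p += (b / (1.0 + D * p) - g * p) * dt + s * np.sqrt(dt) * rng.normal()
        p = max(p, 0.0)
        if i % 100 == 0:
            samples.append(p)
    x = np.array(samples[500:])                # discard transient
    return x.mean(), x.std() / x.mean()        # mean, coefficient of variation

for D in (0.1, 1.0, 10.0, 100.0):
    print(D, stats(D))                         # CV behaviour vs coupling
```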
1304.3160
Rori Rohlfs
Rori V. Rohlfs, Erin Murphy, Yun S. Song, Montgomery Slatkin
The influence of relatives on the efficiency and error rate of familial searching
main text: 19 pages, 4 tables, 2 figures supplemental text: 2 pages, 5 tables all together as single file
PLoS ONE 8(8): e70495 (2013)
10.1371/journal.pone.0070495
null
q-bio.GN stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the consequences of adopting the criteria used by the state of California, as described by Myers et al. (2011), for conducting familial searches. We carried out a simulation study of randomly generated profiles of related and unrelated individuals with 13-locus CODIS genotypes and YFiler Y-chromosome haplotypes, on which the Myers protocol for relative identification was carried out. For first-degree relatives who share a Y-chromosome haplotype, the Myers protocol has a high probability (80-99%) of identifying their relationship. For unrelated individuals, there is a low probability that an unrelated person in the database will be identified as a first-degree relative. For more distant Y-haplotype-sharing relatives (half-siblings, first cousins, half-first cousins or second cousins) there is a substantial probability that the more distant relative will be incorrectly identified as a first-degree relative. For example, there is a 3-18% probability that a first cousin will be identified as a full sibling, with the probability depending on the population background. Although the California familial search policy is likely to identify a first-degree relative if his profile is in the database, and it poses little risk of falsely identifying an unrelated individual in a database as a first-degree relative, there is a substantial risk of falsely identifying a more distant Y-haplotype-sharing relative in the database as a first-degree relative, with the consequence that their immediate family may become the target for further investigation. This risk falls disproportionately on those ethnic groups that are currently overrepresented in state and federal databases.
[ { "created": "Wed, 10 Apr 2013 22:53:47 GMT", "version": "v1" }, { "created": "Wed, 14 Aug 2013 22:16:03 GMT", "version": "v2" } ]
2015-06-15
[ [ "Rohlfs", "Rori V.", "" ], [ "Murphy", "Erin", "" ], [ "Song", "Yun S.", "" ], [ "Slatkin", "Montgomery", "" ] ]
We investigate the consequences of adopting the criteria used by the state of California, as described by Myers et al. (2011), for conducting familial searches. We carried out a simulation study of randomly generated profiles of related and unrelated individuals with 13-locus CODIS genotypes and YFiler Y-chromosome haplotypes, on which the Myers protocol for relative identification was carried out. For first-degree relatives who share a Y-chromosome haplotype, the Myers protocol has a high probability (80-99%) of identifying their relationship. For unrelated individuals, there is a low probability that an unrelated person in the database will be identified as a first-degree relative. For more distant Y-haplotype-sharing relatives (half-siblings, first cousins, half-first cousins or second cousins) there is a substantial probability that the more distant relative will be incorrectly identified as a first-degree relative. For example, there is a 3-18% probability that a first cousin will be identified as a full sibling, with the probability depending on the population background. Although the California familial search policy is likely to identify a first-degree relative if his profile is in the database, and it poses little risk of falsely identifying an unrelated individual in a database as a first-degree relative, there is a substantial risk of falsely identifying a more distant Y-haplotype-sharing relative in the database as a first-degree relative, with the consequence that their immediate family may become the target for further investigation. This risk falls disproportionately on those ethnic groups that are currently overrepresented in state and federal databases.
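The statistical backbone of such simulations is identity-by-descent (IBD) allele sharing: full siblings share 0, 1 or 2 alleles IBD per autosomal locus with probabilities 1/4, 1/2, 1/4, while first cousins share 0 or 1 with probabilities 3/4, 1/4. The sketch below simulates mean IBD sharing across 13 independent loci; it illustrates the principle only and omits allele frequencies, observed genotypes and the Myers protocol itself.

```python
import random

# Identity-by-descent (IBD) allele sharing per autosomal locus:
# full siblings share 0/1/2 alleles IBD with probabilities 1/4, 1/2, 1/4;
# first cousins share 0/1 with probabilities 3/4, 1/4.
rng = random.Random(0)

def ibd(relationship):
    if relationship == "sibling":
        return rng.choices([0, 1, 2], weights=[1, 2, 1])[0]
    if relationship == "cousin":
        return rng.choices([0, 1], weights=[3, 1])[0]
    return 0                                   # unrelated

def mean_ibd(relationship, loci=13, pairs=10_000):
    draws = pairs * loci
    return sum(ibd(relationship) for _ in range(draws)) / draws

for rel in ("sibling", "cousin", "unrelated"):
    print(rel, mean_ibd(rel))                  # ~1.0, ~0.25, 0.0
```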
1908.00723
Jin Jun Li
Jin Li
Universal Transforming Geometric Network
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recurrent geometric network (RGN), the first end-to-end differentiable neural architecture for protein structure prediction, is a competitive alternative to existing models. However, the RGN's use of recurrent neural networks (RNNs) as internal representations results in long training time and unstable gradients. And because of its sequential nature, it is less effective at learning global dependencies among amino acids than existing transformer architectures. We propose the Universal Transforming Geometric Network (UTGN), an end-to-end differentiable model that uses the encoder portion of the Universal Transformer architecture as an alternative for internal representations. Our experiments show that, compared to RGN, UTGN achieves a $1.7$ \si{\angstrom} improvement on the free modeling portion and a $0.7$ \si{\angstrom} improvement on the template-based modeling of the CASP12 competition.
[ { "created": "Fri, 2 Aug 2019 07:14:08 GMT", "version": "v1" } ]
2019-08-05
[ [ "Li", "Jin", "" ] ]
The recurrent geometric network (RGN), the first end-to-end differentiable neural architecture for protein structure prediction, is a competitive alternative to existing models. However, the RGN's use of recurrent neural networks (RNNs) as internal representations results in long training time and unstable gradients. And because of its sequential nature, it is less effective at learning global dependencies among amino acids than existing transformer architectures. We propose the Universal Transforming Geometric Network (UTGN), an end-to-end differentiable model that uses the encoder portion of the Universal Transformer architecture as an alternative for internal representations. Our experiments show that, compared to RGN, UTGN achieves a $1.7$ \si{\angstrom} improvement on the free modeling portion and a $0.7$ \si{\angstrom} improvement on the template-based modeling of the CASP12 competition.
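A minimal PyTorch sketch of the architectural idea, replacing a recurrent encoder with a single transformer encoder layer applied repeatedly (approximating the Universal Transformer's weight sharing across depth), is given below. All dimensions, the torsion-angle head and the tanh angle parameterization are illustrative assumptions, not the UTGN implementation.

```python
import math
import torch
import torch.nn as nn

# One transformer encoder layer applied repeatedly (shared weights across
# depth, as in the Universal Transformer), mapping per-residue features
# to three backbone torsion angles. All sizes are illustrative.
class TorsionEncoder(nn.Module):
    def __init__(self, d_in=41, d_model=128, nhead=8, depth=6):
        super().__init__()
        self.proj = nn.Linear(d_in, d_model)
        self.layer = nn.TransformerEncoderLayer(d_model, nhead,
                                                batch_first=True)
        self.depth = depth
        self.head = nn.Linear(d_model, 3)      # phi, psi, omega

    def forward(self, x):                      # x: (batch, residues, d_in)
        h = self.proj(x)
        for _ in range(self.depth):            # shared weights at each step
            h = self.layer(h)
        return math.pi * torch.tanh(self.head(h))   # angles in (-pi, pi)

angles = TorsionEncoder()(torch.randn(2, 100, 41))  # -> (2, 100, 3)
```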
1710.02431
Jonathan Karr
Arthur P. Goldberg, Bal\'azs Szigeti, Yin Hoon Chew, John A. P. Sekar, Yosef D. Roth, Jonathan R. Karr
Emerging whole-cell modeling principles and methods
10 pages, 2 figures, 7 supplementary tables
null
10.1016/j.copbio.2017.12.013
null
q-bio.QM
http://creativecommons.org/publicdomain/zero/1.0/
Whole-cell computational models aim to predict cellular phenotypes from genotype by representing the entire genome, the structure and concentration of each molecular species, each molecular interaction, and the extracellular environment. Whole-cell models have great potential to transform bioscience, bioengineering, and medicine. However, numerous challenges remain to achieve whole-cell models. Nevertheless, researchers are beginning to leverage recent progress in measurement technology, bioinformatics, data sharing, rule-based modeling, and multi-algorithmic simulation to build the first whole-cell models. We anticipate that ongoing efforts to develop scalable whole-cell modeling tools will enable dramatically more comprehensive and more accurate models, including models of human cells.
[ { "created": "Fri, 6 Oct 2017 14:41:05 GMT", "version": "v1" }, { "created": "Fri, 8 Dec 2017 17:37:24 GMT", "version": "v2" } ]
2019-09-05
[ [ "Goldberg", "Arthur P.", "" ], [ "Szigeti", "Balázs", "" ], [ "Chew", "Yin Hoon", "" ], [ "Sekar", "John A. P.", "" ], [ "Roth", "Yosef D.", "" ], [ "Karr", "Jonathan R.", "" ] ]
Whole-cell computational models aim to predict cellular phenotypes from genotype by representing the entire genome, the structure and concentration of each molecular species, each molecular interaction, and the extracellular environment. Whole-cell models have great potential to transform bioscience, bioengineering, and medicine. However, numerous challenges remain to achieve whole-cell models. Nevertheless, researchers are beginning to leverage recent progress in measurement technology, bioinformatics, data sharing, rule-based modeling, and multi-algorithmic simulation to build the first whole-cell models. We anticipate that ongoing efforts to develop scalable whole-cell modeling tools will enable dramatically more comprehensive and more accurate models, including models of human cells.
2306.14200
Hon-Cheong So
Hon-Cheong So, Xiao Xue, Pak-Chung Sham
SumVg: Total heritability explained by all variants in genome-wide association studies based on summary statistics with standard error estimates
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-nd/4.0/
Genome-wide association studies (GWAS) are commonly employed to study the genetic basis of complex traits and diseases, and a key question is how much heritability could be explained by all variants in GWAS. One widely used approach that relies on summary statistics only is LD score regression (LDSC); however, the approach requires certain assumptions on the SNP effects (all SNPs contribute to heritability and each SNP contributes equal variance). More flexible modeling methods may be useful. We previously developed an empirical Bayes approach that recovers the true z-statistics from a set of observed z-statistics, using only summary statistics. However, methods for standard error (SE) estimation are not available yet, limiting the interpretation of results and applicability of the approach. In this study we developed several resampling-based approaches to estimate the SE of SNP-based heritability, including two jackknife and three parametric bootstrap methods. Simulations showed that delete-d-jackknife and parametric bootstrap approaches provide good estimates of the SE. Particularly, the parametric bootstrap approaches yield the lowest root-mean-squared-error (RMSE) of the true SE. In addition, we applied our method to estimate SNP-based heritability of 12 immune-related traits (levels of cytokines and growth factors) to shed light on their genetic architecture. We also implemented the methods to compute the sum of heritability explained and the corresponding SE in an R package SumVg, available at https://github.com/lab-hcso/Estimating-SE-of-total-heritability/ . In conclusion, SumVg may provide a useful alternative tool for SNP heritability and SE estimates, which does not rely on distributional assumptions of SNP effects.
[ { "created": "Sun, 25 Jun 2023 10:32:56 GMT", "version": "v1" } ]
2023-06-27
[ [ "So", "Hon-Cheong", "" ], [ "Xue", "Xiao", "" ], [ "Sham", "Pak-Chung", "" ] ]
Genome-wide association studies (GWAS) are commonly employed to study the genetic basis of complex traits and diseases, and a key question is how much heritability could be explained by all variants in GWAS. One widely used approach that relies on summary statistics only is LD score regression (LDSC); however, the approach requires certain assumptions on the SNP effects (all SNPs contribute to heritability and each SNP contributes equal variance). More flexible modeling methods may be useful. We previously developed an empirical Bayes approach that recovers the true z-statistics from a set of observed z-statistics, using only summary statistics. However, methods for standard error (SE) estimation are not available yet, limiting the interpretation of results and applicability of the approach. In this study we developed several resampling-based approaches to estimate the SE of SNP-based heritability, including two jackknife and three parametric bootstrap methods. Simulations showed that delete-d-jackknife and parametric bootstrap approaches provide good estimates of the SE. Particularly, the parametric bootstrap approaches yield the lowest root-mean-squared-error (RMSE) of the true SE. In addition, we applied our method to estimate SNP-based heritability of 12 immune-related traits (levels of cytokines and growth factors) to shed light on their genetic architecture. We also implemented the methods to compute the sum of heritability explained and the corresponding SE in an R package SumVg, available at https://github.com/lab-hcso/Estimating-SE-of-total-heritability/ . In conclusion, SumVg may provide a useful alternative tool for SNP heritability and SE estimates, which does not rely on distributional assumptions of SNP effects.
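The delete-d jackknife is straightforward to sketch: recompute the statistic on random subsets with d observations removed and rescale the spread of the replicates by (n - d) / d. The stand-in estimator below is a placeholder, not SumVg's heritability computation.

```python
import numpy as np

# Delete-d jackknife SE: recompute the statistic on random subsets with d
# observations removed, then rescale the replicate spread by (n - d) / d.
def delete_d_jackknife_se(z, estimator, d, n_rep=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(z)
    reps = np.array([estimator(z[rng.choice(n, size=n - d, replace=False)])
                     for _ in range(n_rep)])
    var = (n - d) / d * np.mean((reps - reps.mean()) ** 2)
    return np.sqrt(var)

# Stand-in statistic (not SumVg's heritability computation): average
# excess of squared z-statistics over their null expectation of 1.
z = np.random.default_rng(1).normal(0.0, 1.2, 10_000)
excess = lambda v: np.mean(v**2 - 1.0)
print(delete_d_jackknife_se(z, excess, d=1_000))
```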
1609.00900
Ta\c{s}k{\i}n Deniz
Taskin Deniz, Stefan Rotter
Joint Statistics of Strongly Correlated Neurons via Dimensional Reduction
40 pages, 11 figures. Submitted to IOP-Journal of Physics A
null
10.1088/1751-8121/aa677e
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.
[ { "created": "Sun, 4 Sep 2016 08:12:11 GMT", "version": "v1" } ]
2017-06-28
[ [ "Deniz", "Taskin", "" ], [ "Rotter", "Stefan", "" ] ]
The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.
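The underlying setup is easy to simulate directly: two leaky integrate-and-fire neurons receiving a common noise source with shared fraction c. The sketch below uses illustrative parameters and a crude spike-count correlation readout, whereas the paper derives the full cross-correlation functions non-perturbatively.

```python
import numpy as np

# Two leaky integrate-and-fire neurons driven by private plus shared white
# noise; the shared fraction c controls output correlation.
rng = np.random.default_rng(3)
tau, v_th, v_reset = 20.0, 1.0, 0.0            # ms, dimensionless voltage
mu, sigma, c = 0.8, 0.5, 0.6                   # drift, noise, shared part
dt, steps = 0.1, 500_000                       # 50 s of simulated time

v = np.zeros(2)
spikes = [[], []]
for t in range(steps):
    shared = rng.normal()
    for i in range(2):
        xi = np.sqrt(c) * shared + np.sqrt(1.0 - c) * rng.normal()
        v[i] += dt / tau * (mu - v[i]) + sigma * np.sqrt(dt / tau) * xi
        if v[i] >= v_th:                       # threshold crossing: spike
            v[i] = v_reset
            spikes[i].append(t * dt)

edges = np.arange(0.0, steps * dt, 50.0)       # 50 ms counting bins
n1, _ = np.histogram(spikes[0], edges)
n2, _ = np.histogram(spikes[1], edges)
print("spike-count correlation:", np.corrcoef(n1, n2)[0, 1])
```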
1805.08626
Stefano De Blasi
Stefano De Blasi
Simulation of Large Scale Neural Networks for Evaluation Applications
Poster 2018, Prague May 10
null
null
null
q-bio.NC eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the complexity of biological neural networks like the human brain is one of the scientific challenges of our century. The organization of the brain can be described at different levels, ranging from small neural networks to entire brain regions. Existing methods for the description of functional or effective connectivity are based on the analysis of relations between the activities of different neural units by detecting correlations or information flow. This is a crucial step in understanding neural disorders like Alzheimer's disease and their causative factors. To evaluate these estimation methods, it is necessary to refer to a neural network with known connectivity, which is typically unknown for natural biological neural networks. Therefore, in silico network simulations are used. In this work, the in silico simulation of large scale neural networks is established and the influence of different topologies on the generated patterns of neuronal signals is investigated. The goal is to develop standard evaluation methods for neurocomputational algorithms with a realistic large scale model to enable benchmarking and comparability of different studies.
[ { "created": "Sun, 20 May 2018 15:03:04 GMT", "version": "v1" } ]
2018-05-23
[ [ "De Blasi", "Stefano", "" ] ]
Understanding the complexity of biological neural networks like the human brain is one of the scientific challenges of our century. The organization of the brain can be described at different levels, ranging from small neural networks to entire brain regions. Existing methods for the description of functional or effective connectivity are based on the analysis of relations between the activities of different neural units by detecting correlations or information flow. This is a crucial step in understanding neural disorders like Alzheimer's disease and their causative factors. To evaluate these estimation methods, it is necessary to refer to a neural network with known connectivity, which is typically unknown for natural biological neural networks. Therefore, in silico network simulations are used. In this work, the in silico simulation of large scale neural networks is established and the influence of different topologies on the generated patterns of neuronal signals is investigated. The goal is to develop standard evaluation methods for neurocomputational algorithms with a realistic large scale model to enable benchmarking and comparability of different studies.
2110.07117
Richard Reeve
Sonia Natalie Mitchell, Andrew Lahiff, Nathan Cummings, Jonathan Hollocombe, Bram Boskamp, Ryan Field, Dennis Reddyhoff, Kristian Zarebski, Antony Wilson, Bruno Viola, Martin Burke, Blair Archibald, Paul Bessell, Richard Blackwell, Lisa A Boden, Alys Brett, Sam Brett, Ruth Dundas, Jessica Enright, Alejandra N. Gonzalez-Beltran, Claire Harris, Ian Hinder, Christopher David Hughes, Martin Knight, Vino Mano, Ciaran McMonagle, Dominic Mellor, Sibylle Mohr, Glenn Marion, Louise Matthews, Iain J. McKendrick, Christopher Mark Pooley, Thibaud Porphyre, Aaron Reeves, Edward Townsend, Robert Turner, Jeremy Walton, Richard Reeve
FAIR Data Pipeline: provenance-driven data management for traceable scientific workflows
null
null
10.1098/rsta.2021.0300
null
q-bio.QM cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern epidemiological analyses to understand and combat the spread of disease depend critically on access to, and use of, data. Rapidly evolving data, such as data streams changing during a disease outbreak, are particularly challenging. Data management is further complicated by data being imprecisely identified when used. Public trust in policy decisions resulting from such analyses is easily damaged and is often low, with cynicism arising where claims of "following the science" are made without accompanying evidence. Tracing the provenance of such decisions back through open software to primary data would clarify this evidence, enhancing the transparency of the decision-making process. Here, we demonstrate a Findable, Accessible, Interoperable and Reusable (FAIR) data pipeline developed during the COVID-19 pandemic that allows easy annotation of data as they are consumed by analyses, while tracing the provenance of scientific outputs back through the analytical source code to data sources. Such a tool provides a mechanism for the public, and fellow scientists, to better assess the trust that should be placed in scientific evidence, while allowing scientists to support policy-makers in openly justifying their decisions. We believe that tools such as this should be promoted for use across all areas of policy-facing research.
[ { "created": "Thu, 14 Oct 2021 02:05:16 GMT", "version": "v1" }, { "created": "Wed, 4 May 2022 22:44:17 GMT", "version": "v2" } ]
2022-10-12
[ [ "Mitchell", "Sonia Natalie", "" ], [ "Lahiff", "Andrew", "" ], [ "Cummings", "Nathan", "" ], [ "Hollocombe", "Jonathan", "" ], [ "Boskamp", "Bram", "" ], [ "Field", "Ryan", "" ], [ "Reddyhoff", "Dennis", ""...
Modern epidemiological analyses to understand and combat the spread of disease depend critically on access to, and use of, data. Rapidly evolving data, such as data streams changing during a disease outbreak, are particularly challenging. Data management is further complicated by data being imprecisely identified when used. Public trust in policy decisions resulting from such analyses is easily damaged and is often low, with cynicism arising where claims of "following the science" are made without accompanying evidence. Tracing the provenance of such decisions back through open software to primary data would clarify this evidence, enhancing the transparency of the decision-making process. Here, we demonstrate a Findable, Accessible, Interoperable and Reusable (FAIR) data pipeline developed during the COVID-19 pandemic that allows easy annotation of data as they are consumed by analyses, while tracing the provenance of scientific outputs back through the analytical source code to data sources. Such a tool provides a mechanism for the public, and fellow scientists, to better assess the trust that should be placed in scientific evidence, while allowing scientists to support policy-makers in openly justifying their decisions. We believe that tools such as this should be promoted for use across all areas of policy-facing research.
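The core mechanics of such provenance tracing can be sketched as content-addressed registration: every data product an analysis reads or writes is recorded with a cryptographic hash tied to a run identifier. The record schema below is a hypothetical illustration, not the pipeline's actual API, and assumes the named files exist.

```python
import datetime
import hashlib
import json

# Content-addressed provenance record: every data product an analysis
# reads or writes is registered with its SHA-256 hash and a run id.
# This schema is hypothetical; the file names are assumed to exist.
def register(path, role, run_id):
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "run": run_id,
        "role": role,                          # "input" or "output"
        "path": path,
        "sha256": digest,
        "registered": datetime.datetime.utcnow().isoformat() + "Z",
    }

run = "example-epi-analysis-001"
records = [register("cases.csv", "input", run),
           register("model_output.csv", "output", run)]
print(json.dumps(records, indent=2))
```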
0707.1503
Ricardo V\^encio
Ricardo V\^encio and Ilya Shmulevich
ProbCD: enrichment analysis accounting for categorization uncertainty
16 pages, 3 figures, submitted to a journal in the Bioinformatics field
BMC Bioinformatics 2007, 8:383
10.1186/1471-2105-8-383
null
q-bio.QM q-bio.GN
null
As in many other areas of science, systems biology makes extensive use of statistical association and significance estimates in contingency tables, a type of categorical data analysis known in this field as enrichment (also over-representation or enhancement) analysis. In spite of efforts to create probabilistic annotations, especially in the Gene Ontology context, or to deal with uncertainty in high throughput-based datasets, current enrichment methods largely ignore this probabilistic information since they are mainly based on variants of the Fisher Exact Test. We developed an open-source R package to deal with probabilistic categorical data analysis, ProbCD, that does not require a static contingency table. The contingency table for the enrichment problem is built using the expectation of a Bernoulli Scheme stochastic process given the categorization probabilities. An on-line interface was created to allow usage by non-programmers and is available at: http://xerad.systemsbiology.net/ProbCD/ . We present an analysis framework and software tools to address the issue of uncertainty in categorical data analysis. In particular, concerning the enrichment analysis, ProbCD can accommodate: (i) the stochastic nature of the high-throughput experimental techniques and (ii) probabilistic gene annotation.
[ { "created": "Tue, 10 Jul 2007 17:42:34 GMT", "version": "v1" } ]
2011-11-10
[ [ "Vêncio", "Ricardo", "" ], [ "Shmulevich", "Ilya", "" ] ]
As in many other areas of science, systems biology makes extensive use of statistical association and significance estimates in contingency tables, a type of categorical data analysis known in this field as enrichment (also over-representation or enhancement) analysis. In spite of efforts to create probabilistic annotations, especially in the Gene Ontology context, or to deal with uncertainty in high throughput-based datasets, current enrichment methods largely ignore this probabilistic information since they are mainly based on variants of the Fisher Exact Test. We developed an open-source R package to deal with probabilistic categorical data analysis, ProbCD, that does not require a static contingency table. The contingency table for the enrichment problem is built using the expectation of a Bernoulli Scheme stochastic process given the categorization probabilities. An on-line interface was created to allow usage by non-programmers and is available at: http://xerad.systemsbiology.net/ProbCD/ . We present an analysis framework and software tools to address the issue of uncertainty in categorical data analysis. In particular, concerning the enrichment analysis, ProbCD can accommodate: (i) the stochastic nature of the high-throughput experimental techniques and (ii) probabilistic gene annotation.
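The key move, replacing hard category membership with membership probabilities, yields an expected contingency table whose cells are sums of probability products under independent Bernoulli draws. The sketch below uses made-up probabilities for five genes; ProbCD itself is an R package and differs in detail.

```python
import numpy as np

# Expected 2x2 enrichment table when both "differentially expressed" (DE)
# status and term annotation are probabilistic: under independent
# Bernoulli draws, expected cell counts are sums of probability products.
# The five genes' probabilities below are made up for illustration.
p_de  = np.array([0.90, 0.80, 0.10, 0.60, 0.05])  # P(gene is DE)
p_ann = np.array([0.95, 0.20, 0.90, 0.70, 0.10])  # P(gene is annotated)

expected = np.array([
    [np.sum(p_de * p_ann),       np.sum(p_de * (1 - p_ann))],
    [np.sum((1 - p_de) * p_ann), np.sum((1 - p_de) * (1 - p_ann))],
])
print(expected)   # rows: DE yes/no; columns: annotated yes/no
# Association is then summarized on this expected table rather than by a
# Fisher exact test on hard 0/1 counts.
```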
2201.11164
Julian Gendreau
Garrett Garner, Daniel Streetman, Joshua Fricker, Neal Patel, Nolan Brown, Shane Shahrestani, Julian Gendreau
Focal cortical dysplasia as a cause of epilepsy: the current evidence of associated genes and future therapeutic treatments
null
null
null
null
q-bio.NC q-bio.GN q-bio.SC
http://creativecommons.org/licenses/by/4.0/
Focal cortical dysplasias (FCDs) are the most common cause of treatment-resistant epilepsy affecting the pediatric population. Most individuals with FCD have seizure onset during the first five years of life and the majority will have seizures by the age of sixteen. Many cases of FCD are postulated to be the result of abnormal brain development in utero by germline or somatic gene mutations regulating neuronal growth and migration during corticogenesis. Other cases of FCD are thought to be related to infections during brain development, or to other causes that remain to be fully determined. Typical anti-seizure medications are often ineffective in FCD, and surgery often cannot be performed successfully due to the involvement of eloquent areas of the brain or insufficient resection of the epileptogenic focus, posing a challenge for physicians. The genetic nature of FCD provides an avenue for drug development, with several genetic and molecular targets having undergone study over the last two decades.
[ { "created": "Wed, 26 Jan 2022 20:04:09 GMT", "version": "v1" } ]
2022-01-28
[ [ "Garner", "Garrett", "" ], [ "Streetman", "Daniel", "" ], [ "Fricker", "Joshua", "" ], [ "Patel", "Neal", "" ], [ "Brown", "Nolan", "" ], [ "Shahrestani", "Shane", "" ], [ "Gendreau", "Julian", "" ] ]
Focal cortical dysplasias (FCDs) are the most common cause of treatment-resistant epilepsy affecting the pediatric population. Most individuals with FCD have seizure onset during the first five years of life and the majority will have seizures by the age of sixteen. Many cases of FCD are postulated to be the result of abnormal brain development in utero by germline or somatic gene mutations regulating neuronal growth and migration during corticogenesis. Other cases of FCD are thought to be related to infections during brain development, or to other causes that remain to be fully determined. Typical anti-seizure medications are often ineffective in FCD, and surgery often cannot be performed successfully due to the involvement of eloquent areas of the brain or insufficient resection of the epileptogenic focus, posing a challenge for physicians. The genetic nature of FCD provides an avenue for drug development, with several genetic and molecular targets having undergone study over the last two decades.
1808.08820
Jianqiang Lin
J. Lin, S. Guha, S. Ramanathan
Vanadium dioxide circuits emulate neurological disorders
25 pages, 14 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information in the central nervous system (CNS) is conducted via electrical signals known as action potentials and is encoded in time. Several neurological disorders, including depression and attention deficit hyperactivity disorder (ADHD), originate in faulty brain signaling frequencies. Here, we present a Hodgkin-Huxley model analog for a strongly correlated VO2 artificial neuron system that undergoes an electrically-driven insulator-metal transition. We demonstrate that tuning of the insulating phase resistance in VO2 threshold switch circuits can enable direct mimicry of neuronal origins of disorders in the central nervous system. The results introduce the use of circuits based on quantum materials as complementary to animal model studies in neuroscience, especially when precise measurement of local electrical properties, or of competing parallel conduction paths in complex neural circuits, makes it challenging to identify the onset of breakdown or to diagnose early symptoms of disease.
[ { "created": "Mon, 27 Aug 2018 12:40:31 GMT", "version": "v1" } ]
2018-08-28
[ [ "Lin", "J.", "" ], [ "Guha", "S.", "" ], [ "Ramanathan", "S.", "" ] ]
Information in the central nervous system (CNS) is conducted via electrical signals known as action potentials and is encoded in time. Several neurological disorders, including depression and attention deficit hyperactivity disorder (ADHD), originate in faulty brain signaling frequencies. Here, we present a Hodgkin-Huxley model analog for a strongly correlated VO2 artificial neuron system that undergoes an electrically-driven insulator-metal transition. We demonstrate that tuning of the insulating phase resistance in VO2 threshold switch circuits can enable direct mimicry of neuronal origins of disorders in the central nervous system. The results introduce the use of circuits based on quantum materials as complementary to animal model studies in neuroscience, especially when precise measurement of local electrical properties, or of competing parallel conduction paths in complex neural circuits, makes it challenging to identify the onset of breakdown or to diagnose early symptoms of disease.
1305.6202
Mark Robinson
Mark D. Robinson
Agreeing to disagree, some ironies, disappointing scientific practice and a call for better: reply to <<The poor performance of TMM on microRNA-Seq>>
5 pages, 3 Supplemental PDFs
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This letter is a response to a Divergent Views article entitled <<The poor performance of TMM on microRNA-Seq>> (Garmire and Subramaniam 2013), which was itself a response to our Divergent Views article entitled <<miRNA-seq normalization comparisons need improvement>> (Zhou et al. 2013). Using reproducible code examples, we showed that they incorrectly used our normalization method and highlighted additional concerns with their study. Here, I wish to debunk several untrue or misleading statements made by the authors (hereafter referred to as GS) in their response. Unlike GS's, my claims are supported by R code, citations and email correspondence. I finish by making a call for better practice.
[ { "created": "Mon, 27 May 2013 13:00:07 GMT", "version": "v1" }, { "created": "Mon, 10 Jun 2013 07:29:38 GMT", "version": "v2" } ]
2013-06-11
[ [ "Robinson", "Mark D.", "" ] ]
This letter is a response to a Divergent Views article entitled <<The poor performance of TMM on microRNA-Seq>> (Garmire and Subramaniam 2013), which was itself a response to our Divergent Views article entitled <<miRNA-seq normalization comparisons need improvement>> (Zhou et al. 2013). Using reproducible code examples, we showed that they incorrectly used our normalization method and highlighted additional concerns with their study. Here, I wish to debunk several untrue or misleading statements made by the authors (hereafter referred to as GS) in their response. Unlike GS's, my claims are supported by R code, citations and email correspondence. I finish by making a call for better practice.
2301.08559
Linn\'ea Gyllingberg
Linn\'ea Gyllingberg, Abeba Birhane, and David J.T. Sumpter
The Lost Art of Mathematical Modelling
null
null
null
null
q-bio.OT cs.LG nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide a critique of mathematical biology in light of rapid developments in modern machine learning. We argue that out of the three modelling activities -- (1) formulating models; (2) analysing models; and (3) fitting or comparing models to data -- inherent to mathematical biology, researchers currently focus too much on activity (2) at the cost of (1). This trend, we propose, can be reversed by realising that any given biological phenomenon can be modelled in an infinite number of different ways, through the adoption of an open/pluralistic approach. We explain the open approach using fish locomotion as a case study and illustrate some of the pitfalls -- universalism, creating models of models, etc. -- that hinder mathematical biology. We then ask how we might rediscover a lost art: that of creative mathematical modelling. This article is dedicated to the memory of Edmund Crampin.
[ { "created": "Thu, 19 Jan 2023 13:16:31 GMT", "version": "v1" }, { "created": "Fri, 2 Jun 2023 09:03:19 GMT", "version": "v2" } ]
2023-06-05
[ [ "Gyllingberg", "Linnéa", "" ], [ "Birhane", "Abeba", "" ], [ "Sumpter", "David J. T.", "" ] ]
We provide a critique of mathematical biology in light of rapid developments in modern machine learning. We argue that out of the three modelling activities -- (1) formulating models; (2) analysing models; and (3) fitting or comparing models to data -- inherent to mathematical biology, researchers currently focus too much on activity (2) at the cost of (1). This trend, we propose, can be reversed by realising that any given biological phenomenon can be modelled in an infinite number of different ways, through the adoption of an open/pluralistic approach. We explain the open approach using fish locomotion as a case study and illustrate some of the pitfalls -- universalism, creating models of models, etc. -- that hinder mathematical biology. We then ask how we might rediscover a lost art: that of creative mathematical modelling. This article is dedicated to the memory of Edmund Crampin.
1304.7782
Geir Halnes
Geir Halnes, Ivar {\O}stby, Klas H. Pettersen, Stig W. Omholt, Gaute T. Einevoll
Electrodiffusive model for astrocytic and neuronal ion concentration dynamics
19 pages, 5 figures, 1 table (Equations 37 & 38 and the two first equations in Figure 2 were corrected May 30th 2013)
null
10.1371/journal.pcbi.1003386
null
q-bio.CB q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electrical neural signalling typically takes place at the time-scale of milliseconds, and is typically modeled using the cable equation. This is a good approximation for processes when ionic concentrations vary little during the time course of a simulation. During periods of intense neural signalling, however, the local extracellular K+ concentration may increase by several millimolars. Clearance of excess K+ likely depends partly on diffusion in the extracellular space, partly on local uptake by, and intracellular transport within, astrocytes. This process takes place at the time scale of seconds, and cannot be modeled accurately without accounting for the spatiotemporal variations in ion concentrations. The work presented here consists of two main parts: First, we developed a general electrodiffusive formalism for modeling ion concentration dynamics in a one-dimensional geometry, including both an intra- and extracellular domain. The formalism was based on the Nernst-Planck equations. It ensures (i) consistency between the membrane potential and ion concentrations, (ii) global particle/charge conservation, and (iii) accounts for diffusion and concentration dependent variations in resistivities. Second, we applied the formalism to model how astrocytes exchange ions with the ECS, and identified the key astrocytic mechanisms involved in K+ removal from high concentration regions. We found that a local increase in extracellular K+ evoked a local depolarization of the astrocyte membrane, which at the same time (i) increased the local astrocytic uptake of K+, (ii) suppressed extracellular transport of K+, (iii) increased transport of K+ within astrocytes, and (iv) facilitated astrocytic release of K+ in extracellular low concentration regions. In summary, these mechanisms seem optimal for shielding the extracellular space from excess K+.
[ { "created": "Mon, 29 Apr 2013 20:02:20 GMT", "version": "v1" }, { "created": "Thu, 30 May 2013 12:56:08 GMT", "version": "v2" } ]
2014-03-05
[ [ "Halnes", "Geir", "" ], [ "Østby", "Ivar", "" ], [ "Pettersen", "Klas H.", "" ], [ "Omholt", "Stig W.", "" ], [ "Einevoll", "Gaute T.", "" ] ]
Electrical neural signalling typically takes place at the time-scale of milliseconds, and is typically modeled using the cable equation. This is a good approximation for processes when ionic concentrations vary little during the time course of a simulation. During periods of intense neural signalling, however, the local extracellular K+ concentration may increase by several millimolars. Clearance of excess K+ likely depends partly on diffusion in the extracellular space, partly on local uptake by, and intracellular transport within, astrocytes. This process takes place at the time scale of seconds, and cannot be modeled accurately without accounting for the spatiotemporal variations in ion concentrations. The work presented here consists of two main parts: First, we developed a general electrodiffusive formalism for modeling ion concentration dynamics in a one-dimensional geometry, including both an intra- and extracellular domain. The formalism was based on the Nernst-Planck equations. It ensures (i) consistency between the membrane potential and ion concentrations, (ii) global particle/charge conservation, and (iii) accounts for diffusion and concentration dependent variations in resistivities. Second, we applied the formalism to model how astrocytes exchange ions with the ECS, and identified the key astrocytic mechanisms involved in K+ removal from high concentration regions. We found that a local increase in extracellular K+ evoked a local depolarization of the astrocyte membrane, which at the same time (i) increased the local astrocytic uptake of K+, (ii) suppressed extracellular transport of K+, (iii) increased transport of K+ within astrocytes, and (iv) facilitated astrocytic release of K+ in extracellular low concentration regions. In summary, these mechanisms seem optimal for shielding the extracellular space from excess K+.
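For reference, the Nernst-Planck flux that such a formalism builds on combines Fickian diffusion with electrical drift. The one-dimensional form below is standard, though the paper's full scheme adds membrane fluxes and the intra-/extracellular coupling.

```latex
% One-dimensional Nernst-Planck flux for ion species k: Fickian diffusion
% plus electrical drift, with psi = RT/F the thermal voltage.
\begin{equation}
j_k \;=\; -D_k \frac{\partial c_k}{\partial x}
       \;-\; \frac{D_k z_k c_k}{\psi}\,\frac{\partial v}{\partial x},
\qquad \psi = \frac{RT}{F},
\end{equation}
% where D_k is the diffusion constant, z_k the valence, c_k the
% concentration and v the potential; conservation then gives
% \partial c_k / \partial t = -\partial j_k / \partial x + source terms.
```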
1212.1200
John Rhodes
Elizabeth S. Allman, John A. Rhodes, Amelia Taylor
A semialgebraic description of the general Markov model on phylogenetic trees
29 pages, 0 figures; Mittag-Leffler Institute, Spring 2011
null
null
null
q-bio.PE math.AG math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many of the stochastic models used in inference of phylogenetic trees from biological sequence data have polynomial parameterization maps. The image of such a map --- the collection of joint distributions for a model --- forms the model space. Since the parameterization is polynomial, the Zariski closure of the model space is an algebraic variety which is typically much larger than the model space, but has been usefully studied with algebraic methods. Of ultimate interest, however, is not the full variety, but only the model space. Here we develop complete semialgebraic descriptions of the model space arising from the k-state general Markov model on a tree, with slightly restricted parameters. Our approach depends upon both recently formulated analogs of Cayley's hyperdeterminant, and the construction of certain quadratic forms from the joint distribution whose positive (semi-)definiteness encodes information about parameter values. We additionally investigate the use of Sturm sequences for obtaining similar results.
[ { "created": "Wed, 5 Dec 2012 23:03:29 GMT", "version": "v1" } ]
2012-12-07
[ [ "Allman", "Elizabeth S.", "" ], [ "Rhodes", "John A.", "" ], [ "Taylor", "Amelia", "" ] ]
Many of the stochastic models used in inference of phylogenetic trees from biological sequence data have polynomial parameterization maps. The image of such a map --- the collection of joint distributions for a model --- forms the model space. Since the parameterization is polynomial, the Zariski closure of the model space is an algebraic variety which is typically much larger than the model space, but has been usefully studied with algebraic methods. Of ultimate interest, however, is not the full variety, but only the model space. Here we develop complete semialgebraic descriptions of the model space arising from the k-state general Markov model on a tree, with slightly restricted parameters. Our approach depends upon both recently formulated analogs of Cayley's hyperdeterminant, and the construction of certain quadratic forms from the joint distribution whose positive (semi-)definiteness encodes information about parameter values. We additionally investigate the use of Sturm sequences for obtaining similar results.
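The Sturm-sequence technique mentioned at the end can be illustrated on a toy polynomial: the number of real roots in an interval equals the difference in sign changes of the Sturm chain evaluated at the endpoints. The polynomial below is invented; the paper applies the same sign-counting principle to polynomial conditions on model parameters.

```python
from sympy import Symbol, sturm, sign

# Toy Sturm-sequence root count: the number of real roots of p in (a, b]
# equals V(a) - V(b), where V(t) counts sign changes of the Sturm chain
# evaluated at t. The polynomial here is invented for this illustration.
x = Symbol('x')
p = x**3 - 3*x + 1
chain = sturm(p)

def sign_changes(values):
    signs = [sign(v) for v in values if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

a, b = -2, 2
V_a = sign_changes([q.subs(x, a) for q in chain])
V_b = sign_changes([q.subs(x, b) for q in chain])
print("real roots in (-2, 2]:", V_a - V_b)   # prints 3
```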
1610.07278
Oscar Garc\'ia
Oscar Garc\'ia
Cohort aggregation modelling for complex forest stands: Spruce-aspen mixtures in British Columbia
Accepted manuscript, to appear in Ecological Modelling
Ecological Modelling 343: 109-122, 2017
10.1016/j.ecolmodel.2016.10.020
null
q-bio.QM q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mixed-species growth models are needed as a synthesis of ecological knowledge and for guiding forest management. Individual-tree models have been commonly used, but the difficulties of reliably scaling from the individual to the stand level are often underestimated. Emergent properties and statistical issues limit their effectiveness. A more holistic modelling of aggregates at the whole-stand level is a potentially attractive alternative. This work explores methodology for developing biologically consistent dynamic mixture models where the state is described by aggregate stand-level variables for species or age/size cohorts. The methods are demonstrated and tested with a two-cohort model for spruce-aspen mixtures named SAM. The models combine single-species submodels and submodels for resource partitioning among the cohorts. The partitioning allows for differences in competitive strength among species and size classes, and for complementarity effects. Height growth reduction in suppressed cohorts is also modelled. SAM fits the available data well, and exhibits behaviors consistent with current ecological knowledge. The general framework can be applied to any number of cohorts, and should be useful as a basis for modelling other mixed-species or uneven-aged stands.
[ { "created": "Mon, 24 Oct 2016 04:43:40 GMT", "version": "v1" }, { "created": "Tue, 25 Oct 2016 03:03:37 GMT", "version": "v2" } ]
2016-11-08
[ [ "García", "Oscar", "" ] ]
Mixed-species growth models are needed as a synthesis of ecological knowledge and for guiding forest management. Individual-tree models have been commonly used, but the difficulties of reliably scaling from the individual to the stand level are often underestimated. Emergent properties and statistical issues limit their effectiveness. A more holistic modelling of aggregates at the whole-stand level is a potentially attractive alternative. This work explores methodology for developing biologically consistent dynamic mixture models where the state is described by aggregate stand-level variables for species or age/size cohorts. The methods are demonstrated and tested with a two-cohort model for spruce-aspen mixtures named SAM. The models combine single-species submodels and submodels for resource partitioning among the cohorts. The partitioning allows for differences in competitive strength among species and size classes, and for complementarity effects. Height growth reduction in suppressed cohorts is also modelled. SAM fits the available data well, and exhibits behaviors consistent with current ecological knowledge. The general framework can be applied to any number of cohorts, and should be useful as a basis for modelling other mixed-species or uneven-aged stands.
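A toy sketch of the cohort-aggregation idea: two cohorts share a stand-level resource, with shares weighted by a competitive-strength parameter. Everything here (rates, weights, functional form) is invented for illustration and is far simpler than the SAM model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-cohort growth with resource partitioning: each cohort's growth
# is scaled by its share of a common resource, weighted by a hypothetical
# competitive strength, under a shared stand-level crowding limit.
r = np.array([0.08, 0.05])   # invented intrinsic growth rates
w = np.array([1.0, 1.6])     # invented competitive strengths
K = 300.0                    # invented stand-level carrying capacity

def rhs(t, B):
    share = w * B / (w @ B)          # resource shares, summing to one
    crowding = 1.0 - B.sum() / K     # shared stand-level limitation
    return r * B * len(B) * share * crowding

sol = solve_ivp(rhs, (0.0, 200.0), [10.0, 10.0], t_eval=[0, 50, 100, 200])
for t, B in zip(sol.t, sol.y.T):
    print(f"t={t:5.0f}  cohort biomass = {np.round(B, 1)}")
```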
1310.0736
Liane Gabora
Liane Gabora
Toward a Theory of Creative Inklings
arXiv admin note: substantial text overlap with arXiv:1309.7414
In R. Ascott (Ed.), Art, technology, consciousness (pp. 159-164). Bristol UK: Intellect Press (2000)
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is perhaps not so baffling that we have the ability to develop, refine, and manifest a creative idea, once it has been conceived. But what sort of a system could spawn the initial seed of creativity from which an idea grows? This paper looks at how the mind is structured in such a way that we can experience a glimmer of insight or inkling of artistic inspiration.
[ { "created": "Sat, 28 Sep 2013 03:27:25 GMT", "version": "v1" }, { "created": "Fri, 5 Jul 2019 20:26:52 GMT", "version": "v2" } ]
2019-07-09
[ [ "Gabora", "Liane", "" ] ]
It is perhaps not so baffling that we have the ability to develop, refine, and manifest a creative idea, once it has been conceived. But what sort of a system could spawn the initial seed of creativity from which an idea grows? This paper looks at how the mind is structured in such a way that we can experience a glimmer of insight or inkling of artistic inspiration.
2209.11953
Baibhab Chatterjee
Baibhab Chatterjee, K Gaurav Kumar, Shulan Xiao, Gourab Barik, Krishna Jayant and Shreyas Sen
TD-BPQBC: A 1.8{\mu}W 5.5mm3 ADC-less Neural Implant SoC utilizing 13.2pJ/Sample Time-domain Bi-phasic Quasi-static Brain Communication
4 pages, 6 figures, presented in ESSCIRC 2022 conference
null
null
null
q-bio.NC eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Untethered miniaturized wireless neural sensor nodes with data transmission and energy harvesting capabilities call for circuit and system-level innovations to enable ultra-low energy deep implants for brain-machine interfaces. Realizing that the energy and size constraints of a neural implant motivate highly asymmetric system design (a small, low-power sensor and transmitter at the implant, with a relatively higher power receiver at a body-worn hub), we present Time-Domain Bi-Phasic Quasi-static Brain Communication (TD-BPQBC), offloading the burden of analog to digital conversion (ADC) and digital signal processing (DSP) to the receiver. The input analog signal is converted to time-domain pulse-width modulated (PWM) waveforms, and transmitted using the recently developed BPQBC method for reducing communication power in implants. The overall SoC consumes only 1.8{\mu}W power while sensing and communicating at 800kSps. The transmitter energy efficiency is only 1.1pJ/b, which is >30X better than the state-of-the-art, enabling a fully-electrical, energy-harvested, and connected in-brain sensor/stimulator node.
[ { "created": "Sat, 24 Sep 2022 08:09:18 GMT", "version": "v1" }, { "created": "Wed, 19 Oct 2022 19:41:56 GMT", "version": "v2" } ]
2022-10-21
[ [ "Chatterjee", "Baibhab", "" ], [ "Kumar", "K Gaurav", "" ], [ "Xiao", "Shulan", "" ], [ "Barik", "Gourab", "" ], [ "Jayant", "Krishna", "" ], [ "Sen", "Shreyas", "" ] ]
Untethered miniaturized wireless neural sensor nodes with data transmission and energy harvesting capabilities call for circuit and system-level innovations to enable ultra-low energy deep implants for brain-machine interfaces. Realizing that the energy and size constraints of a neural implant motivate highly asymmetric system design (a small, low-power sensor and transmitter at the implant, with a relatively higher power receiver at a body-worn hub), we present Time-Domain Bi-Phasic Quasi-static Brain Communication (TD-BPQBC), offloading the burden of analog to digital conversion (ADC) and digital signal processing (DSP) to the receiver. The input analog signal is converted to time-domain pulse-width modulated (PWM) waveforms, and transmitted using the recently developed BPQBC method for reducing communication power in implants. The overall SoC consumes only 1.8{\mu}W power while sensing and communicating at 800kSps. The transmitter energy efficiency is only 1.1pJ/b, which is >30X better than the state-of-the-art, enabling a fully-electrical, energy-harvested, and connected in-brain sensor/stimulator node.
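A toy sketch of the time-domain encoding idea: each analog sample becomes a pulse whose width encodes the value, so the digitization burden moves to the receiver. The clock rate and mapping below are assumptions made for illustration, not the chip's actual design.

```python
import numpy as np

# Sketch of PWM time-domain encoding: a normalized analog sample maps to a
# pulse width in clock ticks; the receiver measures the width to recover
# the sample. The fast clock rate is a hypothetical value for illustration.
fs = 800e3                            # 800 kSps, as in the abstract
f_clk = 50e6                          # hypothetical on-chip fast clock
ticks_per_sample = int(f_clk / fs)    # 62 clock ticks available per sample

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, 5)    # normalized analog samples in [0, 1]

def to_pwm(sample):
    """Implant side: map a normalized sample to a pulse width in ticks."""
    return int(round(sample * (ticks_per_sample - 1)))

def from_pwm(width):
    """Receiver side: recover the sample from the measured pulse width."""
    return width / (ticks_per_sample - 1)

for s in samples:
    w = to_pwm(s)
    print(f"sample {s:.3f} -> pulse of {w} ticks -> decoded {from_pwm(w):.3f}")
```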
1407.2488
Franz Baumdicker
Franz Baumdicker
The site frequency spectrum of dispensable genes
24 pages, 8 figures
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The differences between DNA sequences within a population are the basis for inferring the ancestral relationship of the individuals. Within the classical infinitely many sites model, it is possible to estimate the mutation rate based on the site frequency spectrum, which comprises the numbers $C_1,...,C_{n-1}$, where $n$ is the sample size and $C_s$ is the number of site mutations (Single Nucleotide Polymorphisms, SNPs) seen in $s$ genomes. Classical results can be used to compare the observed site frequency spectrum with its neutral expectation, $E[C_s]=\theta_2/s$, where $\theta_2$ is the scaled site mutation rate. In this paper, we relax the assumption of the infinitely many sites model that all individuals carry only homologous genetic material. In particular, it is now well known that bacterial genomes can gain and lose genes, such that every single genome is a mosaic of genes, with genes present and absent in a random fashion, giving rise to the dispensable genome. While this presence and absence has been modeled under neutral evolution within the infinitely many genes model in previous papers, we link the presence and absence of genes with the numbers of site mutations seen within each gene. In this work we derive a formula for the expectation of the joint gene and site frequency spectrum, denoted $G_{k,s}$, the number of mutated sites occurring in exactly $s$ gene sequences while the corresponding gene is present in exactly $k$ individuals. We show that standard estimators of $\theta_2$ for dispensable genes are biased and that the site frequency spectrum for dispensable genes differs from the classical result.
[ { "created": "Wed, 9 Jul 2014 14:19:54 GMT", "version": "v1" } ]
2014-07-10
[ [ "Baumdicker", "Franz", "" ] ]
The differences between DNA sequences within a population are the basis for inferring the ancestral relationship of the individuals. Within the classical infinitely many sites model, it is possible to estimate the mutation rate based on the site frequency spectrum, which comprises the numbers $C_1,...,C_{n-1}$, where $n$ is the sample size and $C_s$ is the number of site mutations (Single Nucleotide Polymorphisms, SNPs) seen in $s$ genomes. Classical results can be used to compare the observed site frequency spectrum with its neutral expectation, $E[C_s]=\theta_2/s$, where $\theta_2$ is the scaled site mutation rate. In this paper, we relax the assumption of the infinitely many sites model that all individuals carry only homologous genetic material. In particular, it is now well known that bacterial genomes can gain and lose genes, such that every single genome is a mosaic of genes, with genes present and absent in a random fashion, giving rise to the dispensable genome. While this presence and absence has been modeled under neutral evolution within the infinitely many genes model in previous papers, we link the presence and absence of genes with the numbers of site mutations seen within each gene. In this work we derive a formula for the expectation of the joint gene and site frequency spectrum, denoted $G_{k,s}$, the number of mutated sites occurring in exactly $s$ gene sequences while the corresponding gene is present in exactly $k$ individuals. We show that standard estimators of $\theta_2$ for dispensable genes are biased and that the site frequency spectrum for dispensable genes differs from the classical result.
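A small numerical illustration of the classical neutral expectation $E[C_s]=\theta_2/s$ and the corresponding Watterson-style moment estimator; the "observed" counts below are simulated around the expectation, not real data.

```python
import numpy as np

# Neutral site frequency spectrum expectation E[C_s] = theta2 / s and a
# moment estimate of theta2 from the total SNP count. Numbers invented.
theta2 = 2.0
n = 10                          # sample size
s = np.arange(1, n)             # frequency classes 1 .. n-1
expected = theta2 / s

rng = np.random.default_rng(1)
observed = rng.poisson(expected)    # toy data drawn around the expectation

for k, e, o in zip(s, expected, observed):
    print(f"s={k}: expected {e:.2f}, observed {o}")

# Watterson-style estimator: total SNPs divided by the harmonic sum
theta_hat = observed.sum() / (1.0 / s).sum()
print(f"estimated theta2 = {theta_hat:.2f}")
```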
1109.0615
Siddhartha Chakrabarty
Gaurav Pachpute and Siddhartha P. Chakrabarty
Optimal Therapy of Hepatitis C Dynamics and Sampling Based Analysis
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both models is determined using the steepest gradient method, by defining an objective functional which minimizes the infected hepatocyte levels, virion population and the side-effects of the drug(s). The optimal therapy for both models shows an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the infected hepatocyte levels as well as viral load, whereas the efficacy drops after liver regeneration through restored hepatocyte levels. The period of high efficacy is not altered significantly when the cost coefficients are varied, as long as the side effects are relatively small. This suggests a higher dependence of the optimal therapy on the model parameters in case of drugs with minimal side effects. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios (i.e., model parameter sets) and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (as indicated by a drop in viral load below detection levels) in case of combination therapy as compared to monotherapy. Statistical tests performed to study the correlations between sample parameters and the time required for the viral load to fall below detection level show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it as an important factor in deciding individual drug regimens.
[ { "created": "Sat, 3 Sep 2011 11:17:16 GMT", "version": "v1" } ]
2011-09-06
[ [ "Pachpute", "Gaurav", "" ], [ "Chakrabarty", "Siddhartha P.", "" ] ]
We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both models is determined using the steepest gradient method, by defining an objective functional which minimizes the infected hepatocyte levels, virion population and the side-effects of the drug(s). The optimal therapy for both models shows an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the infected hepatocyte levels as well as viral load, whereas the efficacy drops after liver regeneration through restored hepatocyte levels. The period of high efficacy is not altered significantly when the cost coefficients are varied, as long as the side effects are relatively small. This suggests a higher dependence of the optimal therapy on the model parameters in case of drugs with minimal side effects. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios (i.e., model parameter sets) and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (as indicated by a drop in viral load below detection levels) in case of combination therapy as compared to monotherapy. Statistical tests performed to study the correlations between sample parameters and the time required for the viral load to fall below detection level show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it as an important factor in deciding individual drug regimens.
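The Latin hypercube sampling step can be sketched with scipy's quasi-Monte Carlo module; the parameter names and bounds below are invented placeholders, not the paper's actual ranges.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sampling of patient scenarios (model parameter sets).
# The three parameters and their bounds are hypothetical placeholders.
sampler = qmc.LatinHypercube(d=3, seed=42)
unit = sampler.random(n=1000)                 # 1000 points in [0, 1]^3

# hypothetical bounds: infected-hepatocyte death rate, infection rate,
# and virion clearance rate (illustrative units)
lower = np.array([0.1, 1e-7, 2.0])
upper = np.array([0.5, 1e-6, 8.0])
scenarios = qmc.scale(unit, lower, upper)     # rescale to parameter space

print(scenarios[:3])    # first three sampled parameter sets
```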
2305.00165
Wei Xie
Keqi Wang, Wei Xie, Sarah W. Harcum
Metabolic Regulatory Network Kinetic Modeling with Multiple Isotopic Tracers for iPSCs
26 pages, 16 figures
null
null
null
q-bio.MN q-bio.CB
http://creativecommons.org/licenses/by/4.0/
The rapidly expanding market for regenerative medicines and cell therapies highlights the need to advance the understanding of cellular metabolism and improve prediction of the cultivation production process for human induced pluripotent stem cells (iPSCs). In this paper, a metabolic kinetic model was developed to characterize the underlying mechanisms of the iPSC culture process, which can predict cell response to environmental perturbation and support process control. This model focuses on the central carbon metabolic network, including glycolysis, the pentose phosphate pathway (PPP), the tricarboxylic acid (TCA) cycle, and amino acid metabolism, which plays a crucial role in supporting iPSC proliferation. Heterogeneous measurements of extracellular metabolites and multiple isotopic tracers collected under multiple conditions were used to learn metabolic regulatory mechanisms. Systematic cross-validation confirmed the model's performance in terms of providing reliable predictions of cellular metabolism and culture process dynamics under various culture conditions. Thus, the developed mechanistic kinetic model can support process control strategies to strategically select optimal cell culture conditions at different times, ensure cell product functionality, and facilitate large-scale manufacturing of regenerative medicines and cell therapies.
[ { "created": "Sat, 29 Apr 2023 04:12:42 GMT", "version": "v1" }, { "created": "Thu, 26 Oct 2023 00:59:57 GMT", "version": "v2" } ]
2023-10-27
[ [ "Wang", "Keqi", "" ], [ "Xie", "Wei", "" ], [ "Harcum", "Sarah W.", "" ] ]
The rapidly expanding market for regenerative medicines and cell therapies highlights the need to advance the understanding of cellular metabolism and improve prediction of the cultivation production process for human induced pluripotent stem cells (iPSCs). In this paper, a metabolic kinetic model was developed to characterize the underlying mechanisms of the iPSC culture process, which can predict cell response to environmental perturbation and support process control. This model focuses on the central carbon metabolic network, including glycolysis, the pentose phosphate pathway (PPP), the tricarboxylic acid (TCA) cycle, and amino acid metabolism, which plays a crucial role in supporting iPSC proliferation. Heterogeneous measurements of extracellular metabolites and multiple isotopic tracers collected under multiple conditions were used to learn metabolic regulatory mechanisms. Systematic cross-validation confirmed the model's performance in terms of providing reliable predictions of cellular metabolism and culture process dynamics under various culture conditions. Thus, the developed mechanistic kinetic model can support process control strategies to strategically select optimal cell culture conditions at different times, ensure cell product functionality, and facilitate large-scale manufacturing of regenerative medicines and cell therapies.
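A minimal sketch of the kind of kinetic building block such models are assembled from: Michaelis-Menten substrate uptake coupled to biomass growth, integrated with scipy. The rates are invented; the paper's network is far larger.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy kinetic module: Michaelis-Menten glucose uptake feeding biomass
# growth. Vmax, Km, and the yield coefficient are hypothetical values.
Vmax, Km, yield_ = 0.5, 2.0, 0.4

def rhs(t, y):
    glc, biomass = y
    uptake = Vmax * glc / (Km + glc) * biomass   # Michaelis-Menten kinetics
    return [-uptake, yield_ * uptake]

sol = solve_ivp(rhs, (0.0, 48.0), [20.0, 0.1], dense_output=True)
for ti, (g, b) in zip(np.linspace(0, 48, 7), sol.sol(np.linspace(0, 48, 7)).T):
    print(f"t={ti:4.0f} h  glucose={g:6.2f}  biomass={b:5.2f}")
```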
1702.00633
James Barrett
James E. Barrett, Andrew Feber, Javier Herrero, Miljana Tanic, Gareth Wilson, Charles Swanton, Stephan Beck
Quantification of tumour evolution and heterogeneity via Bayesian epiallele detection
null
null
null
null
q-bio.QM math.ST q-bio.GN stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Epigenetic heterogeneity within a tumour can play an important role in tumour evolution and the emergence of resistance to treatment. It is increasingly recognised that the study of DNA methylation (DNAm) patterns along the genome -- so-called `epialleles' -- offers greater insight into epigenetic dynamics than conventional analyses which examine DNAm marks individually. Results: We have developed a Bayesian model to infer which epialleles are present in multiple regions of the same tumour. We apply our method to reduced representation bisulfite sequencing (RRBS) data from multiple regions of one lung cancer tumour and a matched normal sample. The model borrows information from all tumour regions to leverage greater statistical power. The total number of epialleles, the epiallele DNAm patterns, and a noise hyperparameter are all automatically inferred from the data. Uncertainty as to which epiallele an observed sequencing read originated from is explicitly incorporated by marginalising over the appropriate posterior densities. The degree to which tumour samples are contaminated with normal tissue can be estimated and corrected for. By tracing the distribution of epialleles throughout the tumour we can infer the phylogenetic history of the tumour, identify epialleles that differ between normal and cancer tissue, and define a measure of global epigenetic disorder.
[ { "created": "Thu, 2 Feb 2017 12:00:08 GMT", "version": "v1" }, { "created": "Mon, 20 Feb 2017 16:26:20 GMT", "version": "v2" } ]
2017-02-21
[ [ "Barrett", "James E.", "" ], [ "Feber", "Andrew", "" ], [ "Herrero", "Javier", "" ], [ "Tanic", "Miljana", "" ], [ "Wilson", "Gareth", "" ], [ "Swanton", "Charles", "" ], [ "Beck", "Stephan", "" ] ]
Motivation: Epigenetic heterogeneity within a tumour can play an important role in tumour evolution and the emergence of resistance to treatment. It is increasingly recognised that the study of DNA methylation (DNAm) patterns along the genome -- so-called `epialleles' -- offers greater insight into epigenetic dynamics than conventional analyses which examine DNAm marks individually. Results: We have developed a Bayesian model to infer which epialleles are present in multiple regions of the same tumour. We apply our method to reduced representation bisulfite sequencing (RRBS) data from multiple regions of one lung cancer tumour and a matched normal sample. The model borrows information from all tumour regions to leverage greater statistical power. The total number of epialleles, the epiallele DNAm patterns, and a noise hyperparameter are all automatically inferred from the data. Uncertainty as to which epiallele an observed sequencing read originated from is explicitly incorporated by marginalising over the appropriate posterior densities. The degree to which tumour samples are contaminated with normal tissue can be estimated and corrected for. By tracing the distribution of epialleles throughout the tumour we can infer the phylogenetic history of the tumour, identify epialleles that differ between normal and cancer tissue, and define a measure of global epigenetic disorder.
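The core likelihood computation can be illustrated in a few lines: given candidate epiallele methylation patterns, mixture weights, and a per-CpG error rate, the posterior probability that a read originated from each epiallele follows from Bayes' rule. All numbers below are invented for the illustration; the paper's model additionally infers the patterns, weights, and noise level from the data.

```python
import numpy as np

# Toy posterior over epiallele of origin for one bisulfite read, assuming
# known epiallele patterns, mixture weights, and per-CpG error rate eps.
# All values here are invented for illustration.
epialleles = np.array([[1, 1, 1, 0],
                       [0, 0, 1, 1],
                       [1, 0, 0, 0]])      # methylation patterns over 4 CpGs
weights = np.array([0.5, 0.3, 0.2])        # mixture proportions
eps = 0.05                                 # per-CpG miscall probability
read = np.array([1, 1, 0, 0])              # observed methylation calls

match = (epialleles == read)               # per-CpG agreement with the read
lik = np.prod(np.where(match, 1 - eps, eps), axis=1)
post = weights * lik / np.sum(weights * lik)
print(np.round(post, 3))                   # posterior per candidate epiallele
```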
2002.03638
Axel Brandenburg
Axel Brandenburg
Piecewise quadratic growth during the 2019 novel coronavirus epidemic
9 pages, 14 figures, 2 tables, published in Infectious Disease Modelling
Infectious Disease Modelling 5, 681-690 (2020)
10.1016/j.idm.2020.08.014
Nordita-2020-015
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The temporal growth in the number of deaths in the COVID-19 epidemic is subexponential. Here we show that a piecewise quadratic law provides an excellent fit during the thirty days after the first three fatalities on January 20 and later since the end of March 2020. There is also a brief intermediate period of exponential growth. During the second quadratic growth phase, the characteristic time of the growth is about eight times shorter than in the beginning, which can be understood as the occurrence of separate hotspots. Quadratic behavior can be motivated by peripheral growth when further spreading occurs only on the outskirts of an infected region. We also study numerical solutions of a simple epidemic model, where the spatial extent of the system is taken into account. To model the delayed onset outside China together with the early one in China within a single model with minimal assumptions, we adopt an initial condition of several hotspots, of which one reaches saturation much earlier than the others. At each site, quadratic growth commences when the local number of infections has reached a certain saturation level. The total number of deaths does then indeed follow a piecewise quadratic behavior.
[ { "created": "Mon, 10 Feb 2020 10:36:26 GMT", "version": "v1" }, { "created": "Fri, 14 Feb 2020 18:57:22 GMT", "version": "v2" }, { "created": "Mon, 20 Apr 2020 15:41:11 GMT", "version": "v3" }, { "created": "Mon, 21 Sep 2020 06:37:34 GMT", "version": "v4" } ]
2020-09-22
[ [ "Brandenburg", "Axel", "" ] ]
The temporal growth in the number of deaths in the COVID-19 epidemic is subexponential. Here we show that a piecewise quadratic law provides an excellent fit during the thirty days after the first three fatalities on January 20 and later since the end of March 2020. There is also a brief intermediate period of exponential growth. During the second quadratic growth phase, the characteristic time of the growth is about eight times shorter than in the beginning, which can be understood as the occurrence of separate hotspots. Quadratic behavior can be motivated by peripheral growth when further spreading occurs only on the outskirts of an infected region. We also study numerical solutions of a simple epidemic model, where the spatial extent of the system is taken into account. To model the delayed onset outside China together with the early one in China within a single model with minimal assumptions, we adopt an initial condition of several hotspots, of which one reaches saturation much earlier than the others. At each site, quadratic growth commences when the local number of infections has reached a certain saturation level. The total number of deaths does then indeed follow a piecewise quadratic behavior.
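Fitting a single quadratic segment is a one-liner with numpy; the synthetic "death counts" below are generated around a known quadratic purely to illustrate the fit, and are not real case data.

```python
import numpy as np

# Least-squares fit of N(t) = a*(t - t0)^2 + const to one segment of a
# cumulative count. The data are simulated for this illustration.
t = np.arange(0, 30, dtype=float)
rng = np.random.default_rng(3)
deaths = np.maximum(2.0 * (t - 1.0) ** 2 + rng.normal(0, 20, t.size), 0)

a, b, c = np.polyfit(t, deaths, 2)   # quadratic least-squares fit
t0 = -b / (2 * a)                    # vertex, i.e. effective onset time
print(f"fit: N(t) ~ {a:.2f} (t - {t0:.2f})^2 + const")
```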
2302.01140
Martin Johnsson
M. Johnsson (Swedish University of Agricultural Sciences)
The big challenge for livestock genomics is to make sequence data pay
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
This paper will argue that one of the biggest challenges for livestock genomics is to make whole-genome sequencing and functional genomics applicable to breeding practice. It discusses potential explanations for why it is so difficult to consistently improve the accuracy of genomic prediction by means of whole-genome sequence data, and three potential attacks on the problem.
[ { "created": "Thu, 2 Feb 2023 14:55:51 GMT", "version": "v1" }, { "created": "Fri, 3 Feb 2023 11:41:42 GMT", "version": "v2" }, { "created": "Mon, 5 Jun 2023 10:57:21 GMT", "version": "v3" }, { "created": "Mon, 24 Jul 2023 09:32:42 GMT", "version": "v4" } ]
2023-07-25
[ [ "Johnsson", "M.", "", "Swedish University of Agricultural Sciences" ] ]
This paper will argue that one of the biggest challenges for livestock genomics is to make whole-genome sequencing and functional genomics applicable to breeding practice. It discusses potential explanations for why it is so difficult to consistently improve the accuracy of genomic prediction by means of whole-genome sequence data, and three potential attacks on the problem.
1903.12155
Akihiko Akao
Akihiko Akao, Sho Shirasaka, Yasuhiko Jimbo, Bard Ermentrout and Kiyoshi Kotani
Theta-gamma cross-frequency coupling enables covariance between distant brain regions
13 pages, 4 figures
null
null
null
q-bio.NC nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-frequency coupling (CFC) is thought to play an important role in communication across distant brain regions. However, neither the mechanism of its generation nor the influence on the underlying spiking dynamics is well understood. Here, we investigate the dynamics of two interacting distant neuronal modules coupled by inter-regional long-range connections. Each neuronal module comprises an excitatory and inhibitory population of quadratic integrate-and-fire neurons connected locally with conductance-based synapses. The two modules are coupled reciprocally with delays that represent the long-range conduction time. We applied the Ott-Antonsen ansatz to reduce the spiking dynamics to the corresponding mean field equations as a small set of delay differential equations. Bifurcation analysis on these mean field equations shows inter-regional conduction delay is sufficient to produce CFC via a torus bifurcation. Spike correlation analysis during the CFC revealed that several local clusters exhibit synchronized firing in gamma-band frequencies. These clusters exhibit locally decorrelated firings between the cluster pairs within the same population. In contrast, the clusters exhibit long-range gamma-band cross-covariance between the distant clusters that have similar firing frequency. The interactions of the different gamma frequencies produce a beat leading to population-level CFC. We analyzed spike counts in relation to the phases of the macroscopic fast and slow oscillations and found population spike counts vary with respect to macroscopic phases. Such firing phase preference accompanies a phase window with high spike count and low Fano factor, which is suitable for a population rate code. Our work suggests the inter-regional conduction delay plays a significant role in the emergence of CFC and the underlying spiking dynamics may support long-range communication and neural coding.
[ { "created": "Thu, 28 Mar 2019 17:42:05 GMT", "version": "v1" } ]
2019-03-29
[ [ "Akao", "Akihiko", "" ], [ "Shirasaka", "Sho", "" ], [ "Jimbo", "Yasuhiko", "" ], [ "Ermentrout", "Bard", "" ], [ "Kotani", "Kiyoshi", "" ] ]
Cross-frequency coupling (CFC) is thought to play an important role in communication across distant brain regions. However, neither the mechanism of its generation nor the influence on the underlying spiking dynamics is well understood. Here, we investigate the dynamics of two interacting distant neuronal modules coupled by inter-regional long-range connections. Each neuronal module comprises an excitatory and inhibitory population of quadratic integrate-and-fire neurons connected locally with conductance-based synapses. The two modules are coupled reciprocally with delays that represent the long-range conduction time. We applied the Ott-Antonsen ansatz to reduce the spiking dynamics to the corresponding mean field equations as a small set of delay differential equations. Bifurcation analysis on these mean field equations shows inter-regional conduction delay is sufficient to produce CFC via a torus bifurcation. Spike correlation analysis during the CFC revealed that several local clusters exhibit synchronized firing in gamma-band frequencies. These clusters exhibit locally decorrelated firings between the cluster pairs within the same population. In contrast, the clusters exhibit long-range gamma-band cross-covariance between the distant clusters that have similar firing frequency. The interactions of the different gamma frequencies produce a beat leading to population-level CFC. We analyzed spike counts in relation to the phases of the macroscopic fast and slow oscillations and found population spike counts vary with respect to macroscopic phases. Such firing phase preference accompanies a phase window with high spike count and low Fano factor, which is suitable for a population rate code. Our work suggests the inter-regional conduction delay plays a significant role in the emergence of CFC and the underlying spiking dynamics may support long-range communication and neural coding.
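Cross-frequency coupling of the theta-gamma type described here is commonly quantified with a phase-amplitude modulation index; below is a generic sketch of that metric (Canolty-style mean vector length) on a synthetic signal. This is a standard CFC measure, not the paper's mean-field analysis, and all signal and filter settings are illustrative.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

# Phase-amplitude coupling on a synthetic theta-gamma signal: gamma (60 Hz)
# amplitude is modulated by theta (6 Hz) phase by construction, and the
# mean-vector-length modulation index detects it. Settings are illustrative.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
theta_phase = 2 * np.pi * 6 * t
gamma = (1 + np.cos(theta_phase)) * np.sin(2 * np.pi * 60 * t)
rng = np.random.default_rng(0)
x = np.cos(theta_phase) + 0.5 * gamma + 0.1 * rng.normal(size=t.size)

def bandpass(sig, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

phase = np.angle(hilbert(bandpass(x, 4, 8)))      # theta phase
amp = np.abs(hilbert(bandpass(x, 40, 80)))        # gamma amplitude envelope
mi = np.abs(np.mean(amp * np.exp(1j * phase)))    # modulation index
print(f"modulation index: {mi:.3f}")
```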
1603.06335
Philippe Robert S.
Marie Doumic and Sarah Eugene and Philippe Robert
Asymptotics of Stochastic Protein Assembly Models
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-assembly of proteins is a biological phenomenon which gives rise to the spontaneous formation of amyloid fibrils or polymers. The starting point of this phase, called nucleation, exhibits an important variability among replicated experiments. To analyse the stochastic nature of this phenomenon, one of the simplest models considers two populations of chemical components: monomers and polymerised monomers. Initially there are only monomers. There are two reactions for the polymerization of a monomer: either two monomers collide to combine into two polymerised monomers, or a monomer is polymerised after the encounter of a polymerised monomer. It turns out that this simple model does not completely explain the variability observed in the experiments. This paper investigates extensions of this model that take into account other mechanisms of the polymerization process which may have an impact on fluctuations. The first variant consists in introducing a preliminary conformation step to take into account the biological fact that, before being polymerised, a monomer has two states, regular or misfolded. Only misfolded monomers can be polymerised, so that the fluctuations of the number of misfolded monomers can also be a source of variability in the number of polymerised monomers. The second variant represents the reaction rate $\alpha$ of spontaneous formation of a polymer as being of the order of $N^{-\nu}$, with $\nu$ some positive constant. First- and second-order results for the starting instant of nucleation are derived from the corresponding limit theorems. The proofs of the results rely on a study of a stochastic averaging principle for a model related to an Ehrenfest urn model, and also on a scaling analysis of a population model.
[ { "created": "Mon, 21 Mar 2016 06:37:01 GMT", "version": "v1" } ]
2016-03-22
[ [ "Doumic", "Marie", "" ], [ "Eugene", "Sarah", "" ], [ "Robert", "Philippe", "" ] ]
Self-assembly of proteins is a biological phenomenon which gives rise to the spontaneous formation of amyloid fibrils or polymers. The starting point of this phase, called nucleation, exhibits an important variability among replicated experiments. To analyse the stochastic nature of this phenomenon, one of the simplest models considers two populations of chemical components: monomers and polymerised monomers. Initially there are only monomers. There are two reactions for the polymerization of a monomer: either two monomers collide to combine into two polymerised monomers, or a monomer is polymerised after the encounter of a polymerised monomer. It turns out that this simple model does not completely explain the variability observed in the experiments. This paper investigates extensions of this model that take into account other mechanisms of the polymerization process which may have an impact on fluctuations. The first variant consists in introducing a preliminary conformation step to take into account the biological fact that, before being polymerised, a monomer has two states, regular or misfolded. Only misfolded monomers can be polymerised, so that the fluctuations of the number of misfolded monomers can also be a source of variability in the number of polymerised monomers. The second variant represents the reaction rate $\alpha$ of spontaneous formation of a polymer as being of the order of $N^{-\nu}$, with $\nu$ some positive constant. First- and second-order results for the starting instant of nucleation are derived from the corresponding limit theorems. The proofs of the results rely on a study of a stochastic averaging principle for a model related to an Ehrenfest urn model, and also on a scaling analysis of a population model.
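The basic two-reaction model can be simulated exactly with the Gillespie algorithm, which makes the run-to-run variability of the completion time directly visible. Rate constants and population size below are invented for illustration.

```python
import numpy as np

# Gillespie simulation of the two-reaction scheme from the abstract:
# (i)  two monomers combine into two polymerised monomers,
# (ii) a monomer is polymerised upon meeting a polymerised monomer.
# Rates and the initial population are invented for illustration.
rng = np.random.default_rng(0)
N = 1000         # initial monomers
alpha = 1e-6     # spontaneous (nucleation) rate
beta = 1e-3      # autocatalytic conversion rate

def gillespie(N, alpha, beta, rng):
    M, P, t = N, 0, 0.0
    while M > 0:
        a1 = alpha * M * (M - 1) / 2    # propensity of reaction (i)
        a2 = beta * M * P               # propensity of reaction (ii)
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)
        if rng.uniform() * a0 < a1:
            M, P = M - 2, P + 2
        else:
            M, P = M - 1, P + 1
    return t

times = [gillespie(N, alpha, beta, rng) for _ in range(20)]
print(f"completion time: mean {np.mean(times):.2f}, sd {np.std(times):.2f}")
```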
1602.07183
John Helliwell R
Simon W. M. Tanley, Loes M. J. Kroon-Batenburg, Antoine M. M. Schreurs and John R. Helliwell
Re-refinement of 4xan: hen egg white lysozyme with carboplatin in sodium bromide solution including details of solute, solvent, ion and split occupancy amino acid electron density evidence
43 pages
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A re-refinement of 4xan, hen egg white lysozyme (HEWL) with carboplatin crystallised in NaBr solution, has been made (Tanley et al 2016). This follows our Response article (Tanley et al 2015) to the Critique article of Shabalin et al 2015, suggesting the need for corrections to some solute molecule interpretations of electron density in 4xan and removal of an organic moiety as a ligand to the platinum ion coordinated to His15. We note the Shabalin et al (2015) model of a chlorine in that density and a nearby bromine at partial occupancy to explain the shape. However, as the bromide concentration is in huge excess over chloride (by 20 fold), we consider the 4yem interpretation of Shabalin et al (2015) highly unlikely, but we still cannot offer an explanation for that shape, confirming our earlier analysis described in Tanley et al (2014). Following the Shabalin et al (2015) reprocessing of the raw diffraction data for 4g4a, we also redid the diffraction data processing for 4xan to a higher resolution using EVAL (Schreurs et al 2010), concluding in favour of 1.3 Angstrom as the resolution limit, which is the basis for our revised PDB file for 4xan (5HMJ). It is very interesting that there is extra X-ray diffraction data from 1.47 to 1.30 Angstrom resolution, e.g. with <I/sigma(I)> = 0.39 and CC1/2 = 0.181 in the final shell (1.30 to 1.322 Angstrom). In this arXiv article we document in detail our different solvent and split occupancy side chain electron density interpretations as evidence for our statement of approach in our Response article (Tanley et al 2015). Our critical re-examination includes comparisons based on reprocessing the 4xan diffraction images with three different software packages, so as to evaluate possible variations in electron density interpretation due to the processing software. Overall our finalised model (PDB code 5HMJ) is now improved over 4xan.
[ { "created": "Tue, 23 Feb 2016 15:04:24 GMT", "version": "v1" } ]
2016-02-24
[ [ "Tanley", "Simon W. M.", "" ], [ "Kroon-Batenburg", "Loes M. J.", "" ], [ "Schreurs", "Antoine M. M.", "" ], [ "Helliwell", "John R.", "" ] ]
A re-refinement of 4xan, hen egg white lysozyme (HEWL) with carboplatin crystallised in NaBr solution, has been made (Tanley et al 2016). This follows our Response article (Tanley et al 2015) to the Critique article of Shabalin et al 2015, suggesting the need for corrections to some solute molecule interpretations of electron density in 4xan and removal of an organic moiety as a ligand to the platinum ion coordinated to His15. We note the Shabalin et al (2015) model of a chlorine in that density and a nearby bromine at partial occupancy to explain the shape. However, as the bromide concentration is in huge excess over chloride (by 20 fold), we consider the 4yem interpretation of Shabalin et al (2015) highly unlikely, but we still cannot offer an explanation for that shape, confirming our earlier analysis described in Tanley et al (2014). Following the Shabalin et al (2015) reprocessing of the raw diffraction data for 4g4a, we also redid the diffraction data processing for 4xan to a higher resolution using EVAL (Schreurs et al 2010), concluding in favour of 1.3 Angstrom as the resolution limit, which is the basis for our revised PDB file for 4xan (5HMJ). It is very interesting that there is extra X-ray diffraction data from 1.47 to 1.30 Angstrom resolution, e.g. with <I/sigma(I)> = 0.39 and CC1/2 = 0.181 in the final shell (1.30 to 1.322 Angstrom). In this arXiv article we document in detail our different solvent and split occupancy side chain electron density interpretations as evidence for our statement of approach in our Response article (Tanley et al 2015). Our critical re-examination includes comparisons based on reprocessing the 4xan diffraction images with three different software packages, so as to evaluate possible variations in electron density interpretation due to the processing software. Overall our finalised model (PDB code 5HMJ) is now improved over 4xan.
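The CC1/2 statistic quoted above is the Pearson correlation between mean intensities of two random half-datasets; a toy computation on simulated intensities illustrates the definition (the numbers below are invented, not from the actual data).

```python
import numpy as np

# Toy CC1/2: correlate mean intensities of two random half-datasets of
# repeated reflection measurements. Intensities are simulated here.
rng = np.random.default_rng(0)
n_refl, n_obs = 500, 6
true_I = rng.gamma(2.0, 50.0, n_refl)                          # true intensities
obs = true_I[:, None] + rng.normal(0, 80.0, (n_refl, n_obs))   # noisy repeats

half1 = obs[:, : n_obs // 2].mean(axis=1)
half2 = obs[:, n_obs // 2 :].mean(axis=1)
cc_half = np.corrcoef(half1, half2)[0, 1]
print(f"CC1/2 = {cc_half:.3f}")
```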
2205.12432
Nana Wei
Nana Wei, Yating Nie, Lin Liu, Xiaoqi Zheng, Hua-Jun Wu
Secuer: ultrafast, scalable and accurate clustering of single-cell RNA-seq data
null
null
10.1371/journal.pcbi.1010753
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying cell clusters is a critical step in single-cell transcriptomics studies. Despite the numerous clustering tools developed recently, the rapid growth of scRNA-seq volumes calls for more (computationally) efficient clustering methods. Here, we introduce Secuer, a Scalable and Efficient speCtral clUstERing algorithm for scRNA-seq data. By employing an anchor-based bipartite graph representation algorithm, Secuer enjoys runtime and memory usage reduced by orders of magnitude, especially for ultra-large datasets profiling over 1 or even 10 million cells. Meanwhile, Secuer also achieves accuracy better than or comparable to that of competing methods on small and moderate benchmark datasets. Furthermore, we showcase that Secuer can also serve as a building block for a new consensus clustering method, Secuer-consensus, which again greatly improves the runtime and scalability of state-of-the-art consensus clustering methods while maintaining their accuracy. Overall, Secuer is a versatile, accurate, and scalable clustering framework suitable for small to ultra-large single-cell clustering tasks.
[ { "created": "Wed, 25 May 2022 01:40:41 GMT", "version": "v1" }, { "created": "Thu, 7 Jul 2022 13:27:03 GMT", "version": "v2" } ]
2023-01-11
[ [ "Wei", "Nana", "" ], [ "Nie", "Yating", "" ], [ "Liu", "Lin", "" ], [ "Zheng", "Xiaoqi", "" ], [ "Wu4", "Hua-Jun", "" ] ]
Identifying cell clusters is a critical step in single-cell transcriptomics studies. Despite the numerous clustering tools developed recently, the rapid growth of scRNA-seq volumes calls for more (computationally) efficient clustering methods. Here, we introduce Secuer, a Scalable and Efficient speCtral clUstERing algorithm for scRNA-seq data. By employing an anchor-based bipartite graph representation algorithm, Secuer enjoys runtime and memory usage reduced by orders of magnitude, especially for ultra-large datasets profiling over 1 or even 10 million cells. Meanwhile, Secuer also achieves accuracy better than or comparable to that of competing methods on small and moderate benchmark datasets. Furthermore, we showcase that Secuer can also serve as a building block for a new consensus clustering method, Secuer-consensus, which again greatly improves the runtime and scalability of state-of-the-art consensus clustering methods while maintaining their accuracy. Overall, Secuer is a versatile, accurate, and scalable clustering framework suitable for small to ultra-large single-cell clustering tasks.
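A generic sketch of anchor-based spectral clustering, in the spirit of (but not identical to) the published algorithm: represent each cell by its similarities to a few sampled anchors, embed via an SVD of the normalized cell-anchor affinity matrix, and run k-means on the embedding. All choices below (RBF kernel, anchor count, synthetic data) are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Anchor-based spectral clustering sketch: similarities to m sampled
# anchors give a low-rank bipartite-graph representation whose SVD yields
# a spectral embedding. Data and kernel settings are invented.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(300, 20)) for c in (-1.0, 0.0, 1.0)])

m, k = 50, 3
anchors = X[rng.choice(len(X), size=m, replace=False)]     # sampled anchors

d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)  # cell-anchor dists
Z = np.exp(-d2 / d2.mean())                                # RBF affinities
Z /= Z.sum(axis=1, keepdims=True)                          # row-stochastic

Dm = np.diag(1.0 / np.sqrt(Z.sum(axis=0)))                 # anchor degrees
U, s, _ = np.linalg.svd(Z @ Dm, full_matrices=False)
embedding = U[:, :k]                                       # spectral embedding

labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(embedding)
print(np.bincount(labels))    # roughly 300 cells per cluster
```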
1804.04428
No\"el Malod-Dognin Ph.D
Noel Malod-Dognin and Natasa Przulj
Functional geometry of protein-protein interaction networks
16 pages, 11 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Protein-protein interactions (PPIs) are usually modelled as networks. These networks have been studied extensively using graphlets, small induced subgraphs capturing the local wiring patterns around nodes in networks. They revealed that proteins involved in similar functions tend to be similarly wired. However, such simple models can only represent pairwise relationships and cannot fully capture the higher-order organization of protein interactions, including protein complexes. Results: To model the multi-scale organization of these complex biological systems, we utilize simplicial complexes from computational geometry. The question is how to mine these new representations of PPI networks to reveal additional biological information. To address this, we define simplets, a generalization of graphlets to simplicial complexes. By using simplets, we define a sensitive measure of similarity between simplicial complex network representations that allows for clustering them according to their data types better than clustering them by using other state-of-the-art measures, e.g., spectral distance, or facet distribution distance. We model human and baker's yeast PPI networks as simplicial complexes that capture PPIs and protein complexes as simplices. On these models, we show that our newly introduced simplet-based methods cluster proteins by function better than the clustering methods that use the standard PPI networks, uncovering the new underlying functional organization of the cell. We demonstrate the existence of the functional geometry in the PPI data and the superiority of our simplet-based methods to effectively mine for new biological information hidden in the complexity of the higher order organization of PPI networks.
[ { "created": "Thu, 12 Apr 2018 11:08:05 GMT", "version": "v1" } ]
2018-04-13
[ [ "Malod-Dognin", "Noel", "" ], [ "Przulj", "Natasa", "" ] ]
Motivation: Protein-protein interactions (PPIs) are usually modelled as networks. These networks have been studied extensively using graphlets, small induced subgraphs capturing the local wiring patterns around nodes in networks. They revealed that proteins involved in similar functions tend to be similarly wired. However, such simple models can only represent pairwise relationships and cannot fully capture the higher-order organization of protein interactions, including protein complexes. Results: To model the multi-scale organization of these complex biological systems, we utilize simplicial complexes from computational geometry. The question is how to mine these new representations of PPI networks to reveal additional biological information. To address this, we define simplets, a generalization of graphlets to simplicial complexes. By using simplets, we define a sensitive measure of similarity between simplicial complex network representations that allows for clustering them according to their data types better than clustering them by using other state-of-the-art measures, e.g., spectral distance, or facet distribution distance. We model human and baker's yeast PPI networks as simplicial complexes that capture PPIs and protein complexes as simplices. On these models, we show that our newly introduced simplet-based methods cluster proteins by function better than the clustering methods that use the standard PPI networks, uncovering the new underlying functional organization of the cell. We demonstrate the existence of the functional geometry in the PPI data and the superiority of our simplet-based methods to effectively mine for new biological information hidden in the complexity of the higher order organization of PPI networks.
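As a small illustration of the underlying representation, a simplicial complex can be stored by its facets (maximal simplices), with every subset of a facet being a simplex; enumeration of this kind is the bookkeeping that simplet counting builds on. The facets below are invented.

```python
from itertools import combinations

# A simplicial complex given by its facets: every non-empty subset of a
# facet is a simplex. We enumerate all simplices and count by dimension.
# The facets here are invented for the illustration.
facets = [frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({4, 5, 6, 7})]

simplices = set()
for f in facets:
    for r in range(1, len(f) + 1):
        simplices.update(frozenset(c) for c in combinations(f, r))

by_dim = {}
for s in simplices:
    by_dim.setdefault(len(s) - 1, []).append(s)

for d in sorted(by_dim):
    print(f"dimension {d}: {len(by_dim[d])} simplices")
```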
2103.07233
Mar\'ia Vallet-Regi
Marina Martinez-Carmona, Isabel Izquierdo-Barba, Montserrat Colilla, Maria Vallet-Regi
Concanavalin A-targeted mesoporous silica nanoparticles for infection treatment
27 pages, 9 figures
Acta Biomaterialia. 96, 547-556 (2019)
10.1016/j.actbio.2019.07.001
null
q-bio.TO physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
The ability of bacteria to form biofilms hinders any conventional treatment against chronic infections and has serious socio-economic implications. In this sense, a nanocarrier capable of overcoming the barrier of the mucopolysaccharide matrix of the biofilm and releasing its loaded-antibiotic within would be desirable. Herein, a new nanosystem based on levofloxacin (LEVO)-loaded mesoporous silica nanoparticles (MSNs) decorated with lectin Concanavalin A (ConA) has been developed. The presence of ConA promotes its internalization into the biofilm matrix, which increases the antimicrobial efficacy of the antibiotic hosted within the mesopores. This nanodevice is envisioned as a promising alternative to conventional infection treatments by improving the antimicrobial efficacy and reducing side effects.
[ { "created": "Fri, 12 Mar 2021 12:20:28 GMT", "version": "v1" } ]
2021-03-15
[ [ "Martinez-Carmona", "Marina", "" ], [ "Izquierdo-Barba", "Isabel", "" ], [ "Colilla", "Montserrat", "" ], [ "Vallet-Regi", "Maria", "" ] ]
The ability of bacteria to form biofilms hinders any conventional treatment against chronic infections and has serious socio-economic implications. In this sense, a nanocarrier capable of overcoming the barrier of the mucopolysaccharide matrix of the biofilm and releasing its loaded-antibiotic within would be desirable. Herein, a new nanosystem based on levofloxacin (LEVO)-loaded mesoporous silica nanoparticles (MSNs) decorated with lectin Concanavalin A (ConA) has been developed. The presence of ConA promotes its internalization into the biofilm matrix, which increases the antimicrobial efficacy of the antibiotic hosted within the mesopores. This nanodevice is envisioned as a promising alternative to conventional infection treatments by improving the antimicrobial efficacy and reducing side effects.
1405.0114
Wolfgang Kaisers
Wolfgang Kaisers and Holger Schwender and Heiner Schaal
Hierarchical clustering of DNA k-mer counts in RNA-seq fastq files reveals batch effects
5 pages, 6 figures
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Batch effects, artificial sources of variation due to experimental design, are a widespread phenomenon in high throughput data. Therefore, mechanisms for the detection of batch effects are needed, requiring comparison of multiple samples. We apply hierarchical clustering (HC) on DNA k-mer counts of multiple RNA-seq derived Fastq files. Ideally, HC generated trees reflect experimental treatment groups and thus may indicate experimental effects, but clustering of preparation groups indicates the presence of batch effects. To provide a simple, readily applicable tool, we implemented sequential analysis of Fastq reads with low memory usage in an R package (seqTools) available on Bioconductor. DNA k-mer counts were analysed on 61 Fastq files containing RNA-seq data from two cell types (dermal fibroblasts and Jurkat cells) sequenced on 8 different Illumina Flowcells. Results: Pairwise comparison of all Flowcells with hierarchical clustering revealed strong Flowcell-based tree separation in 6 (21 %) and detectable Flowcell-based clustering in 17 (60.7 %) of 28 Flowcell comparisons. In our samples, batch effects were also present in reads mapped to the human genome. Filtering reads for high quality (Phred >30) did not remove the batch effects. Conclusions: Hierarchical clustering of DNA k-mer counts provides a quality criterion and an unspecific diagnostic tool for RNA-seq experiments.
[ { "created": "Thu, 1 May 2014 08:53:46 GMT", "version": "v1" }, { "created": "Sat, 3 May 2014 12:25:21 GMT", "version": "v2" }, { "created": "Tue, 6 May 2014 15:16:50 GMT", "version": "v3" }, { "created": "Tue, 13 May 2014 17:41:39 GMT", "version": "v4" }, { "crea...
2017-07-24
[ [ "Kaisers", "Wolfgang", "" ], [ "Schwender", "Holger", "" ], [ "Schaal", "Heiner", "" ] ]
Batch effects, artificial sources of variation due to experimental design, are a widespread phenomenon in high throughput data. Therefore, mechanisms for the detection of batch effects are needed, requiring comparison of multiple samples. We apply hierarchical clustering (HC) on DNA k-mer counts of multiple RNA-seq derived Fastq files. Ideally, HC generated trees reflect experimental treatment groups and thus may indicate experimental effects, but clustering of preparation groups indicates the presence of batch effects. To provide a simple, readily applicable tool, we implemented sequential analysis of Fastq reads with low memory usage in an R package (seqTools) available on Bioconductor. DNA k-mer counts were analysed on 61 Fastq files containing RNA-seq data from two cell types (dermal fibroblasts and Jurkat cells) sequenced on 8 different Illumina Flowcells. Results: Pairwise comparison of all Flowcells with hierarchical clustering revealed strong Flowcell-based tree separation in 6 (21 %) and detectable Flowcell-based clustering in 17 (60.7 %) of 28 Flowcell comparisons. In our samples, batch effects were also present in reads mapped to the human genome. Filtering reads for high quality (Phred >30) did not remove the batch effects. Conclusions: Hierarchical clustering of DNA k-mer counts provides a quality criterion and an unspecific diagnostic tool for RNA-seq experiments.
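The core pipeline (k-mer counting per sample, then hierarchical clustering of the count profiles) can be sketched as follows; the "reads" here are simulated with two different GC contents to stand in for a batch effect, and all settings are illustrative rather than the seqTools implementation.

```python
import numpy as np
from itertools import product
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

# Per-sample DNA k-mer frequency profiles, then hierarchical clustering
# of samples. Reads are simulated with two GC contents to mimic batches.
k = 3
kmers = ["".join(p) for p in product("ACGT", repeat=k)]
index = {km: i for i, km in enumerate(kmers)}
rng = np.random.default_rng(0)

def kmer_profile(reads):
    counts = np.zeros(len(kmers))
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[index[r[i:i + k]]] += 1
    return counts / counts.sum()            # normalize to frequencies

def sim_reads(gc, n=200, length=50):
    p = np.array([(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2])  # A, C, G, T
    return ["".join(rng.choice(list("ACGT"), size=length, p=p))
            for _ in range(n)]

# two simulated "flowcells" differing in GC content
profiles = np.array([kmer_profile(sim_reads(gc))
                     for gc in (0.4, 0.4, 0.4, 0.6, 0.6, 0.6)])
Z = linkage(pdist(profiles), method="average")
print(Z)   # the two GC groups merge last, i.e. cluster apart
```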
0909.4239
Rui Dilao
Rui Dil\~ao, Daniele Muraro
mRNA diffusion explains protein gradients in \textit{Drosophila} early development
14 pages, 2 figures
null
null
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new model describing the production and the establishment of the stable gradient of the Bicoid protein along the antero-posterior axis of the embryo of \textit{Drosophila}. In this model, we consider that \textit{bicoid} mRNA diffuses along the antero-posterior axis of the embryo and the protein is produced in the ribosomes localized near the syncytial nuclei. Bicoid protein stays localized near the syncytial nuclei as observed in experiments. We calibrate the parameters of the mathematical model with experimental data taken during the cleavage stages 11 to 14 of the developing embryo of \textit{Drosophila}. We obtain good agreement between the experimental and the model gradients, with relative errors in the range 5-8%. The inferred diffusion coefficient of \textit{bicoid} mRNA is in the range $4.6\times 10^{-12}$-$1.5\times 10^{-11}\,\mathrm{m^2\,s^{-1}}$, in agreement with the theoretical predictions and experimental measurements for the diffusion of macromolecules in the cytoplasm. We show that the model based on the mRNA diffusion hypothesis is consistent with the known observational data, supporting the recent experimental findings of the gradient of \textit{bicoid} mRNA in \textit{Drosophila} [Spirov et al. (2009) Development 136:605-614].
[ { "created": "Wed, 23 Sep 2009 15:58:57 GMT", "version": "v1" } ]
2009-09-24
[ [ "Dilão", "Rui", "" ], [ "Muraro", "Daniele", "" ] ]
We propose a new model describing the production and the establishment of the stable gradient of the Bicoid protein along the antero-posterior axis of the embryo of \textit{Drosophila}. In this model, we consider that \textit{bicoid} mRNA diffuses along the antero-posterior axis of the embryo and the protein is produced in the ribosomes localized near the syncytial nuclei. Bicoid protein stays localized near the syncytial nuclei as observed in experiments. We calibrate the parameters of the mathematical model with experimental data taken during the cleavage stages 11 to 14 of the developing embryo of \textit{Drosophila}. We obtain good agreement between the experimental and the model gradients, with relative errors in the range 5-8%. The inferred diffusion coefficient of \textit{bicoid} mRNA is in the range $4.6\times 10^{-12}$-$1.5\times 10^{-11}\,\mathrm{m^2\,s^{-1}}$, in agreement with the theoretical predictions and experimental measurements for the diffusion of macromolecules in the cytoplasm. We show that the model based on the mRNA diffusion hypothesis is consistent with the known observational data, supporting the recent experimental findings of the gradient of \textit{bicoid} mRNA in \textit{Drosophila} [Spirov et al. (2009) Development 136:605-614].
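A sketch of how a localized source plus diffusion and degradation yields a stable exponential gradient with decay length $\sqrt{D/k}$, the generic mechanism underlying such models: D is chosen within the range quoted above, while the degradation rate, domain, and source strength are invented.

```python
import numpy as np

# Source-diffusion-degradation along the AP axis: production at the
# anterior pole, diffusion D, degradation kdeg, giving a steady
# exponential profile with decay length sqrt(D/kdeg). Only D is within
# the abstract's range; the other numbers are invented.
D = 1e-11        # m^2/s
kdeg = 1e-3      # 1/s, hypothetical degradation rate
L = 500e-6       # embryo length ~0.5 mm
N = 250
dx = L / N
dt = 0.2 * dx**2 / D
source = 1.0     # arbitrary production flux at the anterior end

c = np.zeros(N)
for _ in range(50000):                      # ~4000 s, near steady state
    lap = np.empty_like(c)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    lap[0] = (c[1] - c[0]) / dx**2
    lap[-1] = (c[-2] - c[-1]) / dx**2
    c += dt * (D * lap - kdeg * c)
    c[0] += dt * source / dx                # injection at the anterior pole

lam = np.sqrt(D / kdeg)                     # predicted decay length: 100 um
print(f"decay length = {lam * 1e6:.0f} um; "
      f"c(0)/c(100 um) = {c[0] / c[50]:.2f}")   # ~e for an exponential
```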
1608.03425
Haiguang Wen
Haiguang Wen, Junxing Shi, Yizhen Zhang, Kun-Han Lu, Jiayue Cao, Zhongming Liu
Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision
27 pages, 10 figures, 1 table
Cerebral Cortex. 2017 pp.1-25
10.1093/cercor/bhx268
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional neural network (CNN) driven by image recognition has been shown to be able to explain cortical responses to static pictures at ventral-stream areas. Here, we further showed that such CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream, but also the dorsal stream, albeit to a lesser degree; single-voxel response was visualized as the specific pixel pattern that drove the response, revealing the distinct representation of individual cortical location; cortical activation was synthesized from natural images with high throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision.
[ { "created": "Thu, 11 Aug 2016 11:51:21 GMT", "version": "v1" }, { "created": "Tue, 14 Nov 2017 17:35:51 GMT", "version": "v2" } ]
2017-11-15
[ [ "Wen", "Haiguang", "" ], [ "Shi", "Junxing", "" ], [ "Zhang", "Yizhen", "" ], [ "Lu", "Kun-Han", "" ], [ "Cao", "Jiayue", "" ], [ "Liu", "Zhongming", "" ] ]
Convolutional neural network (CNN) driven by image recognition has been shown to be able to explain cortical responses to static pictures at ventral-stream areas. Here, we further showed that such CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream, but also the dorsal stream, albeit to a lesser degree; single-voxel response was visualized as the specific pixel pattern that drove the response, revealing the distinct representation of individual cortical location; cortical activation was synthesized from natural images with high throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision.
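The encoding-model half of such a pipeline is, at its core, a regularized regression from stimulus features to voxel responses. Below is a minimal sketch with synthetic stand-ins for both the CNN features and the fMRI data; the CNN feature extraction itself is assumed, not shown.

```python
import numpy as np

# Ridge regression from (stand-in) CNN features X to fMRI responses Y,
# evaluated on held-out time points -- the skeleton of a voxel-wise
# encoding model. Toy sizes and synthetic data throughout.
rng = np.random.default_rng(0)
T, F, V = 300, 100, 50                  # time points, features, voxels
X = rng.standard_normal((T, F))         # would be CNN unit activations
Y = X @ (0.3 * rng.standard_normal((F, V))) + rng.standard_normal((T, V))

Xtr, Xte, Ytr, Yte = X[:250], X[250:], Y[:250], Y[250:]
lam = 10.0                              # ridge penalty, arbitrary here
W = np.linalg.solve(Xtr.T @ Xtr + lam*np.eye(F), Xtr.T @ Ytr)
pred = Xte @ W
# per-voxel prediction accuracy (Pearson r), a standard encoding metric
r = [np.corrcoef(pred[:, v], Yte[:, v])[0, 1] for v in range(V)]
print(f"median held-out voxel r = {np.median(r):.2f}")
```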
2401.13023
Antje Keppler
Peter Bajcsy, Sreenivas Bhattiprolu, Katy Boerner, Beth A Cimini, Lucy Collinson, Jan Ellenberg, Reto Fiolka, Maryellen Giger, Wojtek Goscinski, Matthew Hartley, Nathan Hotaling, Rick Horwitz, Florian Jug, Anna Kreshuk, Emma Lundberg, Aastha Mathur, Kedar Narayan, Shuichi Onami, Anne L. Plant, Fred Prior, Jason Swedlow, Adam Taylor, and Antje Keppler
Enabling Global Image Data Sharing in the Life Sciences
This manuscript (arXiv:2401.13023) is published with a closely related companion entitled, Harmonizing the Generation and Pre-publication Stewardship of FAIR Image Data, which can be found at the following link, arXiv:2401.13022
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-sa/4.0/
Coordinated collaboration is essential to realize the added value of and infrastructure requirements for global image data sharing in the life sciences. In this White Paper, we take a first step at presenting some of the most common use cases as well as critical/emerging use cases of (including the use of artificial intelligence for) biological and medical image data, which would benefit tremendously from better frameworks for sharing (including technical, resourcing, legal, and ethical aspects). In the second half of this paper, we paint an ideal world scenario for how global image data sharing could work and benefit all life sciences and beyond. As this is still a long way off, we conclude by suggesting several concrete measures directed toward our institutions, existing imaging communities and data initiatives, and national funders, as well as publishers. Our vision is that within the next ten years, most researchers in the world will be able to make their datasets openly available and use quality image data of interest to them for their research and benefit. This paper is published in parallel with a companion White Paper entitled Harmonizing the Generation and Pre-publication Stewardship of FAIR Image Data, which addresses challenges and opportunities related to producing well-documented and high-quality image data that is ready to be shared. The driving goal is to address remaining challenges and democratize access to everyday practices and tools for a spectrum of biomedical researchers, regardless of their expertise, access to resources, and geographical location.
[ { "created": "Tue, 23 Jan 2024 18:47:52 GMT", "version": "v1" }, { "created": "Wed, 31 Jan 2024 15:55:19 GMT", "version": "v2" }, { "created": "Fri, 2 Feb 2024 09:45:11 GMT", "version": "v3" }, { "created": "Fri, 9 Aug 2024 06:09:57 GMT", "version": "v4" } ]
2024-08-12
[ [ "Bajcsy", "Peter", "" ], [ "Bhattiprolu", "Sreenivas", "" ], [ "Boerner", "Katy", "" ], [ "Cimini", "Beth A", "" ], [ "Collinson", "Lucy", "" ], [ "Ellenberg", "Jan", "" ], [ "Fiolka", "Reto", "" ], [ ...
Coordinated collaboration is essential to realize the added value of and infrastructure requirements for global image data sharing in the life sciences. In this White Paper, we take a first step at presenting some of the most common use cases as well as critical/emerging use cases of (including the use of artificial intelligence for) biological and medical image data, which would benefit tremendously from better frameworks for sharing (including technical, resourcing, legal, and ethical aspects). In the second half of this paper, we paint an ideal world scenario for how global image data sharing could work and benefit all life sciences and beyond. As this is still a long way off, we conclude by suggesting several concrete measures directed toward our institutions, existing imaging communities and data initiatives, and national funders, as well as publishers. Our vision is that within the next ten years, most researchers in the world will be able to make their datasets openly available and use quality image data of interest to them for their research and benefit. This paper is published in parallel with a companion White Paper entitled Harmonizing the Generation and Pre-publication Stewardship of FAIR Image Data, which addresses challenges and opportunities related to producing well-documented and high-quality image data that is ready to be shared. The driving goal is to address remaining challenges and democratize access to everyday practices and tools for a spectrum of biomedical researchers, regardless of their expertise, access to resources, and geographical location.
2109.04105
Anna Paola Muntoni
Anna Paola Muntoni, Andrea Pagnani, Martin Weigt and Francesco Zamponi
adabmDCA: Adaptive Boltzmann machine learning for biological sequences
null
BMC Bioinformatics 22, 528 (2021)
10.1186/s12859-021-04441-9
null
q-bio.QM cond-mat.dis-nn q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Boltzmann machines are energy-based models that have been shown to provide an accurate statistical description of domains of evolutionarily related protein and RNA families. They are parametrized in terms of local biases accounting for residue conservation, and pairwise terms to model epistatic coevolution between residues. From the model parameters, it is possible to extract an accurate prediction of the three-dimensional contact map of the target domain. More recently, the accuracy of these models has also been assessed in terms of their ability to predict mutational effects and to generate in silico functional sequences. Our adaptive implementation of Boltzmann machine learning, adabmDCA, can be generally applied to both protein and RNA families and supports several learning set-ups, depending on the complexity of the input data and on the user requirements. The code is fully available at https://github.com/anna-pa-m/adabmDCA. As an example, we have performed the learning of three Boltzmann machines modeling the Kunitz and Beta-lactamase2 protein domains and the TPP-riboswitch RNA domain. The models learned by adabmDCA are comparable to those obtained by state-of-the-art techniques for this task, in terms of the quality of the inferred contact map as well as of the synthetically generated sequences. In addition, the code implements both equilibrium and out-of-equilibrium learning, which allows for accurate and lossless training when the equilibrium one is prohibitive in terms of computational time, and allows for pruning irrelevant parameters using an information-based criterion.
[ { "created": "Thu, 9 Sep 2021 08:58:25 GMT", "version": "v1" }, { "created": "Tue, 2 Nov 2021 08:24:05 GMT", "version": "v2" } ]
2021-11-03
[ [ "Muntoni", "Anna Paola", "" ], [ "Pagnani", "Andrea", "" ], [ "Weigt", "Martin", "" ], [ "Zamponi", "Francesco", "" ] ]
Boltzmann machines are energy-based models that have been shown to provide an accurate statistical description of domains of evolutionarily related protein and RNA families. They are parametrized in terms of local biases accounting for residue conservation, and pairwise terms to model epistatic coevolution between residues. From the model parameters, it is possible to extract an accurate prediction of the three-dimensional contact map of the target domain. More recently, the accuracy of these models has also been assessed in terms of their ability to predict mutational effects and to generate in silico functional sequences. Our adaptive implementation of Boltzmann machine learning, adabmDCA, can be generally applied to both protein and RNA families and supports several learning set-ups, depending on the complexity of the input data and on the user requirements. The code is fully available at https://github.com/anna-pa-m/adabmDCA. As an example, we have performed the learning of three Boltzmann machines modeling the Kunitz and Beta-lactamase2 protein domains and the TPP-riboswitch RNA domain. The models learned by adabmDCA are comparable to those obtained by state-of-the-art techniques for this task, in terms of the quality of the inferred contact map as well as of the synthetically generated sequences. In addition, the code implements both equilibrium and out-of-equilibrium learning, which allows for accurate and lossless training when the equilibrium one is prohibitive in terms of computational time, and allows for pruning irrelevant parameters using an information-based criterion.
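For readers unfamiliar with Boltzmann machine learning by moment matching, here is a toy sketch on binary (+/-1) variables. The real adabmDCA operates on 21-state Potts variables with adaptive Monte Carlo and many chains, so the single-chain update below is a caricature of the gradient, not the package's algorithm.

```python
import numpy as np

# Toy Boltzmann-machine moment matching on +/-1 spins: push the model's
# one- and two-site statistics toward the empirical ones with Gibbs
# sampling in the loop.
rng = np.random.default_rng(1)
N, M = 10, 500
data = rng.choice([-1, 1], size=(M, N))        # stand-in for an alignment
f1, f2 = data.mean(0), (data.T @ data) / M     # empirical moments

h, J, lr = np.zeros(N), np.zeros((N, N)), 0.05
s = rng.choice([-1, 1], size=N)
for _ in range(2000):
    for i in range(N):                          # one Gibbs sweep of the model
        field = h[i] + J[i] @ s
        s[i] = 1 if rng.random() < 1/(1 + np.exp(-2*field)) else -1
    h += lr * (f1 - s)                          # naive single-sample gradient
    J += lr * (f2 - np.outer(s, s))
    np.fill_diagonal(J, 0.0)
```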
q-bio/0507036
Tom Michoel
Tom Michoel, Yves Van de Peer
A helicoidal transfer matrix model for inhomogeneous DNA melting
v3: Matlab toolbox included with source file; article unchanged, 12 pages, 11 figures, RevTeX
Phys. Rev. E 73, 011908 (2006)
10.1103/PhysRevE.73.011908
null
q-bio.BM q-bio.QM
null
An inhomogeneous helicoidal nearest-neighbor model with continuous degrees of freedom is shown to predict the same DNA melting properties as traditional long-range Ising models, for free DNA molecules in solution, as well as superhelically stressed DNA with a fixed linking number constraint. Without loss of accuracy, the continuous degrees of freedom can be discretized using a minimal number of discretization points, yielding an effective transfer matrix model of modest dimension (d=36). The resulting algorithms to compute DNA melting profiles are both simple and efficient.
[ { "created": "Mon, 25 Jul 2005 14:14:14 GMT", "version": "v1" }, { "created": "Tue, 8 Nov 2005 10:06:10 GMT", "version": "v2" }, { "created": "Fri, 13 Oct 2006 08:05:41 GMT", "version": "v3" } ]
2007-05-23
[ [ "Michoel", "Tom", "" ], [ "Van de Peer", "Yves", "" ] ]
An inhomogeneous helicoidal nearest-neighbor model with continuous degrees of freedom is shown to predict the same DNA melting properties as traditional long-range Ising models, for free DNA molecules in solution, as well as superhelically stressed DNA with a fixed linking number constraint. Without loss of accuracy, the continuous degrees of freedom can be discretized using a minimal number of discretization points, yielding an effective transfer matrix model of modest dimension (d=36). The resulting algorithms to compute DNA melting profiles are both simple and efficient.
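The transfer-matrix mechanics can be illustrated with a two-state (closed/open) chain; the helicoidal model above follows the same pattern, but with a d = 36 matrix over discretized continuous degrees of freedom. All energies below are invented for illustration.

```python
import numpy as np

# Two-state chain: opening energy eps per open site plus a cooperative
# junction penalty whenever neighbours differ. The melting profile follows
# from the transfer matrix, with <N_open> obtained by differentiating lnZ.
def logZ(beta, n=200, eps=1.0, coop=2.0):
    site = np.exp(-beta * np.array([0.0, eps]))            # per-site weights
    junc = np.exp(-beta * coop * np.array([[0., 1.], [1., 0.]]))
    T = junc * site[None, :]                               # transfer matrix
    v, log_scale = site.copy(), 0.0
    for _ in range(n - 1):                                 # rescale to avoid overflow
        v = v @ T
        s = v.sum(); v /= s; log_scale += np.log(s)
    return log_scale + np.log(v.sum())

def open_fraction(beta, n=200, eps=1.0, d=1e-5):
    # <N_open>/n = -(1/(n*beta)) * d lnZ / d eps, by central differences
    return -(logZ(beta, n, eps + d) - logZ(beta, n, eps - d)) / (2*d*beta*n)

for beta in (2.0, 1.0, 0.5):                               # low to high temperature
    print(f"beta={beta}: open fraction = {open_fraction(beta):.3f}")
```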
2306.06407
Jacek Mi\c{e}kisz
Javad Mohamadichamgavi and Jacek Mi\c{e}kisz
Effect of the degree of an initial mutant in Moran processes in structured populations
10 pages, 11 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
We study the effect of the mutant's degree on the fixation probability and the extinction and fixation times in the Moran process on Erd\H{o}s-R\'{e}nyi and Barab\'{a}si-Albert graphs. We performed stochastic simulations and used mean-field-type approximations to obtain analytical formulas. We showed that the initial placement of a mutant has a significant impact on the fixation probability and extinction time, while it has no effect on the fixation time. In both types of graphs, an increase in the degree of an initial mutant results in a decreased fixation probability and a shorter time to extinction.
[ { "created": "Sat, 10 Jun 2023 10:42:13 GMT", "version": "v1" } ]
2023-06-13
[ [ "Mohamadichamgavi", "Javad", "" ], [ "Miȩkisz", "Jacek", "" ] ]
We study the effect of the mutant's degree on the fixation probability and the extinction and fixation times in the Moran process on Erd\H{o}s-R\'{e}nyi and Barab\'{a}si-Albert graphs. We performed stochastic simulations and used mean-field-type approximations to obtain analytical formulas. We showed that the initial placement of a mutant has a significant impact on the fixation probability and extinction time, while it has no effect on the fixation time. In both types of graphs, an increase in the degree of an initial mutant results in a decreased fixation probability and a shorter time to extinction.
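A hedged sketch of the kind of experiment described above: a Birth-Death Moran process on an Erdős–Rényi graph, comparing the estimated fixation probability for a mutant placed at the lowest- versus the highest-degree node. The update rule and all parameters are a plausible reading of the setup, not the paper's exact protocol.

```python
import random
import networkx as nx

# Birth-Death Moran process: a reproducer is drawn proportionally to
# fitness (mutants have fitness r); its offspring replaces a uniformly
# random neighbour. Fixation probability vs the initial mutant's degree.
def fixation_prob(G, start, r=1.1, trials=200, seed=0):
    rng, nodes, fixed = random.Random(seed), list(G.nodes), 0
    for _ in range(trials):
        mutant = {start}
        while 0 < len(mutant) < len(nodes):
            weights = [r if v in mutant else 1.0 for v in nodes]
            parent = rng.choices(nodes, weights=weights)[0]
            child = rng.choice(list(G.neighbors(parent)))
            (mutant.add if parent in mutant else mutant.discard)(child)
        fixed += len(mutant) == len(nodes)
    return fixed / trials

G = nx.erdos_renyi_graph(30, 0.2, seed=1)
while not nx.is_connected(G):                 # ensure every node has neighbours
    G = nx.erdos_renyi_graph(30, 0.2)
lo, hi = min(G.nodes, key=G.degree), max(G.nodes, key=G.degree)
print("low-degree start :", fixation_prob(G, lo))
print("high-degree start:", fixation_prob(G, hi))
```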
2005.05606
Gianni De Fabritiis
Alejandro Varela-Rial, Maciej Majewski, Alberto Cuzzolin, Gerard Mart\'inez-Rosell and Gianni De Fabritiis
SkeleDock: A Web Application for Scaffold Docking in PlayMolecule
null
null
null
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
SkeleDock is a scaffold docking algorithm which uses the structure of a protein-ligand complex as a template to model the binding mode of a chemically similar system. This algorithm was evaluated in the D3R Grand Challenge 4 pose prediction challenge, where it achieved competitive performance. Furthermore, we show that, if crystallized fragments of the target ligand are available, SkeleDock can outperform rDock docking software at predicting the binding mode. This article also addresses the capacity of this algorithm to model macrocycles and deal with scaffold hopping. SkeleDock can be accessed at https://playmolecule.org/SkeleDock/.
[ { "created": "Tue, 12 May 2020 08:24:52 GMT", "version": "v1" } ]
2020-05-13
[ [ "Varela-Rial", "Alejandro", "" ], [ "Majewski", "Maciej", "" ], [ "Cuzzolin", "Alberto", "" ], [ "Martínez-Rosell", "Gerard", "" ], [ "De Fabritiis", "Gianni", "" ] ]
SkeleDock is a scaffold docking algorithm which uses the structure of a protein-ligand complex as a template to model the binding mode of a chemically similar system. This algorithm was evaluated in the D3R Grand Challenge 4 pose prediction challenge, where it achieved competitive performance. Furthermore, we show that, if crystallized fragments of the target ligand are available, SkeleDock can outperform rDock docking software at predicting the binding mode. This article also addresses the capacity of this algorithm to model macrocycles and deal with scaffold hopping. SkeleDock can be accessed at https://playmolecule.org/SkeleDock/.
2209.13014
Yang Zhang
Yang Zhang, Gengmo Zhou, Zhewei Wei, Hongteng Xu
Predicting Protein-Ligand Binding Affinity via Joint Global-Local Interaction Modeling
null
null
null
null
q-bio.BM cs.AI cs.LG q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The prediction of protein-ligand binding affinity is of great significance for discovering lead compounds in drug research. Facing this challenging task, most existing prediction methods rely on the topological and/or spatial structure of molecules and the local interactions while ignoring the multi-level inter-molecular interactions between proteins and ligands, which often leads to sub-optimal performance. To solve this issue, we propose a novel global-local interaction (GLI) framework to predict protein-ligand binding affinity. In particular, our GLI framework considers the inter-molecular interactions between proteins and ligands, which involve not only the high-energy short-range interactions between close atoms but also the low-energy long-range interactions between non-bonded atoms. For each protein-ligand pair, our GLI embeds the long-range interactions globally and aggregates the short-range interactions locally. Such a joint global-local interaction modeling strategy helps to improve prediction accuracy, and the whole framework is compatible with various neural network-based modules. Experiments demonstrate that our GLI framework outperforms state-of-the-art methods with simple neural network architectures and moderate computational costs.
[ { "created": "Sun, 18 Sep 2022 10:17:05 GMT", "version": "v1" } ]
2022-09-28
[ [ "Zhang", "Yang", "" ], [ "Zhou", "Gengmo", "" ], [ "Wei", "Zhewei", "" ], [ "Xu", "Hongteng", "" ] ]
The prediction of protein-ligand binding affinity is of great significance for discovering lead compounds in drug research. Facing this challenging task, most existing prediction methods rely on the topological and/or spatial structure of molecules and the local interactions while ignoring the multi-level inter-molecular interactions between proteins and ligands, which often leads to sub-optimal performance. To solve this issue, we propose a novel global-local interaction (GLI) framework to predict protein-ligand binding affinity. In particular, our GLI framework considers the inter-molecular interactions between proteins and ligands, which involve not only the high-energy short-range interactions between close atoms but also the low-energy long-range interactions between non-bonded atoms. For each protein-ligand pair, our GLI embeds the long-range interactions globally and aggregates the short-range interactions locally. Such a joint global-local interaction modeling strategy helps to improve prediction accuracy, and the whole framework is compatible with various neural network-based modules. Experiments demonstrate that our GLI framework outperforms state-of-the-art methods with simple neural network architectures and moderate computational costs.
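To make the global-local split concrete, here is an illustrative numpy sketch: pair features are aggregated once inside a hard short-range cutoff and once with a soft distance decay over all pairs, then scored by a linear readout. Every feature, weight, and cutoff below is invented; this is not the GLI architecture itself.

```python
import numpy as np

# Pair features between protein and ligand atoms, aggregated two ways:
# a hard short-range cutoff ("local") and a soft distance decay over all
# pairs ("global"), combined by a linear readout.
rng = np.random.default_rng(0)
P, Lg, F = 40, 12, 8                        # protein atoms, ligand atoms, feature dim
xp, xl = 5*rng.normal(size=(P, 3)), rng.normal(size=(Lg, 3))
fp, fl = rng.normal(size=(P, F)), rng.normal(size=(Lg, F))

d = np.linalg.norm(xp[:, None, :] - xl[None, :, :], axis=-1)   # (P, Lg) distances
pair = fp[:, None, :] * fl[None, :, :]                         # (P, Lg, F) pair features

local = pair[d < 4.0].sum(axis=0)                              # close atoms only
global_ = (pair * np.exp(-d / 10.0)[..., None]).sum(axis=(0, 1))

w = rng.normal(size=2*F)                                       # stand-in learned readout
print(f"score (arbitrary units): {w @ np.concatenate([local, global_]):.3f}")
```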
q-bio/0506017
Bruce Hoeneisen
B. Hoeneisen and G. Trueba
Built to evolve
10 pages, 4 figures
null
null
null
q-bio.PE
null
We study the probabilities of evolution based on random mutations and natural selection. We conclude that evolution to multicellular eukaryotes, or even prokaryotes, is unlikely to be the result of only random mutations. Complex organisms have evolved through several mechanisms besides random mutations, namely DNA recombination, adaptive mutations, and acquisition of foreign DNA. We conclude that all living organisms, in addition to being self-organizing and reproducing (autopoietic), have built-in mechanisms of evolution, some of which respond in very specific ways to environmental stress.
[ { "created": "Wed, 15 Jun 2005 19:47:26 GMT", "version": "v1" } ]
2007-05-23
[ [ "Hoeneisen", "B.", "" ], [ "Trueba", "G.", "" ] ]
We study the probabilities of evolution based on random mutations and natural selection. We conclude that evolution to multicellular eukaryotes, or even prokaryotes, is unlikely to be the result of only random mutations. Complex organisms have evolved through several mechanisms besides random mutations, namely DNA recombination, adaptive mutations, and acquisition of foreign DNA. We conclude that all living organisms, in addition to being self-organizing and reproducing (autopoietic), have built-in mechanisms of evolution, some of which respond in very specific ways to environmental stress.
2301.05321
Philip Greulich
Cristina Parigini, Philip Greulich
Homeostatic regulation of renewing tissue cell populations via crowding control
null
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
To maintain renewing epithelial tissues in a healthy, homeostatic state, (stem) cell divisions and differentiation need to be tightly regulated. Mechanisms of homeostatic control often rely on crowding control: cells are able to sense the cell density in their environment (via various molecular and mechanosensing pathways) and respond by adjusting division, differentiation, and cell state transitions appropriately. Here we determine, via a mathematically rigorous framework, which general conditions for the crowding feedback regulation (i) must be minimally met, and (ii) are sufficient, to allow the maintenance of homeostasis in renewing tissues. We show that those conditions naturally allow for a degree of robustness toward disruption of regulation. Furthermore, intrinsic to this feedback regulation is that stem cell identity is established collectively by the cell population, not by individual cells, which implies the possibility of `quasi-dedifferentiation', in which cells committed to differentiation may reacquire stem cell properties upon depletion of the stem cell pool. These findings can guide future experimental campaigns to identify specific crowding feedback mechanisms.
[ { "created": "Thu, 12 Jan 2023 22:26:24 GMT", "version": "v1" } ]
2023-01-16
[ [ "Parigini", "Cristina", "" ], [ "Greulich", "Philip", "" ] ]
To maintain renewing epithelial tissues in a healthy, homeostatic state, (stem) cell divisions and differentiation need to be tightly regulated. Mechanisms of homeostatic control often rely on crowding control: cells are able to sense the cell density in their environment (via various molecular and mechanosensing pathways) and respond by adjusting division, differentiation, and cell state transitions appropriately. Here we determine, via a mathematically rigorous framework, which general conditions for the crowding feedback regulation (i) must be minimally met, and (ii) are sufficient, to allow the maintenance of homeostasis in renewing tissues. We show that those conditions naturally allow for a degree of robustness toward disruption of regulation. Furthermore, intrinsic to this feedback regulation is that stem cell identity is established collectively by the cell population, not by individual cells, which implies the possibility of `quasi-dedifferentiation', in which cells committed to differentiation may reacquire stem cell properties upon depletion of the stem cell pool. These findings can guide future experimental campaigns to identify specific crowding feedback mechanisms.
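The core feedback condition can be illustrated with a one-line ODE: if the per-cell net growth rate decreases through zero as density rises, the population settles at a stable homeostatic density and returns there after perturbation. The functional forms and constants below are assumed purely for illustration.

```python
import numpy as np

# Per-cell net growth rate that falls with density: divisions suppressed
# and differentiation/loss promoted by crowding. A zero crossing from
# above gives a stable homeostatic density.
def net_rate(rho, lam0=1.0, gam0=0.5, K=1.0):
    return lam0/(1 + rho/K) - gam0*(1 + rho/K)

rho, dt = 0.05, 0.01
for _ in range(5000):                       # forward-Euler relaxation
    rho += dt * rho * net_rate(rho)
print(f"homeostatic density ~ {rho:.3f}")   # analytically sqrt(2)-1 here

rho *= 1.5                                  # perturb; the feedback restores it
for _ in range(5000):
    rho += dt * rho * net_rate(rho)
print(f"after perturbation  ~ {rho:.3f}")
```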
1202.2491
Henry Tuckwell
Henry C. Tuckwell, J\"urgen Jost
Analysis of inverse stochastic resonance and the long-term firing of Hodgkin-Huxley neurons with Gaussian white noise
27 pages, 16 figures
null
10.1016/j.physa.2012.06.019
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In previous articles we have investigated the firing properties of the standard Hodgkin-Huxley (HH) systems of ordinary and partial differential equations in response to input currents composed of a drift (mean) and additive Gaussian white noise. For certain values of the mean current, as the noise amplitude increased from zero, the firing rate exhibited a minimum; this phenomenon was called inverse stochastic resonance (ISR). Here we analyse the underlying transitions from a stable equilibrium point to the limit cycle and vice versa. Focusing on the case of a mean input current density $\mu=6.8$, at which repetitive firing occurs and ISR had been found to be pronounced, some of the properties of the corresponding stable equilibrium point are found. A linearized approximation around this point has oscillatory solutions from whose maxima spikes tend to occur. A one-dimensional diffusion is also constructed for small noise, based on the correlations between the pairs of HH variables and the small magnitudes of the fluctuations in two of them. Properties of the basin of attraction of the limit cycle (spike) are investigated heuristically, as is the nature of the distribution of spikes at very small noise, corresponding to trajectories which never enter the basin of attraction of the equilibrium point. Long-term trials of duration 500000 ms are carried out for values of the noise parameter $\sigma$ from 0 to 2.0, with the results appearing in Section 3. The graph of mean spike count versus $\sigma$ is divided into 4 regions $R_1,...,R_4$, where $R_3$ contains the minimum associated with ISR.
[ { "created": "Sun, 12 Feb 2012 06:22:12 GMT", "version": "v1" } ]
2015-06-04
[ [ "Tuckwell", "Henry C.", "" ], [ "Jost", "Jürgen", "" ] ]
In previous articles we have investigated the firing properties of the standard Hodgkin-Huxley (HH) systems of ordinary and partial differential equations in response to input currents composed of a drift (mean) and additive Gaussian white noise. For certain values of the mean current, as the noise amplitude increased from zero, the firing rate exhibited a minimum; this phenomenon was called inverse stochastic resonance (ISR). Here we analyse the underlying transitions from a stable equilibrium point to the limit cycle and vice versa. Focusing on the case of a mean input current density $\mu=6.8$, at which repetitive firing occurs and ISR had been found to be pronounced, some of the properties of the corresponding stable equilibrium point are found. A linearized approximation around this point has oscillatory solutions from whose maxima spikes tend to occur. A one-dimensional diffusion is also constructed for small noise, based on the correlations between the pairs of HH variables and the small magnitudes of the fluctuations in two of them. Properties of the basin of attraction of the limit cycle (spike) are investigated heuristically, as is the nature of the distribution of spikes at very small noise, corresponding to trajectories which never enter the basin of attraction of the equilibrium point. Long-term trials of duration 500000 ms are carried out for values of the noise parameter $\sigma$ from 0 to 2.0, with the results appearing in Section 3. The graph of mean spike count versus $\sigma$ is divided into 4 regions $R_1,...,R_4$, where $R_3$ contains the minimum associated with ISR.
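A minimal Euler-Maruyama sketch of the setting above: the standard point HH model driven by a mean current plus white noise, with spike counts scanned over the noise amplitude sigma. Standard textbook HH parameters are used; this is illustrative, not the paper's code, and a dip in counts at intermediate sigma is the ISR signature one would look for.

```python
import numpy as np

# Standard HH point model, Euler-Maruyama with drive mu + sigma*white noise.
# Spikes = upward crossings of 0 mV (one seed shown; real studies average
# many long trials).
def an(V): return 0.01*(V+55)/(1-np.exp(-(V+55)/10))
def bn(V): return 0.125*np.exp(-(V+65)/80)
def am(V): return 0.1*(V+40)/(1-np.exp(-(V+40)/10))
def bm(V): return 4*np.exp(-(V+65)/18)
def ah(V): return 0.07*np.exp(-(V+65)/20)
def bh(V): return 1/(1+np.exp(-(V+35)/10))

def spike_count(mu, sigma, T=2000.0, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    V, n, m, h = -65.0, 0.32, 0.05, 0.6
    count, above = 0, False
    for _ in range(int(T/dt)):
        I_ion = 120*m**3*h*(V-50) + 36*n**4*(V+77) + 0.3*(V+54.387)
        V += dt*(mu - I_ion) + sigma*np.sqrt(dt)*rng.standard_normal()
        n += dt*(an(V)*(1-n) - bn(V)*n)
        m += dt*(am(V)*(1-m) - bm(V)*m)
        h += dt*(ah(V)*(1-h) - bh(V)*h)
        if V > 0 and not above: count += 1
        above = V > 0
    return count

for sigma in (0.0, 0.3, 1.0, 2.0):
    print(f"sigma={sigma}: spikes in 2 s = {spike_count(6.8, sigma)}")
```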
1510.00386
Kamran Kaveh
Venkata. S. K. Manem, Kamran Kaveh, Mohammad Kohandel, Siv Sivaloganathan
Modelling Invasion Dynamics with Spatial Random-Fitness due to Microenvironment
23 pages, 11 figures. PLoS One (2015)
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Numerous experimental studies have demonstrated that the microenvironment is a key regulator influencing the proliferative and migratory potentials of species. Spatial and temporal disturbances lead to adverse and hazardous microenvironments for cellular systems, which is reflected in the phenotypic heterogeneity within the system. In this paper, we study the effect of the microenvironment on the invasive capability of species, or mutants, on structured grids under the influence of site-dependent random proliferation in addition to a migration potential. We discuss both continuous and discrete fitness distributions. Our results suggest that the invasion probability is negatively correlated with the variance of the fitness distribution of mutants (for both advantageous and neutral mutants) in the absence of migration of both types of cells. A similar behaviour is observed even in the presence of a random fitness distribution of host cells in a system with a neutral fitness rate. In the case of a bimodal distribution, we observe zero invasion probability until the system reaches a (specific) proportion of advantageous phenotypes. Also, we find that the migration potential amplifies the invasion probability as the variance of the fitness of mutants increases in the system, the exact opposite of what occurs in the absence of migration. Our computational framework captures harsh microenvironmental conditions through quenched random fitness distributions and migration of cells, and our analysis shows that these play an important role in the invasion dynamics of several biological systems such as bacterial micro-habitats, epithelial dysplasia, and metastasis. We believe that our results may lead to more experimental studies, which can in turn provide further insights into the role and impact of heterogeneous environments on invasion dynamics.
[ { "created": "Thu, 24 Sep 2015 23:49:53 GMT", "version": "v1" }, { "created": "Mon, 5 Oct 2015 22:23:54 GMT", "version": "v2" } ]
2015-10-07
[ [ "Manem", "Venkata. S. K.", "" ], [ "Kaveh", "Kamran", "" ], [ "Kohandel", "Mohammad", "" ], [ "Sivaloganathan", "Siv", "" ] ]
Numerous experimental studies have demonstrated that the microenvironment is a key regulator influencing the proliferative and migratory potentials of species. Spatial and temporal disturbances lead to adverse and hazardous microenvironments for cellular systems, which is reflected in the phenotypic heterogeneity within the system. In this paper, we study the effect of the microenvironment on the invasive capability of species, or mutants, on structured grids under the influence of site-dependent random proliferation in addition to a migration potential. We discuss both continuous and discrete fitness distributions. Our results suggest that the invasion probability is negatively correlated with the variance of the fitness distribution of mutants (for both advantageous and neutral mutants) in the absence of migration of both types of cells. A similar behaviour is observed even in the presence of a random fitness distribution of host cells in a system with a neutral fitness rate. In the case of a bimodal distribution, we observe zero invasion probability until the system reaches a (specific) proportion of advantageous phenotypes. Also, we find that the migration potential amplifies the invasion probability as the variance of the fitness of mutants increases in the system, the exact opposite of what occurs in the absence of migration. Our computational framework captures harsh microenvironmental conditions through quenched random fitness distributions and migration of cells, and our analysis shows that these play an important role in the invasion dynamics of several biological systems such as bacterial micro-habitats, epithelial dysplasia, and metastasis. We believe that our results may lead to more experimental studies, which can in turn provide further insights into the role and impact of heterogeneous environments on invasion dynamics.
1710.00349
Nathalie Q. Balaban
Noga Mosheiff, Bruno M.C. Martins, Sivan Pearl-Mizrahi, Alexander Gruenberger, Stefan Helfrich, Irina Mihalcescu, Dietrich Kohlheyer, James C.W. Locke, Leon Glass and Nathalie Q. Balaban
Correlations of single-cell division times with and without periodic forcing
null
Phys. Rev. X 8, 021035 (2018)
10.1103/PhysRevX.8.021035
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Periodic forcing of nonlinear oscillators leads to a large number of dynamic behaviors. The coupling of the cell cycle to the circadian clock provides a biological realization of such forcing. Using high-throughput single-cell microscopy, we have studied the correlations between cell-cycle durations in discrete lineages of several different organisms, including those with known coupling to a circadian clock and those without. The correlations between cell-cycle durations in discrete lineages observed in the organisms with a circadian clock cannot be explained by a simple statistical model but are consistent with the predictions of a biologically plausible two-dimensional nonlinear map. Surprisingly, the nonlinear map is equivalent to a classic nonlinear map called the fattened Arnold map. The model predicts that circadian coupling may increase cell-to-cell variability in a clonal population of cells. In agreement with this prediction, deletion of the circadian clock reduces variability. Our results show that simple correlations can identify systems under periodic forcing and that studies of nonlinear coupling of biological oscillators provide insight into basic cellular processes of growth.
[ { "created": "Sun, 1 Oct 2017 13:49:30 GMT", "version": "v1" } ]
2018-05-16
[ [ "Mosheiff", "Noga", "" ], [ "Martins", "Bruno M. C.", "" ], [ "Pearl-Mizrahi", "Sivan", "" ], [ "Gruenberger", "Alexander", "" ], [ "Helfrich", "Stefan", "" ], [ "Mihalcescu", "Irina", "" ], [ "Kohlheyer", "Die...
Periodic forcing of nonlinear oscillators leads to a large number of dynamic behaviors. The coupling of the cell cycle to the circadian clock provides a biological realization of such forcing. Using high-throughput single-cell microscopy, we have studied the correlations between cell-cycle durations in discrete lineages of several different organisms, including those with known coupling to a circadian clock and those without. The correlations between cell-cycle durations in discrete lineages observed in the organisms with a circadian clock cannot be explained by a simple statistical model but are consistent with the predictions of a biologically plausible two-dimensional nonlinear map. Surprisingly, the nonlinear map is equivalent to a classic nonlinear map called the fattened Arnold map. The model predicts that circadian coupling may increase cell-to-cell variability in a clonal population of cells. In agreement with this prediction, deletion of the circadian clock reduces variability. Our results show that simple correlations can identify systems under periodic forcing and that studies of nonlinear coupling of biological oscillators provide insight into basic cellular processes of growth.
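The forcing idea can be previewed with the 1D sine circle map, the skeleton of the fattened Arnold map named above: each iterate is one division, and the phase advance plays the role of that cycle's duration. The parameters, and the small phase noise added to avoid trivial mode locking, are illustrative.

```python
import numpy as np

# Sine circle map with weak phase noise: theta is the clock phase at
# successive divisions; `step` (the phase advance) stands in for that
# cycle's duration. K > 0 couples the cycle to the clock; K = 0 is the
# uncoupled control.
def cycle_durations(omega=0.6, K=0.8, noise=0.03, n=4000, seed=0):
    rng, theta, out = np.random.default_rng(seed), 0.1, []
    for _ in range(n):
        step = omega - K/(2*np.pi)*np.sin(2*np.pi*theta) + noise*rng.standard_normal()
        out.append(step)
        theta = (theta + step) % 1.0
    return np.array(out)

d = cycle_durations()
print("lag-1 correlation (coupled)  :", round(np.corrcoef(d[:-1], d[1:])[0, 1], 2))
d0 = cycle_durations(K=0.0)
print("lag-1 correlation (uncoupled):", round(np.corrcoef(d0[:-1], d0[1:])[0, 1], 2))
```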
2310.19614
Julian Rossbroich
Julian Rossbroich, Friedemann Zenke
Dis-inhibitory neuronal circuits can control the sign of synaptic plasticity
Accepted at NeurIPS 2023; fixed error in Figure S2
null
null
null
q-bio.NC cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
How neuronal circuits achieve credit assignment remains a central unsolved question in systems neuroscience. Various studies have suggested plausible solutions for back-propagating error signals through multi-layer networks. These purely functionally motivated models assume distinct neuronal compartments to represent local error signals that determine the sign of synaptic plasticity. However, this explicit error modulation is inconsistent with phenomenological plasticity models in which the sign depends primarily on postsynaptic activity. Here we show how a plausible microcircuit model and Hebbian learning rule derived within an adaptive control theory framework can resolve this discrepancy. Assuming errors are encoded in top-down dis-inhibitory synaptic afferents, we show that error-modulated learning emerges naturally at the circuit level when recurrent inhibition explicitly influences Hebbian plasticity. The same learning rule accounts for experimentally observed plasticity in the absence of inhibition and performs comparably to back-propagation of error (BP) on several non-linearly separable benchmarks. Our findings bridge the gap between functional and experimentally observed plasticity rules and make concrete predictions on inhibitory modulation of excitatory plasticity.
[ { "created": "Mon, 30 Oct 2023 15:06:19 GMT", "version": "v1" }, { "created": "Mon, 11 Dec 2023 18:04:14 GMT", "version": "v2" } ]
2023-12-12
[ [ "Rossbroich", "Julian", "" ], [ "Zenke", "Friedemann", "" ] ]
How neuronal circuits achieve credit assignment remains a central unsolved question in systems neuroscience. Various studies have suggested plausible solutions for back-propagating error signals through multi-layer networks. These purely functionally motivated models assume distinct neuronal compartments to represent local error signals that determine the sign of synaptic plasticity. However, this explicit error modulation is inconsistent with phenomenological plasticity models in which the sign depends primarily on postsynaptic activity. Here we show how a plausible microcircuit model and Hebbian learning rule derived within an adaptive control theory framework can resolve this discrepancy. Assuming errors are encoded in top-down dis-inhibitory synaptic afferents, we show that error-modulated learning emerges naturally at the circuit level when recurrent inhibition explicitly influences Hebbian plasticity. The same learning rule accounts for experimentally observed plasticity in the absence of inhibition and performs comparably to back-propagation of error (BP) on several non-linearly separable benchmarks. Our findings bridge the gap between functional and experimentally observed plasticity rules and make concrete predictions on inhibitory modulation of excitatory plasticity.
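As a caricature of "inhibition controls the sign of plasticity", consider a Hebbian update whose sign flips with the level of recurrent inhibition, so that top-down dis-inhibition gates potentiation. The toy rule below is invented for illustration and is not the learning rule derived in the paper.

```python
import numpy as np

# Toy rule: the postsynaptic drive is excitation minus recurrent
# inhibition; the weight change is Hebbian in the inhibited activity but
# depressing in the raw excitatory drive, so lowering inhibition
# (dis-inhibition) flips the net sign from depression to potentiation.
rng = np.random.default_rng(0)
x = rng.random(20)                     # presynaptic activity
w = 0.1 * rng.random(20)               # excitatory weights
eta = 0.05
for inhibition in (1.5, 0.2):          # strong vs dis-inhibited state
    exc = w @ x
    post = max(exc - inhibition, 0.0)  # rectified somatic activity
    dw = eta * x * (post - 0.5*exc)    # invented plasticity caricature
    print(f"inhibition={inhibition}: mean dw = {dw.mean():+.4f}")
```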
1410.6153
Benjamin Ivorra Prof.
Benjamin Ivorra, Di\`ene Ngom, \'Angel Manuel Ramos
Be-CoDiS: A mathematical model to predict the risk of human diseases spread between countries. Validation and application to the 2014-15 Ebola Virus Disease epidemic
34 pages; Version 5; Work in Progress
null
null
null
q-bio.PE cs.CE math.DS physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ebola virus disease is a lethal human and primate disease that currently requires particular attention from the international health authorities, due to important outbreaks in some Western African countries and isolated cases in the United Kingdom, the USA and Spain. Given the urgency of this situation, there is a need for decision tools, such as mathematical models, to assist the authorities in focusing their efforts on the important factors needed to eradicate Ebola. In this work, we propose a novel deterministic spatial-temporal model, called Be-CoDiS (Between-Countries Disease Spread), to study the evolution of human diseases within and between countries. The main interesting characteristics of Be-CoDiS are the consideration of the movement of people between countries, the effects of control measures, and the use of time-dependent coefficients adapted to each country. First, we focus on the mathematical formulation of each component of the model and explain how its parameters and inputs are obtained. Then, in order to validate our approach, we consider two numerical experiments regarding the 2014-15 Ebola epidemic. The first one studies the ability of the model to predict the EVD evolution between countries, starting from the index cases in Guinea in December 2013. The second one consists of forecasting the evolution of the epidemic using recent data. The results obtained with Be-CoDiS are compared to real data and to the outputs of other models found in the literature. Finally, a brief parameter sensitivity analysis is done. A free Matlab version of Be-CoDiS is available at: http://www.mat.ucm.es/momat/software.htm
[ { "created": "Wed, 22 Oct 2014 19:52:56 GMT", "version": "v1" }, { "created": "Sat, 8 Nov 2014 22:06:05 GMT", "version": "v2" }, { "created": "Thu, 20 Nov 2014 12:58:15 GMT", "version": "v3" }, { "created": "Mon, 15 Dec 2014 16:33:56 GMT", "version": "v4" }, { "cr...
2015-05-13
[ [ "Ivorra", "Benjamin", "" ], [ "Ngom", "Diène", "" ], [ "Ramos", "Ángel Manuel", "" ] ]
Ebola virus disease is a lethal human and primate disease that currently requires particular attention from the international health authorities, due to important outbreaks in some Western African countries and isolated cases in the United Kingdom, the USA and Spain. Given the urgency of this situation, there is a need for decision tools, such as mathematical models, to assist the authorities in focusing their efforts on the important factors needed to eradicate Ebola. In this work, we propose a novel deterministic spatial-temporal model, called Be-CoDiS (Between-Countries Disease Spread), to study the evolution of human diseases within and between countries. The main interesting characteristics of Be-CoDiS are the consideration of the movement of people between countries, the effects of control measures, and the use of time-dependent coefficients adapted to each country. First, we focus on the mathematical formulation of each component of the model and explain how its parameters and inputs are obtained. Then, in order to validate our approach, we consider two numerical experiments regarding the 2014-15 Ebola epidemic. The first one studies the ability of the model to predict the EVD evolution between countries, starting from the index cases in Guinea in December 2013. The second one consists of forecasting the evolution of the epidemic using recent data. The results obtained with Be-CoDiS are compared to real data and to the outputs of other models found in the literature. Finally, a brief parameter sensitivity analysis is done. A free Matlab version of Be-CoDiS is available at: http://www.mat.ucm.es/momat/software.htm
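The between-country coupling can be previewed with a toy two-country SIR metapopulation: within-country dynamics plus a constant symmetric travel rate. Be-CoDiS itself has more compartments and time-dependent coefficients; all rates below are invented.

```python
import numpy as np

# Two countries, within-country SIR plus symmetric movement of people.
# Country 1 is seeded; country 2 is infected only via travel.
beta, gamma, move = 0.3, 0.1, 0.01      # infection, recovery, travel (1/day)
S = np.array([0.999, 1.0]); I = np.array([0.001, 0.0]); R = np.zeros(2)
dt = 0.1
for _ in range(int(400/dt)):
    new_inf = beta * S * I
    dS = -new_inf + move*(S[::-1] - S)          # [::-1] swaps the two countries
    dI =  new_inf - gamma*I + move*(I[::-1] - I)
    dR =  gamma*I + move*(R[::-1] - R)
    S, I, R = S + dt*dS, I + dt*dI, R + dt*dR
print("final attack rate per country:", np.round(R, 3))
```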
1806.10764
John Baez
John C. Baez, Blake S. Pollard, Jonathan Lorand and Maru Sarazola
Biochemical Coupling Through Emergent Conservation Laws
13 pages
null
null
null
q-bio.MN physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bazhin has analyzed ATP coupling in terms of quasiequilibrium states where fast reactions have reached an approximate steady state while slow reactions have not yet reached equilibrium. After an expository introduction to the relevant aspects of reaction network theory, we review his work and explain the role of emergent conserved quantities in coupling. These are quantities, left unchanged by fast reactions, whose conservation forces exergonic processes such as ATP hydrolysis to drive desired endergonic processes.
[ { "created": "Thu, 28 Jun 2018 04:16:03 GMT", "version": "v1" } ]
2018-06-29
[ [ "Baez", "John C.", "" ], [ "Pollard", "Blake S.", "" ], [ "Lorand", "Jonathan", "" ], [ "Sarazola", "Maru", "" ] ]
Bazhin has analyzed ATP coupling in terms of quasiequilibrium states where fast reactions have reached an approximate steady state while slow reactions have not yet reached equilibrium. After an expository introduction to the relevant aspects of reaction network theory, we review his work and explain the role of emergent conserved quantities in coupling. These are quantities, left unchanged by fast reactions, whose conservation forces exergonic processes such as ATP hydrolysis to drive desired endergonic processes.
1006.4911
Fabio Pichierri
Fabio Pichierri
A quantum mechanical analysis of the light-harvesting complex 2 from purple photosynthetic bacteria. Insights into the electrostatic effects of transmembrane helices
14 pages, 7 figures
BioSystems 103 (2011) 132-137
10.1016/j.biosystems.2010.08.006
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We perform a quantum mechanical study of the peptides that are part of the LH2 complex from Rhodopseudomonas acidophila, a non-sulfur purple bacterium that has the ability to produce chemical energy from photosynthesis. The electronic structure calculations indicate that the transmembrane helices of these peptides are characterized by dipole moments with a magnitude of ~150 D. When the full nonamer assembly made of eighteen peptides is considered, a macrodipole of magnitude 704 D is built up from the vector sum of each monomer's dipole. The macrodipole is oriented normal to the membrane plane, with its positive tip toward the cytoplasm, thereby indicating that the electronic charge of the protein scaffold is polarized toward the periplasm. The results obtained here suggest that the asymmetric charge distribution of the protein scaffold contributes an anisotropic electrostatic environment which differentiates the absorption properties of the bacteriochlorophyll pigments, B800 and B850, embedded in the LH2 complex.
[ { "created": "Fri, 25 Jun 2010 05:24:08 GMT", "version": "v1" } ]
2011-02-03
[ [ "Pichierri", "Fabio", "" ] ]
We perform a quantum mechanical study of the peptides that are part of the LH2 complex from Rhodopseudomonas acidophila, a non-sulfur purple bacterium that has the ability to produce chemical energy from photosynthesis. The electronic structure calculations indicate that the transmembrane helices of these peptides are characterized by dipole moments with a magnitude of ~150 D. When the full nonamer assembly made of eighteen peptides is considered, a macrodipole of magnitude 704 D is built up from the vector sum of each monomer's dipole. The macrodipole is oriented normal to the membrane plane, with its positive tip toward the cytoplasm, thereby indicating that the electronic charge of the protein scaffold is polarized toward the periplasm. The results obtained here suggest that the asymmetric charge distribution of the protein scaffold contributes an anisotropic electrostatic environment which differentiates the absorption properties of the bacteriochlorophyll pigments, B800 and B850, embedded in the LH2 complex.
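The macrodipole figure is a plain vector sum, which can be sanity-checked with idealized geometry: eighteen ~150 D helix dipoles arranged symmetrically around a ring, each tilted from the membrane normal. The tilt below is back-solved from the quoted 704 D for illustration; it is not a measured value.

```python
import numpy as np

# Eighteen dipoles of magnitude d on a ring, tilted by `tilt` from the
# membrane normal; in-plane components cancel by symmetry, normal
# components add.
n, d = 18, 150.0                                  # helices, dipole (debye)
tilt = np.arccos(704.0 / (n * d))                 # radians from membrane normal
angles = np.linspace(0, 2*np.pi, n, endpoint=False)
dipoles = np.stack([d*np.sin(tilt)*np.cos(angles),
                    d*np.sin(tilt)*np.sin(angles),
                    d*np.cos(tilt)*np.ones(n)], axis=1)
total = dipoles.sum(axis=0)
print(f"macrodipole = {np.linalg.norm(total):.0f} D along the membrane normal")
```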
1206.5092
Andrea De Martino
Daniele De Martino, Matteo Figliuzzi, Andrea De Martino, Enzo Marinari
A scalable algorithm to explore the Gibbs energy landscape of genome-scale metabolic networks
11 pages, 6 figures, 1 table; for associated supporting material see http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1002562
PLoS Comput Biol 8(6): e1002562 (2012)
10.1371/journal.pcbi.1002562
null
q-bio.MN cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The integration of various types of genomic data into predictive models of biological networks is one of the main challenges currently faced by computational biology. Constraint-based models in particular play a key role in the attempt to obtain a quantitative understanding of cellular metabolism at genome scale. In essence, their goal is to frame the metabolic capabilities of an organism based on minimal assumptions that describe the steady states of the underlying reaction network via suitable stoichiometric constraints, specifically mass balance and energy balance (i.e. thermodynamic feasibility). The implementation of these requirements to generate viable configurations of reaction fluxes and/or to test given flux profiles for thermodynamic feasibility can however prove to be computationally intensive. We propose here a fast and scalable stoichiometry-based method to explore the Gibbs energy landscape of a biochemical network at steady state. The method is applied to the problem of reconstructing the Gibbs energy landscape underlying metabolic activity in the human red blood cell, and to that of identifying and removing thermodynamically infeasible reaction cycles in the Escherichia coli metabolic network (iAF1260). In the former case, we produce consistent predictions for chemical potentials (or log-concentrations) of intracellular metabolites; in the latter, we identify a restricted set of loops (23 in total) in the periplasmic and cytoplasmic core as the origin of thermodynamic infeasibility in a large sample ($10^6$) of flux configurations generated randomly and compatibly with the prior information available on reaction reversibility.
[ { "created": "Fri, 22 Jun 2012 09:47:44 GMT", "version": "v1" } ]
2012-06-25
[ [ "De Martino", "Daniele", "" ], [ "Figliuzzi", "Matteo", "" ], [ "De Martino", "Andrea", "" ], [ "Marinari", "Enzo", "" ] ]
The integration of various types of genomic data into predictive models of biological networks is one of the main challenges currently faced by computational biology. Constraint-based models in particular play a key role in the attempt to obtain a quantitative understanding of cellular metabolism at genome scale. In essence, their goal is to frame the metabolic capabilities of an organism based on minimal assumptions that describe the steady states of the underlying reaction network via suitable stoichiometric constraints, specifically mass balance and energy balance (i.e. thermodynamic feasibility). The implementation of these requirements to generate viable configurations of reaction fluxes and/or to test given flux profiles for thermodynamic feasibility can however prove to be computationally intensive. We propose here a fast and scalable stoichiometry-based method to explore the Gibbs energy landscape of a biochemical network at steady state. The method is applied to the problem of reconstructing the Gibbs energy landscape underlying metabolic activity in the human red blood cell, and to that of identifying and removing thermodynamically infeasible reaction cycles in the Escherichia coli metabolic network (iAF1260). In the former case, we produce consistent predictions for chemical potentials (or log-concentrations) of intracellular metabolites; in the latter, we identify a restricted set of loops (23 in total) in the periplasmic and cytoplasmic core as the origin of thermodynamic infeasibility in a large sample ($10^6$) of flux configurations generated randomly and compatibly with the prior information available on reaction reversibility.
2101.02097
Victor Riquelme
Pedro Gajardo and Victor Riquelme
Inmate population models with nonhomogeneous sentence lengths and their effects in an epidemiological model
24 pages, 3 figures, 7 tables
null
null
null
q-bio.PE math.DS q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we develop an inmate population model with a sentence length structure. The sentence length structure of new inmates represents the problem data and can usually be estimated from the histograms of the conviction times sentenced in a given population. We obtain a transport equation, typically known as the McKendrick equation, the homogeneous version of which is included in population models with age structures. Using this equation, we compute the inmate population and entry/exit rates in equilibrium, which are the values to consider in the design of a penitentiary system. With data from the Chilean penitentiary system, we illustrate how to perform these computations. Classifying the inmate population into two groups of sentence lengths (short and long), we incorporate the SIS (susceptible-infected-susceptible) epidemiological model, which considers the entry of infective individuals. We show that, for prevalences of new inmates below a certain threshold, a failure to consider the structure of the sentence lengths -- as is common in epidemiological models developed for inmate populations -- induces an underestimation of the steady-state prevalence in the prison population. The threshold depends on the basic reproduction number associated with the nonstructured SIS model with no entry of new inmates. We illustrate our findings with analytical and numerical examples for different distributions of sentence lengths.
[ { "created": "Wed, 6 Jan 2021 15:40:15 GMT", "version": "v1" }, { "created": "Fri, 29 Jan 2021 15:26:01 GMT", "version": "v2" } ]
2021-02-01
[ [ "Gajardo", "Pedro", "" ], [ "Riquelme", "Victor", "" ] ]
In this work, we develop an inmate population model with a sentence length structure. The sentence length structure of new inmates represents the problem data and can usually be estimated from the histograms of the conviction times sentenced in a given population. We obtain a transport equation, typically known as the McKendrick equation, the homogeneous version of which is included in population models with age structures. Using this equation, we compute the inmate population and entry/exit rates in equilibrium, which are the values to consider in the design of a penitentiary system. With data from the Chilean penitentiary system, we illustrate how to perform these computations. Classifying the inmate population into two groups of sentence lengths (short and long), we incorporate the SIS (susceptible-infected-susceptible) epidemiological model, which considers the entry of infective individuals. We show that, for prevalences of new inmates below a certain threshold, a failure to consider the structure of the sentence lengths -- as is common in epidemiological models developed for inmate populations -- induces an underestimation of the steady-state prevalence in the prison population. The threshold depends on the basic reproduction number associated with the nonstructured SIS model with no entry of new inmates. We illustrate our findings with analytical and numerical examples for different distributions of sentence lengths.
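A hedged sketch of the structured-versus-unstructured comparison: an SIS model with entry of infectives, run once with two sentence-length groups (different exit rates) and once with the single average exit rate that yields the same total population. All parameter values are invented for illustration.

```python
import numpy as np

# SIS with a constant inflow of inmates, a fraction p_in of whom arrive
# infected; exit rates mu differ by sentence-length group. The same
# routine run with the average exit rate gives the unstructured model.
beta, gamma, p_in, entry = 0.4, 0.1, 0.02, 1.0    # rates per month, normalized entry
mu = np.array([1/6, 1/60])                        # exits: 6- vs 60-month sentences
w = np.array([0.7, 0.3])                          # entry shares by group

def steady_prevalence(mu, w, steps=100000, dt=0.01):
    N = entry * w / mu                            # group sizes at equilibrium
    I = np.zeros_like(N)
    for _ in range(steps):                        # relax to the endemic state
        force = beta * I.sum() / N.sum()
        I = I + dt * (entry*w*p_in + force*(N - I) - (gamma + mu)*I)
    return I.sum() / N.sum()

print("structured  :", round(steady_prevalence(mu, w), 3))
mu_bar = entry / (entry * w / mu).sum()           # average exit, same total population
print("unstructured:", round(steady_prevalence(np.array([mu_bar]), np.array([1.0])), 3))
```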
1810.04069
Ines Abdeljaoued Tej PhD
Ines Abdeljaoued-Tej and Alia BenKahla and Ghassen Haddad and Annick Valibouze
A linear algorithm for computing Polynomial Dynamical System
11 pages, 3 figures
null
null
null
q-bio.MN math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational biology helps us to understand all the processes in organisms, from the interaction of molecules to the complex functions of whole organs. Therefore, there is a need for mathematical methods and models that deliver logical explanations in a reasonable time. For the last few years there has been a growing interest in biological theory connected to finite fields: the algebraic modeling tools used up to now are based on Gr\"obner bases or Boolean groups. Let $n$ variables represent gene products, each changing over time among $p$ values. A polynomial dynamical system (PDS) is a function with several components, each of which is a polynomial in $n$ variables with coefficients in the finite field $Z/pZ$, that models the evolution of gene products. We propose herein a method using algebraic separators, which are special polynomials abundantly studied in effective Galois theory. This approach avoids heavy calculations and provides a first polynomial model in linear time.
[ { "created": "Mon, 8 Oct 2018 11:57:28 GMT", "version": "v1" } ]
2018-10-10
[ [ "Abdeljaoued-Tej", "Ines", "" ], [ "BenKahla", "Alia", "" ], [ "Haddad", "Ghassen", "" ], [ "Valibouze", "Annick", "" ] ]
Computational biology helps us to understand all the processes in organisms, from the interaction of molecules to the complex functions of whole organs. Therefore, there is a need for mathematical methods and models that deliver logical explanations in a reasonable time. For the last few years there has been a growing interest in biological theory connected to finite fields: the algebraic modeling tools used up to now are based on Gr\"obner bases or Boolean groups. Let $n$ variables represent gene products, each changing over time among $p$ values. A polynomial dynamical system (PDS) is a function with several components, each of which is a polynomial in $n$ variables with coefficients in the finite field $Z/pZ$, that models the evolution of gene products. We propose herein a method using algebraic separators, which are special polynomials abundantly studied in effective Galois theory. This approach avoids heavy calculations and provides a first polynomial model in linear time.
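The separator construction underlying this kind of interpolation is classical: by Fermat's little theorem, prod_j (1 - (x_j - a_j)^(p-1)) equals 1 at the state a and 0 elsewhere over Z/pZ. A tiny sketch of fitting one PDS component from observed transitions follows; the paper's contribution is doing this in linear time, whereas this naive version costs one pass over the data per evaluation.

```python
import itertools

# Fit f: (Z/pZ)^2 -> Z/pZ from observed transitions using pointwise
# separators: sep_a(x) = prod_j (1 - (x_j - a_j)^(p-1)) mod p, which is
# 1 at x = a and 0 elsewhere (Fermat's little theorem).
p = 3
observations = {(0, 1): 2, (1, 2): 0, (2, 2): 1}   # state -> next value

def f(x):
    total = 0
    for a, val in observations.items():
        sep = 1
        for xj, aj in zip(x, a):
            sep = sep * (1 - pow((xj - aj) % p, p - 1, p)) % p
        total = (total + val * sep) % p
    return total

assert all(f(a) == v for a, v in observations.items())   # interpolation holds
print({s: f(s) for s in itertools.product(range(p), repeat=2)})
```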
2306.06065
Alexander Gower
Alexander H. Gower, Konstantin Korovin, Daniel Brunns{\aa}ker, Ievgeniia A. Tiukova and Ross D. King
LGEM$^\text{+}$: a first-order logic framework for automated improvement of metabolic network models through abduction
15 pages, one figure, two tables, two algorithms
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Scientific discovery in biology is difficult due to the complexity of the systems involved and the expense of obtaining high quality experimental data. Automated techniques are a promising way to make scientific discoveries at the scale and pace required to model large biological systems. A key problem for 21st century biology is to build a computational model of the eukaryotic cell. The yeast Saccharomyces cerevisiae is the best understood eukaryote, and genome-scale metabolic models (GEMs) are rich sources of background knowledge that we can use as a basis for automated inference and investigation. We present LGEM+, a system for automated abductive improvement of GEMs consisting of: a compartmentalised first-order logic framework for describing biochemical pathways (using curated GEMs as the expert knowledge source); and a two-stage hypothesis abduction procedure. We demonstrate that deductive inference on logical theories created using LGEM+, using the automated theorem prover iProver, can predict growth/no-growth of S. cerevisiae strains in minimal media. LGEM+ proposed 2094 unique candidate hypotheses for model improvement. We assess the value of the generated hypotheses using two criteria: (a) genome-wide single-gene essentiality prediction, and (b) constraint of flux-balance analysis (FBA) simulations. For (b) we developed an algorithm to integrate FBA with the logic model. We rank and filter the hypotheses using these assessments. We intend to test these hypotheses using the robot scientist Genesis, which is based around chemostat cultivation and high-throughput metabolomics.
[ { "created": "Fri, 9 Jun 2023 17:39:44 GMT", "version": "v1" } ]
2023-06-12
[ [ "Gower", "Alexander H.", "" ], [ "Korovin", "Konstantin", "" ], [ "Brunnsåker", "Daniel", "" ], [ "Tiukova", "Ievgeniia A.", "" ], [ "King", "Ross D.", "" ] ]
Scientific discovery in biology is difficult due to the complexity of the systems involved and the expense of obtaining high quality experimental data. Automated techniques are a promising way to make scientific discoveries at the scale and pace required to model large biological systems. A key problem for 21st century biology is to build a computational model of the eukaryotic cell. The yeast Saccharomyces cerevisiae is the best understood eukaryote, and genome-scale metabolic models (GEMs) are rich sources of background knowledge that we can use as a basis for automated inference and investigation. We present LGEM+, a system for automated abductive improvement of GEMs consisting of: a compartmentalised first-order logic framework for describing biochemical pathways (using curated GEMs as the expert knowledge source); and a two-stage hypothesis abduction procedure. We demonstrate that deductive inference on logical theories created using LGEM+, using the automated theorem prover iProver, can predict growth/no-growth of S. cerevisiae strains in minimal media. LGEM+ proposed 2094 unique candidate hypotheses for model improvement. We assess the value of the generated hypotheses using two criteria: (a) genome-wide single-gene essentiality prediction, and (b) constraint of flux-balance analysis (FBA) simulations. For (b) we developed an algorithm to integrate FBA with the logic model. We rank and filter the hypotheses using these assessments. We intend to test these hypotheses using the robot scientist Genesis, which is based around chemostat cultivation and high-throughput metabolomics.
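For context, the FBA step that LGEM+ integrates with the logic model is a linear program: maximize a biomass flux subject to steady state Sv = 0 and flux bounds. A minimal toy with scipy follows; the network, bounds, and the "knockout" are all invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA: maximize biomass flux v_bio subject to S v = 0 and flux
# bounds; a "gene knockout" pins one reaction's flux to zero.
#            v_up  v1   v2  v_bio
S = np.array([[ 1, -1,   0,   0],    # metabolite A: uptake -> A
              [ 0,  1,  -1,   0],    # metabolite B: A -> B
              [ 0,  0,   1,  -1]])   # precursor:    B -> biomass
bounds = [(0, 10), (0, None), (0, None), (0, None)]
c = [0, 0, 0, -1.0]                  # maximize v_bio == minimize -v_bio
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
print("growth:", res.x[-1])          # limited by the uptake bound (10)

bounds[2] = (0, 0)                   # knock out reaction v2
res_ko = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
print("growth after knockout:", res_ko.x[-1])   # 0 -> predicted essential
```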
1511.08230
Thierry Emonet
Nicholas W Frankel, William Pontius, Yann S Dufour, Junjiajia Long, Luis Hernandez-Nunez, Thierry Emonet
Adaptability of non-genetic diversity in bacterial chemotaxis
Journal link: http://elifesciences.org/content/3/e03526
eLife 3, e03526 (2014)
10.7554/eLife.03526.001
null
q-bio.PE q-bio.CB q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Bacterial chemotaxis systems are as diverse as the environments that bacteria inhabit, but how much environmental variation can cells tolerate with a single system? Diversification of a single chemotaxis system could serve as an alternative, or even evolutionary stepping-stone, to switching between multiple systems. We hypothesized that mutations in gene regulation could lead to heritable control of chemotactic diversity. By simulating foraging and colonization of E. coli using a single-cell chemotaxis model, we found that different environments selected for different behaviors. The resulting trade-offs show that populations facing diverse environments would ideally diversify behaviors when time for navigation is limited. We show that advantageous diversity can arise from changes in the distribution of protein levels among individuals, which could occur through mutations in gene regulation. We propose experiments to test our prediction that chemotactic diversity in a clonal population could be a selectable trait that enables adaptation to environmental variability.
[ { "created": "Wed, 25 Nov 2015 21:19:23 GMT", "version": "v1" } ]
2015-11-30
[ [ "Frankel", "Nicholas W", "" ], [ "Pontius", "William", "" ], [ "Dufour", "Yann S", "" ], [ "Long", "Junjiajia", "" ], [ "Nunez", "Luis Hernandez-", "" ], [ "Emonet", "Thierry", "" ] ]
Bacterial chemotaxis systems are as diverse as the environments that bacteria inhabit, but how much environmental variation can cells tolerate with a single system? Diversification of a single chemotaxis system could serve as an alternative, or even evolutionary stepping-stone, to switching between multiple systems. We hypothesized that mutations in gene regulation could lead to heritable control of chemotactic diversity. By simulating foraging and colonization of E. coli using a single-cell chemotaxis model, we found that different environments selected for different behaviors. The resulting trade-offs show that populations facing diverse environments would ideally diversify behaviors when time for navigation is limited. We show that advantageous diversity can arise from changes in the distribution of protein levels among individuals, which could occur through mutations in gene regulation. We propose experiments to test our prediction that chemotactic diversity in a clonal population could be a selectable trait that enables adaptation to environmental variability.
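To make the idea of selectable phenotypic diversity concrete, here is a deliberately crude caricature, not the authors' pathway-based single-cell model: one-dimensional run-and-tumble walkers whose tumble bias is drawn from a lognormal distribution, standing in for cell-to-cell variation in chemotaxis protein levels, evaluated in gradients of different steepness. All parameter values are invented for illustration.

```python
# A caricature, not the authors' model: 1-D run-and-tumble cells whose
# tumble bias is lognormally distributed across the population, so that
# performance can be compared between shallow and steep gradients.
import numpy as np

rng = np.random.default_rng(0)

def simulate(tumble_bias, gradient, steps=2000, dt=0.1, speed=20.0):
    """Final displacement of one phenotype climbing a linear gradient."""
    x, direction = 0.0, 1.0
    for _ in range(steps):
        # tumbling is suppressed when moving up-gradient (crude chemotaxis)
        p_tumble = tumble_bias * dt * (1.0 - gradient * direction)
        if rng.random() < max(p_tumble, 0.0):
            direction = rng.choice([-1.0, 1.0])   # pick a new direction
        x += speed * direction * dt
    return x

biases = rng.lognormal(mean=0.0, sigma=0.5, size=200)  # diverse population
for g in (0.2, 0.8):  # shallow vs. steep environment
    drift = np.mean([simulate(b, g) for b in biases])
    print(f"gradient {g}: mean up-gradient drift {drift:.1f} um")
```

Even in this toy setting, different gradient steepnesses reward different points of the tumble-bias distribution, which is the kind of environment-dependent trade-off the abstract describes.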
1606.00632
Kenshi Sakai
Nina Sviridova and Kenshi Sakai
Noise Induced Synchronization on Collective Dynamics of Citrus Production
6 pages, 9 figures
Journal of the Japanese Society of Agricultural Machinery and Food Engineers, Volume 78(3), 221-226, 2016
null
null
q-bio.PE nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nonlinear features are very commonly observed in agricultural and ecological systems. In tree crop production, for example, alternate bearing is a well-known phenomenon caused by nonlinear dynamics. The production of a single Citrus unshiu tree is recognized to be driven by a mechanistic process described by the so-called "resource budget model", which exhibits alternate bearing. However, the term alternate bearing is used not only for the production of an individual tree but also for the total production of a large population of trees. In this paper, we develop a noise-induced uncoupled dynamics model of population-level alternate bearing based on Isagi's resource budget model. Numerical experiments with the developed model suggest the theoretical possibility of a substantial alternate bearing effect even at the scale of a national market.
[ { "created": "Thu, 2 Jun 2016 11:34:15 GMT", "version": "v1" } ]
2016-06-03
[ [ "Sviridova", "Nina", "" ], [ "Sakai", "Kenshi", "" ] ]
Nonlinear features are very commonly observed in agricultural and ecological systems. In tree crop production, for example, alternate bearing is a well-known phenomenon caused by nonlinear dynamics. The production of a single Citrus unshiu tree is recognized to be driven by a mechanistic process described by the so-called "resource budget model", which exhibits alternate bearing. However, the term alternate bearing is used not only for the production of an individual tree but also for the total production of a large population of trees. In this paper, we develop a noise-induced uncoupled dynamics model of population-level alternate bearing based on Isagi's resource budget model. Numerical experiments with the developed model suggest the theoretical possibility of a substantial alternate bearing effect even at the scale of a national market.
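For readers unfamiliar with resource budget models, the sketch below shows the basic mechanism and the role of a shared environmental fluctuation in synchronizing otherwise uncoupled trees. The threshold, depletion coefficient, and noise amplitude are illustrative choices, not the parameter values used in the paper.

```python
# Minimal sketch of an Isagi-type resource budget model for a population of
# UNCOUPLED trees driven by a COMMON noisy photosynthate input, the
# ingredient behind noise-induced synchronization. Parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
n_trees, years = 100, 200
k, L_T = 1.5, 1.0                      # depletion coefficient, fruiting threshold
S = rng.uniform(0.0, L_T, n_trees)     # independent initial reserves
total_yield = np.zeros(years)

for t in range(years):
    P_S = 0.4 + 0.2 * rng.normal()     # common (shared) yearly input noise
    S = S + P_S
    excess = np.maximum(S - L_T, 0.0)  # resource available for reproduction
    total_yield[t] = excess.sum()      # population crop ~ sum of excesses
    S = np.where(excess > 0, L_T - k * excess, S)  # fruiting depletes reserves

print("CV of total yield:", total_yield[50:].std() / total_yield[50:].mean())
```

A large coefficient of variation of the population total indicates synchronized alternate bearing; replacing the common noise with independent per-tree noise leaves individual trees alternating but lets population totals average out, which is the contrast the model is designed to expose.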
2308.11309
Maur\'icio Moreira-Soares
Maur\'icio Moreira-Soares, Eduardo Mossmann, Rui D. M. Travasso and Jos\'e Rafael Bordin
TrajPy: empowering feature engineering for trajectory analysis across domains
4 pages, 1 figure
null
null
null
q-bio.QM physics.bio-ph physics.data-an
http://creativecommons.org/licenses/by/4.0/
Trajectories, sequentially measured quantities that form a path, are an important presence in many different fields, from hadronic beams in physics to electrocardiograms in medicine. Trajectory analysis requires the quantification and classification of curves using either statistical descriptors or physics-based features. To date, no extensive and user-friendly package for trajectory analysis has been available, despite its importance and potential application across domains. We developed a free, open-source Python package named TrajPy as a complementary tool to empower trajectory analysis. The package features a friendly graphical user interface and provides a set of physical descriptors that help characterize these intricate structures. In combination with image analysis, it has already been successfully applied to the study of mitochondrial motility in neuroblastoma cell lines and to the analysis of in silico models for cell migration. The TrajPy package was developed in Python 3 and released under the GNU GPL-3 license. Easy installation is available through PyPI, and the development source code can be found in the repository https://github.com/ocbe-uio/TrajPy/. The package release is automatically archived under the DOI 10.5281/zenodo.3656044.
[ { "created": "Tue, 22 Aug 2023 09:37:48 GMT", "version": "v1" } ]
2023-08-23
[ [ "Moreira-Soares", "Maurício", "" ], [ "Mossmann", "Eduardo", "" ], [ "Travasso", "Rui D. M.", "" ], [ "Bordin", "José Rafael", "" ] ]
Trajectories, sequentially measured quantities that form a path, are an important presence in many different fields, from hadronic beams in physics to electrocardiograms in medicine. Trajectory analysis requires the quantification and classification of curves using either statistical descriptors or physics-based features. To date, no extensive and user-friendly package for trajectory analysis has been available, despite its importance and potential application across domains. We developed a free, open-source Python package named TrajPy as a complementary tool to empower trajectory analysis. The package features a friendly graphical user interface and provides a set of physical descriptors that help characterize these intricate structures. In combination with image analysis, it has already been successfully applied to the study of mitochondrial motility in neuroblastoma cell lines and to the analysis of in silico models for cell migration. The TrajPy package was developed in Python 3 and released under the GNU GPL-3 license. Easy installation is available through PyPI, and the development source code can be found in the repository https://github.com/ocbe-uio/TrajPy/. The package release is automatically archived under the DOI 10.5281/zenodo.3656044.
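As a flavor of the kind of physics-based descriptor such a package provides (plain NumPy here, not TrajPy's actual API), the sketch below computes the time-averaged mean squared displacement of a trajectory and extracts the anomalous-diffusion exponent from a log-log fit.

```python
# Not TrajPy's API: a NumPy sketch of one classic trajectory descriptor,
# the time-averaged mean squared displacement (MSD) and the anomalous
# exponent alpha from a log-log fit (alpha ~ 1 for normal diffusion).
import numpy as np

def msd(traj, max_lag):
    """Time-averaged MSD of an (N, d) trajectory for lags 1..max_lag."""
    return np.array([
        np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
        for lag in range(1, max_lag + 1)
    ])

rng = np.random.default_rng(42)
traj = np.cumsum(rng.normal(size=(1000, 2)), axis=0)  # 2-D random walk
lags = np.arange(1, 101)
curve = msd(traj, 100)
alpha, log_intercept = np.polyfit(np.log(lags), np.log(curve), 1)
print(f"anomalous exponent alpha = {alpha:.2f}")  # ~1 for Brownian motion
```

Descriptors of this kind (MSD scaling, radius of gyration, straightness, and so on) are the raw material for the feature-engineering and classification workflows the abstract describes.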