Dataset columns (type and observed min/max length or count):

column           type            min     max
id               stringlengths   9       13
submitter        stringlengths   4       48
authors          stringlengths   4       9.62k
title            stringlengths   4       343
comments         stringlengths   2       480
journal-ref      stringlengths   9       309
doi              stringlengths   12      138
report-no        stringclasses   277 values
categories       stringlengths   8       87
license          stringclasses   9 values
orig_abstract    stringlengths   27      3.76k
versions         listlengths     1       15
update_date      stringlengths   10      10
authors_parsed   listlengths     1       147
abstract         stringlengths   24      3.75k
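The column summary above amounts to a lightweight schema: string columns with observed length bounds, list columns with element-count bounds, and two categorical columns. As a minimal sketch of how a record could be validated against those bounds (the bounds are copied from the table; the abridged sample record and the `check` helper are illustrative, not part of any dataset tooling):

```python
# Observed (min_len, max_len) bounds for a few string columns,
# taken from the column summary above. List columns (versions,
# authors_parsed) would be bounded by element count instead.
STRING_BOUNDS = {
    "id": (9, 13),
    "submitter": (4, 48),
    "title": (4, 343),
    "categories": (8, 87),
    "update_date": (10, 10),
    "abstract": (24, 3750),
}

# Sample record, abridged from the first entry in this dump.
record = {
    "id": "2105.08626",
    "submitter": "Andy Liaw",
    "title": "Light Gradient Boosting Machine as a Regression Method "
             "for Quantitative Structure-Activity Relationships",
    "categories": "q-bio.BM cs.LG",
    "update_date": "2021-05-19",
    "abstract": "In the pharmaceutical industry, it is common to "
                "generate many QSAR models...",
}

def check(rec):
    """Return (column, length) pairs whose length violates the bounds."""
    bad = []
    for col, (lo, hi) in STRING_BOUNDS.items():
        n = len(rec.get(col, ""))
        if not lo <= n <= hi:
            bad.append((col, n))
    return bad

print(check(record))  # prints [] for a conforming record
```

A record with, say, a malformed `update_date` would be flagged, since that column has a fixed length of 10.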
id: 2105.08626
submitter: Andy Liaw
authors: Robert P. Sheridan, Andy Liaw, Matthew Tudor
title: Light Gradient Boosting Machine as a Regression Method for Quantitative Structure-Activity Relationships
comments: 32 pages, 4 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.BM cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: In the pharmaceutical industry, where it is common to generate many QSAR models with large numbers of molecules and descriptors, the best QSAR methods are those that generate the most accurate predictions while also being insensitive to hyperparameters and computationally efficient. Here we compare Light Gradient Boosting Machine (LightGBM) to random forest, single-task deep neural nets, and Extreme Gradient Boosting (XGBoost) on 30 in-house data sets. While any boosting algorithm has many adjustable hyperparameters, we can define a set of standard hyperparameters at which LightGBM makes predictions about as accurate as single-task deep neural nets, but is about 1000-fold faster than random forest and ~4-fold faster than XGBoost in terms of total computational time for the largest models. Another very useful feature of LightGBM is that it includes a native method for estimating prediction intervals.
versions: [ { "created": "Wed, 28 Apr 2021 20:19:44 GMT", "version": "v1" } ]
update_date: 2021-05-19
authors_parsed: [ [ "Sheridan", "Robert P.", "" ], [ "Liaw", "Andy", "" ], [ "Tudor", "Matthew", "" ] ]
abstract: identical to orig_abstract above
id: 1306.5709
submitter: Adam Marblestone
authors: Adam H. Marblestone, Bradley M. Zamft, Yael G. Maguire, Mikhail G. Shapiro, Thaddeus R. Cybulski, Joshua I. Glaser, Dario Amodei, P. Benjamin Stranges, Reza Kalhor, David A. Dalrymple, Dongjin Seo, Elad Alon, Michel M. Maharbiz, Jose M. Carmena, Jan M. Rabaey, Edward S. Boyden, George M. Church and Konrad P. Kording
title: Physical Principles for Scalable Neural Recording
comments: null
journal-ref: null
doi: 10.3389/fncom.2013.00137
report-no: null
categories: q-bio.NC physics.bio-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. We also study the physics of powering and communicating with microscale devices embedded in brain tissue.
versions: [ { "created": "Mon, 24 Jun 2013 19:04:50 GMT", "version": "v1" }, { "created": "Tue, 25 Jun 2013 19:08:53 GMT", "version": "v2" }, { "created": "Mon, 1 Jul 2013 19:53:59 GMT", "version": "v3" }, { "created": "Wed, 3 Jul 2013 15:10:29 GMT", "version": "v4" }, { "cre...
update_date: 2020-02-04
authors_parsed: [ [ "Marblestone", "Adam H.", "" ], [ "Zamft", "Bradley M.", "" ], [ "Maguire", "Yael G.", "" ], [ "Shapiro", "Mikhail G.", "" ], [ "Cybulski", "Thaddeus R.", "" ], [ "Glaser", "Joshua I.", "" ], [ "Amodei", "Dario...
abstract: identical to orig_abstract above
id: 2107.11740
submitter: Weihua Deng Professor
authors: Chongcan Li, Yong Cong, and Weihua Deng
title: Identifying the fragment structure of the organic compounds by deeply learning the original NMR data
comments: 12 pages, 8 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.QM cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We preprocess the raw NMR spectrum and extract key characteristic features using two different methodologies, called equidistant sampling and peak sampling, for subsequent substructure pattern recognition; these methodologies may also provide an alternative strategy to address the imbalance issue frequently encountered when collecting NMR datasets for statistical modeling. We establish two conventional SVM and KNN models to assess the capability of the two feature-selection methods, respectively. Our results show that the models using features selected by peak sampling outperform those using equidistant sampling. We then build a Recurrent Neural Network (RNN) model trained on Data B, collected by peak sampling. Furthermore, we illustrate in detail the easier hyperparameter optimization and better generalization ability of the RNN deep-learning model compared with the traditional machine-learning SVM and KNN models.
versions: [ { "created": "Sun, 25 Jul 2021 06:45:46 GMT", "version": "v1" } ]
update_date: 2021-07-27
authors_parsed: [ [ "Li", "Chongcan", "" ], [ "Cong", "Yong", "" ], [ "Deng", "Weihua", "" ] ]
abstract: identical to orig_abstract above
id: 1912.12248
submitter: Chandre Dharma-wardana
authors: M. W. C. Dharma-wardana (NRC Canada)
title: Discussion on a "Dynamic model to conceptualize multiple environmental pathways to the epidemic of Chronic Kidney Disease of unknown etiology (CKDu)"
comments: two figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.QM nlin.AO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Jayasinghe et al. [Science of the Total Environment, 705 (2020) 135766] propose a 'dynamical' model of Chronic Kidney Disease of Unknown etiology (CKDu) wherein CKDu arises as an emergent property of a complex system to which, they claim, multiple weak environmental factors contribute. They criticize the usual approaches as being "reductionist". We use their model as the basis of a discussion of the possibility of treating CKDu as an emergent property resulting from the interaction of multiple weak environmental factors with the organism. The model does not reveal anything beyond what is already known from simple considerations of well-known feedback loops, but has the merit of re-stating those issues in a different format. If a proper weighting of the possible environmental factors is included, most proposed environmental factors drop out, and what Jayasinghe et al. call the "reductionist" approach naturally arises from the weight of evidence. The theory that CKDu arises from the consumption of water containing fluoride and magnesium ions, as found in water drawn from regolith aquifers via household wells, clearly holds within this model when proper weighting is included. However, we show by examples that such models can easily be misused, leading to completely misleading conclusions. A response formalism useful in the theory of complex systems and emergent modes is presented in the context of the current problem. In addition to there being a lack of adequate data to fully implement such a theory, it is seen that such elaborations are unnecessary and misleading in the present context.
versions: [ { "created": "Fri, 27 Dec 2019 17:18:09 GMT", "version": "v1" }, { "created": "Mon, 6 Jan 2020 15:54:40 GMT", "version": "v2" } ]
update_date: 2020-01-07
authors_parsed: [ [ "Dharma-wardana", "M. W. C.", "", "NRC Canada" ] ]
abstract: identical to orig_abstract above
id: 1907.03612
submitter: Michael Cole
authors: Takuya Ito, Luke Hearne, Ravi Mill, Carrisa Cocuzza, Michael W. Cole
title: Discovering the Computational Relevance of Brain Network Organization
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
orig_abstract: Understanding neurocognitive computations will require not just localizing cognitive information distributed throughout the brain but also determining how that information got there. We review recent advances in linking empirical and simulated brain network organization with cognitive information processing. Building on these advances, we offer a new framework for understanding the role of connectivity in cognition - network coding (encoding/decoding) models. These models utilize connectivity to specify the transfer of information via neural activity flow processes, successfully predicting the formation of cognitive representations in empirical neural data. The success of these models supports the possibility that localized neural functions mechanistically emerge (are computed) from distributed activity flow processes that are specified primarily by connectivity patterns.
versions: [ { "created": "Mon, 8 Jul 2019 13:41:45 GMT", "version": "v1" }, { "created": "Mon, 21 Oct 2019 14:31:20 GMT", "version": "v2" } ]
update_date: 2019-10-22
authors_parsed: [ [ "Ito", "Takuya", "" ], [ "Hearne", "Luke", "" ], [ "Mill", "Ravi", "" ], [ "Cocuzza", "Carrisa", "" ], [ "Cole", "Michael W.", "" ] ]
abstract: identical to orig_abstract above
id: 2201.11868
submitter: Sarah McIntyre
authors: Anne Margarette S. Maallo (1), Basil Duvernoy (1), Håkan Olausson (1), Sarah McIntyre (1) ((1) Center for Social and Affective Neuroscience, Linköping University, Sweden)
title: Naturalistic stimuli in touch research
comments: 16 pages, 1 figure, commentary/review paper. Keywords: touch, haptics, sensory systems, naturalistic stimuli, social touch
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Neural mechanisms of touch are typically studied in laboratory settings using robotic or other types of well-controlled devices. Such stimuli are very different from highly complex naturalistic human-to-human touch interactions. The lack of scientifically useful naturalistic stimuli hampers progress, particularly in social touch research. Vision science, on the other hand, has benefitted from inventions such as virtual reality systems that have provided researchers with precision control of naturalistic stimuli. In the field of touch research, producing and manipulating stimuli is particularly challenging due to the complexity of skin mechanics. Here we review the history of touch neuroscience focusing on the contrast between strictly controlled and naturalistic stimuli and compare with vision science. We discuss new methods that may overcome the obstacles with precision-controlled tactile stimuli, and recent successes in naturalistic texture production. In social touch research, precise tracking and measurement of naturalistic human-to-human touch interactions offers exciting new possibilities.
versions: [ { "created": "Wed, 26 Jan 2022 14:01:51 GMT", "version": "v1" }, { "created": "Wed, 20 Apr 2022 16:47:35 GMT", "version": "v2" } ]
update_date: 2022-04-21
authors_parsed: [ [ "Maallo", "Anne Margarette S.", "" ], [ "Duvernoy", "Basil", "" ], [ "Olausson", "Håkan", "" ], [ "McIntyre", "Sarah", "" ] ]
abstract: identical to orig_abstract above
id: 2306.00041
submitter: Wenting Ye
authors: Wenting Ye, Chen Li, Yang Xie, Wen Zhang, Hong-Yu Zhang, Bowen Wang, Debo Cheng, Zaiwen Feng
title: Causal Intervention for Measuring Confidence in Drug-Target Interaction Prediction
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.QM cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Identifying and discovering drug-target interactions (DTIs) are vital steps in drug discovery and development. They play a crucial role in assisting scientists in finding new drugs and accelerating the drug development process. Recently, knowledge graph and knowledge graph embedding (KGE) models have made rapid advancements and demonstrated impressive performance in drug discovery. However, such models lack authenticity and accuracy in drug target identification, leading to an increased misjudgment rate and reduced drug development efficiency. To address these issues, we focus on the problem of drug-target interactions, with knowledge mapping as the core technology. Specifically, a causal intervention-based confidence measure is employed to assess the triplet score to improve the accuracy of the drug-target interaction prediction model. Experimental results demonstrate that the developed confidence measurement method based on causal intervention can significantly enhance the accuracy of DTI link prediction, particularly for high-precision models. The predicted results are more valuable in guiding the design and development of subsequent drug development experiments, thereby significantly improving the efficiency of drug development.
versions: [ { "created": "Wed, 31 May 2023 13:13:45 GMT", "version": "v1" }, { "created": "Tue, 14 Nov 2023 13:36:53 GMT", "version": "v2" } ]
update_date: 2023-11-15
authors_parsed: [ [ "Ye", "Wenting", "" ], [ "Li", "Chen", "" ], [ "Xie", "Yang", "" ], [ "Zhang", "Wen", "" ], [ "Zhang", "Hong-Yu", "" ], [ "Wang", "Bowen", "" ], [ "Cheng", "Debo", "" ], [ "Feng", "Zaiwen", ...
abstract: identical to orig_abstract above
id: 2303.09084
submitter: Trung Phan
authors: Kien T. Pham, Duc M. Nguyen, Duy V. Tran, Vi D. Ao, Huy D. Tran, Tuan K. Do and Trung V. Phan
title: Stress-Induced Mutagenesis Can Further Boost Population Success in Static Ecology
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE physics.bio-ph
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: We have developed a mathematical model that captures stress-induced mutagenesis, a fundamental aspect of pathogenic and neoplastic evolutionary dynamics, on a fitness landscape with multiple relevant genetic traits treated as a high-dimensional Euclidean space. In this framework, stress-induced mutagenesis manifests as a heterogeneous diffusion process. We show how increasing mutations, and thus reducing exploitation, in a static ecology with fixed carrying capacity and maximum growth rates, can paradoxically boost population size. Remarkably, this unexpected biophysical phenomenon applies universally to any number of traits.
versions: [ { "created": "Thu, 16 Mar 2023 05:04:23 GMT", "version": "v1" } ]
update_date: 2023-03-17
authors_parsed: [ [ "Pham", "Kien T.", "" ], [ "Nguyen", "Duc M.", "" ], [ "Tran", "Duy V.", "" ], [ "Ao", "Vi D.", "" ], [ "Tran", "Huy D.", "" ], [ "Do", "Tuan K.", "" ], [ "Phan", "Trung V.", "" ] ]
abstract: identical to orig_abstract above
id: q-bio/0312013
submitter: Cyrill Muratov
authors: Cyrill B. Muratov, Eric Vanden-Eijnden, Weinan E
title: Noise-driven transition to quasi-deterministic limit cycle dynamics in excitable systems
comments: 4 pages, 3 figures; submitted to PRL
journal-ref: null
doi: null
report-no: null
categories: q-bio.QM q-bio.NC
license: null
orig_abstract: The effect of small-amplitude noise on excitable systems with large time-scale separation is analyzed. It is found that small random perturbations of the fast excitatory variable result in the onset of quasi-deterministic limit cycle behavior, absent without noise. The limit cycle is established at a critical value of the amplitude of the noise, and its period is nontrivially determined by the relationship between the noise amplitude and the time-scale ratio. It is argued that this effect might provide a mechanism by which the function of biological systems operating in noisy environments can be robustly controlled by the level of the noise.
versions: [ { "created": "Wed, 10 Dec 2003 00:44:18 GMT", "version": "v1" } ]
update_date: 2007-05-23
authors_parsed: [ [ "Muratov", "Cyrill B.", "" ], [ "Vanden-Eijnden", "Eric", "" ], [ "E", "Weinan", "" ] ]
abstract: identical to orig_abstract above
id: 1312.6336
submitter: Sen Pei
authors: Sen Pei, Shaoting Tang, Shu Yan, Shijin Jiang, Xiao Zhang, Zhiming Zheng
title: How to enhance the dynamic range of excitatory-inhibitory excitable networks
comments: 7 pages, 9 figures
journal-ref: Physical Review E 86 (2), 021909, 2012
doi: 10.1103/PhysRevE.86.021909
report-no: null
categories: q-bio.NC physics.bio-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We investigate the collective dynamics of excitatory-inhibitory excitable networks in response to external stimuli. How to enhance the dynamic range, which represents the ability of networks to encode external stimuli, is crucial to many applications. We regard the system as a two-layer network (E-Layer and I-Layer) and explore the criticality and dynamic range on diverse networks. Interestingly, we find that a phase transition occurs when the dominant eigenvalue of the E-Layer's weighted adjacency matrix is exactly one, which is determined only by the topology of the E-Layer. Meanwhile, it is shown that the dynamic range is maximized at the critical state. Based on theoretical analysis, we propose an inhibitory factor for each excitatory node. We suggest that if nodes with high inhibitory factors are cut out from the I-Layer, the dynamic range could be further enhanced. However, because of the sparseness of networks and the passive function of inhibitory nodes, the improvement is relatively small compared to the original dynamic range. Even so, this provides a strategy to enhance the dynamic range.
versions: [ { "created": "Sun, 22 Dec 2013 03:05:19 GMT", "version": "v1" } ]
update_date: 2013-12-24
authors_parsed: [ [ "Pei", "Sen", "" ], [ "Tang", "Shaoting", "" ], [ "Yan", "Shu", "" ], [ "Jiang", "Shijin", "" ], [ "Zhang", "Xiao", "" ], [ "Zheng", "Zhiming", "" ] ]
abstract: identical to orig_abstract above
id: 0801.3344
submitter: Nicolas Clauvelin
authors: N. Clauvelin, B. Audoly and S. Neukirch
title: Mechanical response of plectonemic DNA: an analytical solution
comments: 14 pages, 4 figures
journal-ref: null
doi: 10.1021/ma702713x
report-no: null
categories: q-bio.BM
license: null
orig_abstract: We consider an elastic rod model for twisted DNA in the plectonemic regime. The molecule is treated as an impenetrable tube with an effective, adjustable radius. The model is solved analytically and we derive formulas for the contact pressure, twisting moment and geometrical parameters of the supercoiled region. We apply our model to magnetic tweezer experiments of a DNA molecule subjected to a tensile force and a torque, and extract mechanical and geometrical quantities from the linear part of the experimental response curve. These reconstructed values are derived in a self-contained manner, and are found to be consistent with those available in the literature.
versions: [ { "created": "Tue, 22 Jan 2008 11:36:36 GMT", "version": "v1" } ]
update_date: 2009-11-13
authors_parsed: [ [ "Clauvelin", "N.", "" ], [ "Audoly", "B.", "" ], [ "Neukirch", "S.", "" ] ]
abstract: identical to orig_abstract above
id: 1506.07299
submitter: Guido Tiana
authors: Guido Tiana
title: The effect of disorder in the contact probability of elongated conformations of biopolymers
comments: null
journal-ref: Phys. Rev. E 92, 010702 (2015)
doi: 10.1103/PhysRevE.92.010702
report-no: null
categories: q-bio.BM cond-mat.stat-mech
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Biopolymers are characterized by heterogeneous interactions, and usually perform their biological tasks forming contacts within domains of limited size. Combining polymer theory with a replica approach, we study the scaling properties of the probability of contact formation in random heteropolymers as a function of their linear distance. It is found that close to or above the theta point, it is possible to define a contact probability which is typical (i.e. "self-averaging") for different realizations of the heterogeneous interactions, and which displays an exponential cutoff, dependent on temperature and on the interaction range. In many cases this cutoff is comparable with the typical sizes of domains in biopolymers. While it is well known that disorder causes interesting effects at low temperature, the behavior elucidated in the present study is an example of a nontrivial effect at high temperature.
versions: [ { "created": "Wed, 24 Jun 2015 09:50:05 GMT", "version": "v1" } ]
update_date: 2015-08-05
authors_parsed: [ [ "Tiana", "Guido", "" ] ]
abstract: identical to orig_abstract above
id: 2303.16429
submitter: Tiago Lubiana
authors: Tiago Lubiana, Rafael Lopes, Pedro Medeiros, Juan Carlo Silva, Andre Nicolau Aquime Goncalves, Vinicius Maracaja-Coutinho, and Helder I Nakaya
title: Ten Quick Tips for Harnessing the Power of ChatGPT/GPT-4 in Computational Biology
comments: 14 pages, 1 figure
journal-ref: null
doi: null
report-no: null
categories: q-bio.OT
license: http://creativecommons.org/licenses/by-sa/4.0/
orig_abstract: The rise of advanced chatbots, such as ChatGPT, has sparked curiosity in the scientific community. ChatGPT is a general-purpose chatbot powered by large language models (LLMs) GPT-3.5 and GPT-4, with the potential to impact numerous fields, including computational biology. In this article, we offer ten tips based on our experience with ChatGPT to assist computational biologists in optimizing their workflows. We have collected relevant prompts and reviewed the nascent literature in the field, compiling tips we project to remain pertinent for future ChatGPT and LLM iterations, ranging from code refactoring to scientific writing to prompt engineering. We hope our work will help bioinformaticians to complement their workflows while staying aware of the various implications of using this technology. Additionally, to track new and creative applications for bioinformatics tools such as ChatGPT, we have established a GitHub repository at https://github.com/csbl-br/awesome-compbio-chatgpt. Our belief is that ethical adherence to ChatGPT and other LLMs will increase the efficiency of computational biologists, ultimately advancing the pace of scientific discovery in the life sciences.
versions: [ { "created": "Wed, 29 Mar 2023 03:24:42 GMT", "version": "v1" } ]
update_date: 2023-03-30
authors_parsed: [ [ "Lubiana", "Tiago", "" ], [ "Lopes", "Rafael", "" ], [ "Medeiros", "Pedro", "" ], [ "Silva", "Juan Carlo", "" ], [ "Goncalves", "Andre Nicolau Aquime", "" ], [ "Maracaja-Coutinho", "Vinicius", "" ], [ "Nakaya", ...
abstract: identical to orig_abstract above
q-bio/0703066
Supratim Sengupta
Supratim Sengupta, Xiaoguang Yang, Paul G. Higgs
The Mechanisms of Codon Reassignments in Mitochondrial Genetic Codes
53 pages (45 pages, including 4 figures + 8 pages of supplementary information). To appear in J.Mol.Evol
J. Mol. Evol. 64 (2007) 662-688
null
null
q-bio.PE q-bio.GN
null
Many cases of non-standard genetic codes are known in mitochondrial genomes. We carry out analysis of phylogeny and codon usage of organisms for which the complete mitochondrial genome is available, and we determine the most likely mechanism for codon reassignment in each case. Reassignment events can be classified according to the gain-loss framework. The gain represents the appearance of a new tRNA for the reassigned codon or the change of an existing tRNA such that it gains the ability to pair with the codon. The loss represents the deletion of a tRNA or the change in a tRNA so that it no longer translates the codon. One possible mechanism is Codon Disappearance, where the codon disappears from the genome prior to the gain and loss events. In the alternative mechanisms the codon does not disappear. In the Unassigned Codon mechanism, the loss occurs first, whereas in the Ambiguous Intermediate mechanism, the gain occurs first. Codon usage analysis gives clear evidence of cases where the codon disappeared at the point of the reassignment and also cases where it did not disappear. Codon disappearance is the probable explanation for stop to sense reassignments and a small number of reassignments of sense codons. However, the majority of sense to sense reassignments cannot be explained by codon disappearance. In the latter cases, by analysis of the presence or absence of tRNAs in the genome and of the changes in tRNA sequences, it is sometimes possible to distinguish between the Unassigned Codon and Ambiguous Intermediate mechanisms. We emphasize that not all reassignments follow the same scenario and that it is necessary to consider the details of each case carefully.
[ { "created": "Fri, 30 Mar 2007 10:09:57 GMT", "version": "v1" } ]
2007-07-17
[ [ "Sengupta", "Supratim", "" ], [ "Yang", "Xiaoguang", "" ], [ "Higgs", "Paul G.", "" ] ]
Many cases of non-standard genetic codes are known in mitochondrial genomes. We carry out analysis of phylogeny and codon usage of organisms for which the complete mitochondrial genome is available, and we determine the most likely mechanism for codon reassignment in each case. Reassignment events can be classified according to the gain-loss framework. The gain represents the appearance of a new tRNA for the reassigned codon or the change of an existing tRNA such that it gains the ability to pair with the codon. The loss represents the deletion of a tRNA or the change in a tRNA so that it no longer translates the codon. One possible mechanism is Codon Disappearance, where the codon disappears from the genome prior to the gain and loss events. In the alternative mechanisms the codon does not disappear. In the Unassigned Codon mechanism, the loss occurs first, whereas in the Ambiguous Intermediate mechanism, the gain occurs first. Codon usage analysis gives clear evidence of cases where the codon disappeared at the point of the reassignment and also cases where it did not disappear. Codon disappearance is the probable explanation for stop to sense reassignments and a small number of reassignments of sense codons. However, the majority of sense to sense reassignments cannot be explained by codon disappearance. In the latter cases, by analysis of the presence or absence of tRNAs in the genome and of the changes in tRNA sequences, it is sometimes possible to distinguish between the Unassigned Codon and Ambiguous Intermediate mechanisms. We emphasize that not all reassignments follow the same scenario and that it is necessary to consider the details of each case carefully.
1705.01436
Sebastian James
Sebastian James, Olivia A. Bell, Muhammed A. M. Nazli, Rachel E. Pearce, Jonathan Spencer, Katie Tyrrell, Phillip J. Paine, Timothy J. Heaton, Sean Anderson, Mauro Da Lio, Kevin Gurney
Target-distractor Synchrony Affects Performance in a Novel Motor Task for Studying Action Selection
28 pages, 12 figures, journal article
null
10.1371/journal.pone.0176945
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The study of action selection in humans can present challenges of task design since our actions are usually defined by many degrees of freedom and therefore occupy a large action-space. While saccadic eye-movement offers a more constrained paradigm for investigating action selection, the study of reach-and-grasp in upper limbs has often been defined by more complex scenarios, not easily interpretable in terms of such selection. Here we present a novel motor behaviour task which addresses this by limiting the action space to a single degree of freedom in which subjects have to track (using a stylus) a vertical coloured target line displayed on a tablet computer, whilst ignoring a similarly oriented distractor line in a different colour. We ran this task with 55 subjects and showed that, in agreement with previous studies, the presence of the distractor generally increases the movement latency and directional error rate. Further, we used two distractor conditions according to whether the distractor's location changes asynchronously or synchronously with the location of the target. We found that the asynchronous distractor yielded poorer performance than its synchronous counterpart, with significantly higher movement latencies and higher error rates. We interpret these results in an action selection framework with two actions (move left or right) and competing 'action requests' offered by the target and distractor. As such, the results provide insights into action selection performance in humans and supply data for directly constraining future computational models therein.
[ { "created": "Wed, 3 May 2017 14:00:38 GMT", "version": "v1" } ]
2017-05-04
[ [ "James", "Sebastian", "" ], [ "Bell", "Olivia A.", "" ], [ "Nazli", "Muhammed A. M.", "" ], [ "Pearce", "Rachel E.", "" ], [ "Spencer", "Jonathan", "" ], [ "Tyrrell", "Katie", "" ], [ "Paine", "Phillip J.", ...
The study of action selection in humans can present challenges of task design since our actions are usually defined by many degrees of freedom and therefore occupy a large action-space. While saccadic eye-movement offers a more constrained paradigm for investigating action selection, the study of reach-and-grasp in upper limbs has often been defined by more complex scenarios, not easily interpretable in terms of such selection. Here we present a novel motor behaviour task which addresses this by limiting the action space to a single degree of freedom in which subjects have to track (using a stylus) a vertical coloured target line displayed on a tablet computer, whilst ignoring a similarly oriented distractor line in a different colour. We ran this task with 55 subjects and showed that, in agreement with previous studies, the presence of the distractor generally increases the movement latency and directional error rate. Further, we used two distractor conditions according to whether the distractor's location changes asynchronously or synchronously with the location of the target. We found that the asynchronous distractor yielded poorer performance than its synchronous counterpart, with significantly higher movement latencies and higher error rates. We interpret these results in an action selection framework with two actions (move left or right) and competing 'action requests' offered by the target and distractor. As such, the results provide insights into action selection performance in humans and supply data for directly constraining future computational models therein.
q-bio/0612023
Cecile Caretta
C. Caretta Cartozo, D. Garlaschelli, C. Ricotta, M. Barthelemy, G. Caldarelli
Quantifying the taxonomic diversity in real species communities
12 pages, 4 figures
J. Phys. A: Math. Theor. 41, 224012 (2008)
10.1088/1751-8113/41/22/224012
null
q-bio.PE nlin.AO physics.data-an
null
We analyze several florae (collections of plant species populating specific areas) in different geographic and climatic regions. For every list of species we produce a taxonomic classification tree and we consider its statistical properties. We find that, regardless of the geographical location, the climate and the environment, all species collections have universal statistical properties that we also show to be robust in time. We then compare observed data sets with simulated communities obtained by randomly sampling a large pool of species from all over the world. We find differences in the behavior of the statistical properties of the corresponding taxonomic trees. Our results suggest that it is possible to distinguish quantitatively real species assemblages from random collections and thus demonstrate the existence of correlations between species.
[ { "created": "Wed, 13 Dec 2006 13:56:28 GMT", "version": "v1" } ]
2008-06-13
[ [ "Cartozo", "C. Caretta", "" ], [ "Garlaschelli", "D.", "" ], [ "Ricotta", "C.", "" ], [ "Barthelemy", "M.", "" ], [ "Caldarelli", "G.", "" ] ]
We analyze several florae (collections of plant species populating specific areas) in different geographic and climatic regions. For every list of species we produce a taxonomic classification tree and we consider its statistical properties. We find that, regardless of the geographical location, the climate and the environment, all species collections have universal statistical properties that we also show to be robust in time. We then compare observed data sets with simulated communities obtained by randomly sampling a large pool of species from all over the world. We find differences in the behavior of the statistical properties of the corresponding taxonomic trees. Our results suggest that it is possible to distinguish quantitatively real species assemblages from random collections and thus demonstrate the existence of correlations between species.
2311.03411
Chenwei Zhang
Chenwei Zhang, Jordan Lovrod, Boyan Beronov, Khanh Dao Duc, Anne Condon
ViDa: Visualizing DNA hybridization trajectories with biophysics-informed deep graph embeddings
Accepted to Machine Learning in Computational Biology as Oral presentation and PMLR acceptance
null
null
null
q-bio.QM cs.AI cs.HC cs.LG q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Visualization tools can help synthetic biologists and molecular programmers understand the complex reactive pathways of nucleic acid reactions, which can be designed for many potential applications and can be modelled using a continuous-time Markov chain (CTMC). Here we present ViDa, a new visualization approach for DNA reaction trajectories that uses a 2D embedding of the secondary structure state space underlying the CTMC model. To this end, we integrate a scattering transform of the secondary structure adjacency, a variational autoencoder, and a nonlinear dimensionality reduction method. We augment the training loss with domain-specific supervised terms that capture both thermodynamic and kinetic features. We assess ViDa on two well-studied DNA hybridization reactions. Our results demonstrate that the domain-specific features lead to significant quality improvements over the state-of-the-art in DNA state space visualization, successfully separating different folding pathways and thus providing useful insights into dominant reaction mechanisms.
[ { "created": "Mon, 6 Nov 2023 05:27:29 GMT", "version": "v1" } ]
2023-11-08
[ [ "Zhang", "Chenwei", "" ], [ "Lovrod", "Jordan", "" ], [ "Beronov", "Boyan", "" ], [ "Duc", "Khanh Dao", "" ], [ "Condon", "Anne", "" ] ]
Visualization tools can help synthetic biologists and molecular programmers understand the complex reactive pathways of nucleic acid reactions, which can be designed for many potential applications and can be modelled using a continuous-time Markov chain (CTMC). Here we present ViDa, a new visualization approach for DNA reaction trajectories that uses a 2D embedding of the secondary structure state space underlying the CTMC model. To this end, we integrate a scattering transform of the secondary structure adjacency, a variational autoencoder, and a nonlinear dimensionality reduction method. We augment the training loss with domain-specific supervised terms that capture both thermodynamic and kinetic features. We assess ViDa on two well-studied DNA hybridization reactions. Our results demonstrate that the domain-specific features lead to significant quality improvements over the state-of-the-art in DNA state space visualization, successfully separating different folding pathways and thus providing useful insights into dominant reaction mechanisms.
1209.3829
Jack Cowan
J D Cowan, J Neuman, and W van Drongelen
Self-organized criticality in a network of interacting neurons
17 pages, 4 figures, submitted to Journal of Statistical Mechanics
null
10.1088/1742-5468/2013/04/P04030
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper contains an analysis of a simple neural network that exhibits self-organized criticality. Such criticality follows from the combination of a simple neural network with an excitatory feedback loop that generates bistability, in combination with an anti-Hebbian synapse in its input pathway. Using the methods of statistical field theory, we show how one can formulate the stochastic dynamics of such a network as the action of a path integral, which we then investigate using renormalization group methods. The results indicate that the network exhibits hysteresis in switching back and forth between its two stable states, each of which loses its stability at a saddle-node bifurcation. The renormalization group analysis shows that the fluctuations in the neighborhood of such bifurcations have the signature of directed percolation. Thus the network states undergo the neural analog of a phase transition in the universality class of directed percolation. The network replicates precisely the behavior of the original sand-pile model of Bak, Tang & Wiesenfeld.
[ { "created": "Tue, 18 Sep 2012 02:25:50 GMT", "version": "v1" } ]
2015-06-11
[ [ "Cowan", "J D", "" ], [ "Neuman", "J", "" ], [ "van Drongelen", "W", "" ] ]
This paper contains an analysis of a simple neural network that exhibits self-organized criticality. Such criticality follows from the combination of a simple neural network with an excitatory feedback loop that generates bistability, in combination with an anti-Hebbian synapse in its input pathway. Using the methods of statistical field theory, we show how one can formulate the stochastic dynamics of such a network as the action of a path integral, which we then investigate using renormalization group methods. The results indicate that the network exhibits hysteresis in switching back and forth between its two stable states, each of which loses its stability at a saddle-node bifurcation. The renormalization group analysis shows that the fluctuations in the neighborhood of such bifurcations have the signature of directed percolation. Thus the network states undergo the neural analog of a phase transition in the universality class of directed percolation. The network replicates precisely the behavior of the original sand-pile model of Bak, Tang & Wiesenfeld.
2208.02552
Amir Jahangiri
Amir Jahangiri, Xiao Han, Dmitry Lesovoy, Tatiana Agback, Peter Agback, Adnane Achour, Vladislav Orekhov
NMR spectrum reconstruction as a pattern recognition problem
null
null
10.1016/j.jmr.2022.107342
null
q-bio.BM physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
A new deep neural network based on the WaveNet architecture (WNN) is presented, which is designed to grasp specific patterns in the NMR spectra. When trained at a fixed non-uniform sampling (NUS) schedule, the WNN benefits from pattern recognition of the corresponding point spread function (PSF) pattern produced by each spectral peak, resulting in the highest-quality and most robust reconstruction of the NUS spectra, as demonstrated in simulations and exemplified in this work on 2D 1H-15N correlation spectra of three representative globular proteins with different sizes: Ubiquitin (8.6 kDa), Azurin (14 kDa), and Malt1 (44 kDa). The pattern recognition by WNN is also demonstrated for successful virtual homo-decoupling in a 2D methyl 1H-13C HMQC spectrum of MALT1. We demonstrate using WNN that prior knowledge about the NUS schedule, which so far was not fully exploited, can be used for designing new powerful NMR processing techniques that surpass the existing algorithmic methods.
[ { "created": "Thu, 4 Aug 2022 09:49:51 GMT", "version": "v1" } ]
2022-12-05
[ [ "Jahangiri", "Amir", "" ], [ "Han", "Xiao", "" ], [ "Lesovoy", "Dmitry", "" ], [ "Agback", "Tatiana", "" ], [ "Agback", "Peter", "" ], [ "Achour", "Adnane", "" ], [ "Orekhov", "Vladislav", "" ] ]
A new deep neural network based on the WaveNet architecture (WNN) is presented, which is designed to grasp specific patterns in the NMR spectra. When trained at a fixed non-uniform sampling (NUS) schedule, the WNN benefits from pattern recognition of the corresponding point spread function (PSF) pattern produced by each spectral peak, resulting in the highest-quality and most robust reconstruction of the NUS spectra, as demonstrated in simulations and exemplified in this work on 2D 1H-15N correlation spectra of three representative globular proteins with different sizes: Ubiquitin (8.6 kDa), Azurin (14 kDa), and Malt1 (44 kDa). The pattern recognition by WNN is also demonstrated for successful virtual homo-decoupling in a 2D methyl 1H-13C HMQC spectrum of MALT1. We demonstrate using WNN that prior knowledge about the NUS schedule, which so far was not fully exploited, can be used for designing new powerful NMR processing techniques that surpass the existing algorithmic methods.
1810.05077
Abdullah Alchihabi
Abdullah Alchihabi, Omer Ekmekci, Baran B. Kivilcim, Sharlene D. Newman, Fatos T. Yarman Vural
On the Brain Networks of Complex Problem Solving
null
null
null
null
q-bio.NC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complex problem solving is a high-level cognitive process which has been thoroughly studied over the last decade. The Tower of London (TOL) is a task that has been widely used to study problem-solving. In this study, we aim to explore the underlying cognitive network dynamics among anatomical regions of complex problem solving and its sub-phases, namely planning and execution. A new brain network construction model establishing dynamic functional brain networks using fMRI is proposed. The first step of the model is a preprocessing pipeline that manages to decrease the spatial redundancy while increasing the temporal resolution of the fMRI recordings. Then, dynamic brain networks are estimated using artificial neural networks. The network properties of the estimated brain networks are studied in order to identify regions of interest, such as hubs and subgroups of densely connected brain regions. The major similarities and dissimilarities of the network structure of planning and execution phases are highlighted. Our findings show the hubs and clusters of densely interconnected regions during both subtasks. It is observed that there are more hubs during the planning phase compared to the execution phase, and the clusters are more strongly connected during planning compared to execution.
[ { "created": "Wed, 10 Oct 2018 09:22:21 GMT", "version": "v1" } ]
2018-10-12
[ [ "Alchihabi", "Abdullah", "" ], [ "Ekmekci", "Omer", "" ], [ "Kivilcim", "Baran B.", "" ], [ "Newman", "Sharlene D.", "" ], [ "Vural", "Fatos T. Yarman", "" ] ]
Complex problem solving is a high-level cognitive process which has been thoroughly studied over the last decade. The Tower of London (TOL) is a task that has been widely used to study problem-solving. In this study, we aim to explore the underlying cognitive network dynamics among anatomical regions of complex problem solving and its sub-phases, namely planning and execution. A new brain network construction model establishing dynamic functional brain networks using fMRI is proposed. The first step of the model is a preprocessing pipeline that manages to decrease the spatial redundancy while increasing the temporal resolution of the fMRI recordings. Then, dynamic brain networks are estimated using artificial neural networks. The network properties of the estimated brain networks are studied in order to identify regions of interest, such as hubs and subgroups of densely connected brain regions. The major similarities and dissimilarities of the network structure of planning and execution phases are highlighted. Our findings show the hubs and clusters of densely interconnected regions during both subtasks. It is observed that there are more hubs during the planning phase compared to the execution phase, and the clusters are more strongly connected during planning compared to execution.
2305.03257
Tom Bertalan
Tianqi Cui, Tom S. Bertalan, Nelson Ndahiro, Pratik Khare, Michael Betenbaugh, Costas Maranas, Ioannis G. Kevrekidis
Data-driven and Physics Informed Modelling of Chinese Hamster Ovary Cell Bioreactors
null
null
null
null
q-bio.QM cs.LG math.DS
http://creativecommons.org/licenses/by/4.0/
Fed-batch culture is an established operation mode for the production of biologics using mammalian cell cultures. Quantitative modeling integrates both kinetics for some key reaction steps and optimization-driven metabolic flux allocation, using flux balance analysis; this is known to lead to certain mathematical inconsistencies. Here, we propose a physics-informed, data-driven hybrid model (a "gray box") to learn models of the dynamical evolution of Chinese Hamster Ovary (CHO) cell bioreactors from process data. The approach incorporates physical laws (e.g. mass balances) as well as kinetic expressions for metabolic fluxes. Machine learning (ML) is then used to (a) directly learn evolution equations (black-box modelling); (b) recover unknown physical parameters ("white-box" parameter fitting) or -- importantly -- (c) learn partially unknown kinetic expressions (gray-box modelling). We encode the convex optimization step of the overdetermined metabolic biophysical system as a differentiable, feed-forward layer into our architectures, connecting partial physical knowledge with data-driven machine learning.
[ { "created": "Fri, 5 May 2023 03:09:33 GMT", "version": "v1" } ]
2023-05-08
[ [ "Cui", "Tianqi", "" ], [ "Bertalan", "Tom S.", "" ], [ "Ndahiro", "Nelson", "" ], [ "Khare", "Pratik", "" ], [ "Betenbaugh", "Michael", "" ], [ "Maranas", "Costas", "" ], [ "Kevrekidis", "Ioannis G.", "" ...
Fed-batch culture is an established operation mode for the production of biologics using mammalian cell cultures. Quantitative modeling integrates both kinetics for some key reaction steps and optimization-driven metabolic flux allocation, using flux balance analysis; this is known to lead to certain mathematical inconsistencies. Here, we propose a physics-informed, data-driven hybrid model (a "gray box") to learn models of the dynamical evolution of Chinese Hamster Ovary (CHO) cell bioreactors from process data. The approach incorporates physical laws (e.g. mass balances) as well as kinetic expressions for metabolic fluxes. Machine learning (ML) is then used to (a) directly learn evolution equations (black-box modelling); (b) recover unknown physical parameters ("white-box" parameter fitting) or -- importantly -- (c) learn partially unknown kinetic expressions (gray-box modelling). We encode the convex optimization step of the overdetermined metabolic biophysical system as a differentiable, feed-forward layer into our architectures, connecting partial physical knowledge with data-driven machine learning.
1205.0665
Cencini Massimo Dr.
Massimo Cencini, Simone Pigolotti and Miguel A. Mu\~noz
What ecological factors shape species-area curves in neutral models?
20 pages, 5 figures, merged with supplementary information (Accepted on PLoS ONE)
PLoS ONE 7(6): e38232 (2012)
10.1371/journal.pone.0038232
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding factors that shape biodiversity and species coexistence across scales is of utmost importance in ecology, both theoretically and for conservation policies. Species-area relationships (SARs), measuring how the number of observed species increases upon enlarging the sampled area, constitute a convenient tool for quantifying the spatial structure of biodiversity. While general features of species-area curves are quite universal across ecosystems, some quantitative aspects can change significantly. Several attempts have been made to link these variations to ecological forces. Within the framework of spatially explicit neutral models, here we scrutinize the effect of varying the local population size (i.e. the number of individuals per site) and the level of habitat saturation (allowing for empty sites). We conclude that species-area curves become shallower when the local population size increases, while habitat saturation, unless strongly violated, plays a marginal role. Our findings provide a plausible explanation of why SARs for microorganisms are flatter than those for larger organisms.
[ { "created": "Thu, 3 May 2012 09:55:49 GMT", "version": "v1" } ]
2012-06-06
[ [ "Cencini", "Massimo", "" ], [ "Pigolotti", "Simone", "" ], [ "Muñoz", "Miguel A.", "" ] ]
Understanding factors that shape biodiversity and species coexistence across scales is of utmost importance in ecology, both theoretically and for conservation policies. Species-area relationships (SARs), measuring how the number of observed species increases upon enlarging the sampled area, constitute a convenient tool for quantifying the spatial structure of biodiversity. While general features of species-area curves are quite universal across ecosystems, some quantitative aspects can change significantly. Several attempts have been made to link these variations to ecological forces. Within the framework of spatially explicit neutral models, here we scrutinize the effect of varying the local population size (i.e. the number of individuals per site) and the level of habitat saturation (allowing for empty sites). We conclude that species-area curves become shallower when the local population size increases, while habitat saturation, unless strongly violated, plays a marginal role. Our findings provide a plausible explanation of why SARs for microorganisms are flatter than those for larger organisms.
1708.08407
Jinbo Xu
Sheng Wang, Zhen Li, Yizhou Yu and Jinbo Xu
Folding membrane proteins by deep transfer learning
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational elucidation of membrane protein (MP) structures is challenging partially due to lack of sufficient solved structures for homology modeling. Here we describe a high-throughput deep transfer learning method that first predicts MP contacts by learning from non-membrane proteins (non-MPs) and then predicting three-dimensional structure models using the predicted contacts as distance restraints. Tested on 510 non-redundant MPs, our method has contact prediction accuracy at least 0.18 better than existing methods, predicts correct folds for 218 MPs (TMscore at least 0.6), and generates three-dimensional models with RMSD less than 4 Angstrom and 5 Angstrom for 57 and 108 MPs, respectively. A rigorous blind test in the continuous automated model evaluation (CAMEO) project shows that our method predicted high-resolution three-dimensional models for two recent test MPs of 210 residues with RMSD close to 2 Angstrom. We estimated that our method could predict correct folds for between 1,345 and 1,871 reviewed human multi-pass MPs including a few hundred new folds, which shall facilitate the discovery of drugs targeting at membrane proteins.
[ { "created": "Mon, 28 Aug 2017 16:38:52 GMT", "version": "v1" } ]
2017-08-29
[ [ "Wang", "Sheng", "" ], [ "Li", "Zhen", "" ], [ "Yu", "Yizhou", "" ], [ "Xu", "Jinbo", "" ] ]
Computational elucidation of membrane protein (MP) structures is challenging partially due to lack of sufficient solved structures for homology modeling. Here we describe a high-throughput deep transfer learning method that first predicts MP contacts by learning from non-membrane proteins (non-MPs) and then predicting three-dimensional structure models using the predicted contacts as distance restraints. Tested on 510 non-redundant MPs, our method has contact prediction accuracy at least 0.18 better than existing methods, predicts correct folds for 218 MPs (TMscore at least 0.6), and generates three-dimensional models with RMSD less than 4 Angstrom and 5 Angstrom for 57 and 108 MPs, respectively. A rigorous blind test in the continuous automated model evaluation (CAMEO) project shows that our method predicted high-resolution three-dimensional models for two recent test MPs of 210 residues with RMSD close to 2 Angstrom. We estimated that our method could predict correct folds for between 1,345 and 1,871 reviewed human multi-pass MPs including a few hundred new folds, which shall facilitate the discovery of drugs targeting at membrane proteins.
2004.01665
Fidel Santamaria
Horacio G. Rotstein and Fidel Santamaria
Present and future frameworks of theoretical neuroscience: outcomes of a community discussion
Workshop outcomes, 9 pages
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We organized a workshop on the "Present and Future Frameworks of Theoretical Neuroscience", with the support of the National Science Foundation. The objective was to identify the challenges and strategies that this field will need to tackle in order to incorporate vast and multi-scale streams of experimental data from the technologies developed by the BRAIN initiative. The participants, divided into workgroups, identified five key areas that, while not exhaustive, cover multiple aspects of the current challenges that need to be addressed: dynamics-statistics; multi-scale integration; coding; brain-body integration; and the structure of neuroscience theories. While each area is different, there was convergence on finding theoretical paths to incorporate biophysics, energetics, and ethology with more abstract coding and computational approaches. Each workgroup has continued to work after the meeting to develop the ideas seeded there, which are now starting to be published. Here, we provide a perspective on the discussions of each workgroup that points to building on the present foundations of theoretical neuroscience and extending them by incorporating multi-scale information, with the objective of providing mechanistic insights into the nervous system.
[ { "created": "Fri, 3 Apr 2020 16:38:31 GMT", "version": "v1" } ]
2020-04-06
[ [ "Rotstein", "Horacio G.", "" ], [ "Santamaria", "Fidel", "" ] ]
We organized a workshop on the "Present and Future Frameworks of Theoretical Neuroscience", with the support of the National Science Foundation. The objective was to identify the challenges and strategies that this field will need to tackle in order to incorporate vast and multi-scale streams of experimental data from the technologies developed by the BRAIN initiative. The participants, divided into workgroups, identified five key areas that, while not exhaustive, cover multiple aspects of the current challenges that need to be addressed: dynamics-statistics; multi-scale integration; coding; brain-body integration; and the structure of neuroscience theories. While each area is different, there was convergence on finding theoretical paths to incorporate biophysics, energetics, and ethology with more abstract coding and computational approaches. Each workgroup has continued to work after the meeting to develop the ideas seeded there, which are now starting to be published. Here, we provide a perspective on the discussions of each workgroup that points to building on the present foundations of theoretical neuroscience and extending them by incorporating multi-scale information, with the objective of providing mechanistic insights into the nervous system.
2201.05262
Adam Svahn
Adam J. Svahn, Sheryl L. Chang, Rebecca J. Rockett, Oliver M. Cliff, Qinning Wang, Alicia Arnott, Marc Ramsperger, Tania C. Sorrell, Vitali Sintchenko, Mikhail Prokopenko
Genome-wide networks reveal emergence of epidemic strains of Salmonella Enteritidis
null
International Journal of Infectious Diseases, Volume 117 (2022), 65 - 73
10.1016/j.ijid.2022.01.056
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objectives: To enhance monitoring of high-burden foodborne pathogens, there is opportunity to combine pangenome data with network analysis. Methods: Salmonella enterica subspecies Enterica serovar Enteritidis isolates were referred to the New South Wales (NSW) Enteric Reference Laboratory between August 2015 and December 2019 (1033 isolates in total), inclusive of a confirmed outbreak. All isolates underwent whole genome sequencing. Distances between genomes were quantified by in silico MLVA as well as core SNPs, which informed construction of undirected networks. Prevalence-centrality spaces were generated from the undirected networks. Components on the undirected SNP network were considered alongside a phylogenetic tree representation. Results: Outbreak isolates were identifiable as distinct components on the MLVA and SNP networks. The MLVA network based centrality/prevalence space did not delineate the outbreak, whereas the outbreak was clearly delineated in the SNP network based centrality/prevalence space. Components on the undirected SNP network showed a high concordance to the SNP clusters based on phylogenetic analysis. Conclusions: Bacterial whole genome data in network based analysis can improve the resolution of population analysis. High concordance of network components and SNP clusters is promising for rapid population analyses of foodborne Salmonella spp. due to the low overhead of network analysis.
[ { "created": "Fri, 14 Jan 2022 00:44:51 GMT", "version": "v1" }, { "created": "Mon, 31 Jan 2022 00:13:53 GMT", "version": "v2" } ]
2022-03-08
[ [ "Svahn", "Adam J.", "" ], [ "Chang", "Sheryl L.", "" ], [ "Rockett", "Rebecca J.", "" ], [ "Cliff", "Oliver M.", "" ], [ "Wang", "Qinning", "" ], [ "Arnott", "Alicia", "" ], [ "Ramsperger", "Marc", "" ], ...
Objectives: To enhance monitoring of high-burden foodborne pathogens, there is opportunity to combine pangenome data with network analysis. Methods: Salmonella enterica subspecies Enterica serovar Enteritidis isolates were referred to the New South Wales (NSW) Enteric Reference Laboratory between August 2015 and December 2019 (1033 isolates in total), inclusive of a confirmed outbreak. All isolates underwent whole genome sequencing. Distances between genomes were quantified by in silico MLVA as well as core SNPs, which informed construction of undirected networks. Prevalence-centrality spaces were generated from the undirected networks. Components on the undirected SNP network were considered alongside a phylogenetic tree representation. Results: Outbreak isolates were identifiable as distinct components on the MLVA and SNP networks. The MLVA network based centrality/prevalence space did not delineate the outbreak, whereas the outbreak was clearly delineated in the SNP network based centrality/prevalence space. Components on the undirected SNP network showed a high concordance to the SNP clusters based on phylogenetic analysis. Conclusions: Bacterial whole genome data in network based analysis can improve the resolution of population analysis. High concordance of network components and SNP clusters is promising for rapid population analyses of foodborne Salmonella spp. due to the low overhead of network analysis.
0912.4502
Steven Kelk
Leo van Iersel and Steven Kelk
A short note on the tractability of constructing phylogenetic networks from clusters
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In [2] it was proven that the Cass algorithm is a polynomial-time algorithm for constructing level<=2 networks from clusters. Here we demonstrate, for each k>=0, a polynomial-time algorithm for constructing level-k phylogenetic networks from clusters. Unlike Cass the algorithm scheme given here is only of theoretical interest. It does, however, strengthen the hope that efficient polynomial-time algorithms (and perhaps fixed parameter tractable algorithms) exist for this problem.
[ { "created": "Tue, 22 Dec 2009 20:29:29 GMT", "version": "v1" } ]
2009-12-23
[ [ "van Iersel", "Leo", "" ], [ "Kelk", "Steven", "" ] ]
In [2] it was proven that the Cass algorithm is a polynomial-time algorithm for constructing level<=2 networks from clusters. Here we demonstrate, for each k>=0, a polynomial-time algorithm for constructing level-k phylogenetic networks from clusters. Unlike Cass the algorithm scheme given here is only of theoretical interest. It does, however, strengthen the hope that efficient polynomial-time algorithms (and perhaps fixed parameter tractable algorithms) exist for this problem.
2401.04745
Hadi Mahmodi
Hadi Mahmodi, Christopher G. Poulton, Mathew N. Lesley, Glenn Oldham, Hui Xin Ong, Steven J. Langford, Irina V. Kabakova
Principal Component Analysis in Application to Brillouin Microscopy Data
null
null
null
null
q-bio.QM physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brillouin microscopy has recently emerged as a new bio-imaging modality that provides information on the micromechanical properties of biological materials, cells and tissues. The data collected in a typical Brillouin microscopy experiment represent a high-dimensional set of spectral information. Their analysis requires non-trivial approaches due to the subtlety of spectral variations as well as spatial and spectral overlaps of measured features. This article offers a guide to the application of Principal Component Analysis (PCA) for processing Brillouin imaging data. Being an unsupervised multivariate analysis method, PCA is well-suited to tackle the processing of complex Brillouin spectra from heterogeneous biological samples with minimal a priori information requirements. We point out the importance of data pre-processing steps in order to improve the outcomes of PCA. We also present a strategy in which PCA combined with the k-means clustering method can provide a working solution for data reconstruction and deeper insights into sample composition, structure and mechanics.
[ { "created": "Tue, 9 Jan 2024 09:58:01 GMT", "version": "v1" } ]
2024-01-11
[ [ "Mahmodi", "Hadi", "" ], [ "Poulton", "Christopher G.", "" ], [ "Lesley", "Mathew N.", "" ], [ "Oldham", "Glenn", "" ], [ "Ong", "Hui Xin", "" ], [ "Langford", "Steven J.", "" ], [ "Kabakova", "Irina V.", "...
Brillouin microscopy has recently emerged as a new bio-imaging modality that provides information on the micromechanical properties of biological materials, cells and tissues. The data collected in a typical Brillouin microscopy experiment represent a high-dimensional set of spectral information. Their analysis requires non-trivial approaches due to the subtlety of spectral variations as well as spatial and spectral overlaps of measured features. This article offers a guide to the application of Principal Component Analysis (PCA) for processing Brillouin imaging data. Being an unsupervised multivariate analysis method, PCA is well-suited to tackle the processing of complex Brillouin spectra from heterogeneous biological samples with minimal a priori information requirements. We point out the importance of data pre-processing steps in order to improve the outcomes of PCA. We also present a strategy in which PCA combined with the k-means clustering method can provide a working solution for data reconstruction and deeper insights into sample composition, structure and mechanics.
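As a rough illustration of the PCA-plus-k-means strategy described in the abstract above, both steps fit in a few lines of NumPy. This is not the authors' code: the synthetic "spectra", the peak positions, and the noise level are all invented for the sketch.

```python
import numpy as np

def pca(X, n_components):
    # Center the spectra, then take principal components via SVD.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :n_components] * S[:n_components], Vt[:n_components]

def kmeans(X, k, n_iter=100):
    # Plain Lloyd's algorithm; deterministic init is fine for this
    # well-separated demo (a real analysis would use k-means++).
    centers = X[:k].copy()
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Synthetic "spectra": two groups whose Brillouin-like peak sits at
# slightly different frequencies (invented data, for illustration only).
rng = np.random.default_rng(1)
freq = np.linspace(0.0, 1.0, 200)
group_a = np.exp(-(freq - 0.4) ** 2 / 0.002) + 0.05 * rng.standard_normal((30, 200))
group_b = np.exp(-(freq - 0.6) ** 2 / 0.002) + 0.05 * rng.standard_normal((30, 200))
X = np.vstack([group_a, group_b])

scores, components = pca(X, 2)      # dimensionality reduction
labels, _ = kmeans(scores, 2)       # clustering in the reduced space
print(scores.shape)                 # (60, 2)
```

With well-separated peaks the two groups fall into distinct clusters in PC space, which mirrors the paper's point that clustering after PCA needs no a priori labels.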
2004.14533
Krishna Dasaratha
Krishna Dasaratha
Virus Dynamics with Behavioral Responses
null
null
null
null
q-bio.PE econ.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by epidemics such as COVID-19, we study the spread of a contagious disease when behavior responds to the disease's prevalence. We extend the SIR epidemiological model to include endogenous meeting rates. Individuals benefit from economic activity, but activity involves interactions with potentially infected individuals. The main focus is a theoretical analysis of contagion dynamics and behavioral responses to changes in risk. We obtain a simple condition for when public-health interventions or variants of a disease will have paradoxical effects on infection rates due to risk compensation. Behavioral responses are most likely to undermine public-health interventions near the peak of severe diseases.
[ { "created": "Thu, 30 Apr 2020 01:16:31 GMT", "version": "v1" }, { "created": "Tue, 12 May 2020 03:03:15 GMT", "version": "v2" }, { "created": "Fri, 19 Jun 2020 13:58:23 GMT", "version": "v3" }, { "created": "Thu, 3 Feb 2022 18:58:51 GMT", "version": "v4" }, { "cr...
2023-09-25
[ [ "Dasaratha", "Krishna", "" ] ]
Motivated by epidemics such as COVID-19, we study the spread of a contagious disease when behavior responds to the disease's prevalence. We extend the SIR epidemiological model to include endogenous meeting rates. Individuals benefit from economic activity, but activity involves interactions with potentially infected individuals. The main focus is a theoretical analysis of contagion dynamics and behavioral responses to changes in risk. We obtain a simple condition for when public-health interventions or variants of a disease will have paradoxical effects on infection rates due to risk compensation. Behavioral responses are most likely to undermine public-health interventions near the peak of severe diseases.
1806.07365
Trang-Anh Estelle Nghiem
Trang-Anh Nghiem, Jean-Marc Lina, Matteo di Volo, Cristiano Capone, Alan C. Evans, Alain Destexhe, and Jennifer S. Goldman
State equation from the spectral structure of human brain activity
6 pages, 4 figures
null
null
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural electromagnetic (EM) signals recorded non-invasively from individual human subjects vary in complexity and magnitude. Nonetheless, variation in neural activity has been difficult to quantify and interpret, due to complex, broad-band features in the frequency domain. Studying signals recorded with magnetoencephalography (MEG) from healthy young adult subjects while in resting and active states, a systematic framework inspired by thermodynamics is applied to neural EM signals. Despite considerable inter-subject variation in terms of spectral entropy and energy across time epochs, data support the existence of a robust and linear relationship defining an effective state equation, with higher energy and lower entropy in the resting state compared to active, consistently across subjects. Mechanisms underlying the emergence of relationships between empirically measured effective state functions are further investigated using a model network of coupled oscillators, suggesting an interplay between noise and coupling strength can account for coherent variation of empirically observed quantities. Taken together, the results show macroscopic neural observables follow a robust, non-trivial conservation rule for energy modulation and information generation.
[ { "created": "Tue, 19 Jun 2018 17:49:40 GMT", "version": "v1" }, { "created": "Tue, 3 Jul 2018 14:45:06 GMT", "version": "v2" } ]
2018-07-04
[ [ "Nghiem", "Trang-Anh", "" ], [ "Lina", "Jean-Marc", "" ], [ "di Volo", "Matteo", "" ], [ "Capone", "Cristiano", "" ], [ "Evans", "Alan C.", "" ], [ "Destexhe", "Alain", "" ], [ "Goldman", "Jennifer S.", "" ...
Neural electromagnetic (EM) signals recorded non-invasively from individual human subjects vary in complexity and magnitude. Nonetheless, variation in neural activity has been difficult to quantify and interpret, due to complex, broad-band features in the frequency domain. Studying signals recorded with magnetoencephalography (MEG) from healthy young adult subjects while in resting and active states, a systematic framework inspired by thermodynamics is applied to neural EM signals. Despite considerable inter-subject variation in terms of spectral entropy and energy across time epochs, data support the existence of a robust and linear relationship defining an effective state equation, with higher energy and lower entropy in the resting state compared to active, consistently across subjects. Mechanisms underlying the emergence of relationships between empirically measured effective state functions are further investigated using a model network of coupled oscillators, suggesting an interplay between noise and coupling strength can account for coherent variation of empirically observed quantities. Taken together, the results show macroscopic neural observables follow a robust, non-trivial conservation rule for energy modulation and information generation.
1601.07970
Seung Ki Baek
Seung Ki Baek, Hyeong-Chai Jeong, Christian Hilbe, and Martin A. Nowak
Comparing reactive and memory-one strategies of direct reciprocity
18 pages, 7 figures
Sci. Rep. 6, 25676 (2016)
10.1038/srep25676
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Direct reciprocity is a mechanism for the evolution of cooperation based on repeated interactions. When individuals meet repeatedly, they can use conditional strategies to enforce cooperative outcomes that would not be feasible in one-shot social dilemmas. Direct reciprocity requires that individuals keep track of their past interactions and find the right response. However, there are natural bounds on strategic complexity: Humans find it difficult to remember past interactions accurately, especially over long timespans. Given these limitations, it is natural to ask how complex strategies need to be for cooperation to evolve. Here, we study stochastic evolutionary game dynamics in finite populations to systematically compare the evolutionary performance of reactive strategies, which only respond to the co-player's previous move, and memory-one strategies, which take into account the own and the co-player's previous move. In both cases, we compare deterministic strategy and stochastic strategy spaces. For reactive strategies and small costs, we find that stochasticity benefits cooperation, because it allows for generous-tit-for-tat. For memory one strategies and small costs, we find that stochasticity does not increase the propensity for cooperation, because the deterministic rule of win-stay, lose-shift works best. For memory one strategies and large costs, however, stochasticity can augment cooperation.
[ { "created": "Fri, 29 Jan 2016 02:57:37 GMT", "version": "v1" }, { "created": "Mon, 21 Mar 2016 05:17:49 GMT", "version": "v2" }, { "created": "Mon, 23 May 2016 05:49:42 GMT", "version": "v3" } ]
2016-05-24
[ [ "Baek", "Seung Ki", "" ], [ "Jeong", "Hyeong-Chai", "" ], [ "Hilbe", "Christian", "" ], [ "Nowak", "Martin A.", "" ] ]
Direct reciprocity is a mechanism for the evolution of cooperation based on repeated interactions. When individuals meet repeatedly, they can use conditional strategies to enforce cooperative outcomes that would not be feasible in one-shot social dilemmas. Direct reciprocity requires that individuals keep track of their past interactions and find the right response. However, there are natural bounds on strategic complexity: Humans find it difficult to remember past interactions accurately, especially over long timespans. Given these limitations, it is natural to ask how complex strategies need to be for cooperation to evolve. Here, we study stochastic evolutionary game dynamics in finite populations to systematically compare the evolutionary performance of reactive strategies, which only respond to the co-player's previous move, and memory-one strategies, which take into account the own and the co-player's previous move. In both cases, we compare deterministic strategy and stochastic strategy spaces. For reactive strategies and small costs, we find that stochasticity benefits cooperation, because it allows for generous-tit-for-tat. For memory one strategies and small costs, we find that stochasticity does not increase the propensity for cooperation, because the deterministic rule of win-stay, lose-shift works best. For memory one strategies and large costs, however, stochasticity can augment cooperation.
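The repeated-game setting above has a standard exact calculation behind it: two reactive strategies induce a four-state Markov chain over the joint moves, and long-run payoffs follow from its stationary distribution. The sketch below is illustrative only, not the paper's evolutionary dynamics; the payoff values and strategy parameters are assumptions.

```python
import numpy as np

def stationary_payoffs(p1, q1, p2, q2, R=3.0, S=0.0, T=5.0, P=1.0):
    """Long-run payoffs of two reactive strategies in the iterated
    prisoner's dilemma. (p, q) = probability of cooperating after the
    co-player's C or D; joint states are ordered CC, CD, DC, DD."""
    c1 = np.array([p1, q1, p1, q1])   # P(player 1 cooperates | state)
    c2 = np.array([p2, p2, q2, q2])   # P(player 2 cooperates | state)
    M = np.column_stack([c1 * c2, c1 * (1 - c2),
                         (1 - c1) * c2, (1 - c1) * (1 - c2)])
    # Stationary distribution = left eigenvector of M for eigenvalue 1.
    vals, vecs = np.linalg.eig(M.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    v = v / v.sum()
    return v @ np.array([R, S, T, P]), v @ np.array([R, T, S, P])

# A stochastic, generous-tit-for-tat-like strategy against itself
# sustains near-full cooperation ...
g1, g2 = stationary_payoffs(0.99, 1/3, 0.99, 1/3)
# ... while always-defect against itself earns only the punishment payoff.
d1, d2 = stationary_payoffs(0.0, 0.0, 0.0, 0.0)
print(round(g1, 2), round(d1, 2))
```

Keeping the cooperation probabilities strictly inside (0, 1) makes the chain irreducible, which is why stochastic strategies such as generous-tit-for-tat admit this clean stationary analysis.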
1708.09273
Dhananjay Suresh
Angela B. Javurek, Dhananjay Suresh, William G. Spollen, Marcia L. Hart, Sarah A. Hansen, Mark R. Ellersieck, Nathan J. Bivens, Scott A. Givan, Anandhi Upendran, Raghuraman Kannan, Cheryl S. Rosenfeld
Gut Dysbiosis and Neurobehavioral Alterations in Rats Exposed to Silver Nanoparticles
14 figures, 15 pages
Scientific Reports 7, Article number: 2822 (2017)
10.1038/s41598-017-02880-0
null
q-bio.TO q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Due to their antimicrobial properties, silver nanoparticles (AgNPs) are being used in non-edible and edible consumer products. It is not clear though if exposure to these chemicals can exert toxic effects on the host and gut microbiome. Conflicting studies have been reported on whether AgNPs result in gut dysbiosis and other changes within the host. We sought to examine whether exposure of Sprague-Dawley male rats for two weeks to different shapes of AgNPs, cube (AgNC) and sphere (AgNS) affects gut microbiota, select behaviors, and induces histopathological changes in the gastrointestinal system and brain. In the elevated plus maze (EPM), AgNS-exposed rats showed greater number of entries into closed arms and center compared to controls and those exposed to AgNC. AgNS and AgNC treated groups had select reductions in gut microbiota relative to controls. Clostridium spp., Bacteroides uniformis, Christensenellaceae, and Coprococcus eutactus were decreased in AgNC exposed group, whereas, Oscillospira spp., Dehalobacterium spp., Peptococcaeceae, Corynebacterium spp., Aggregatibacter pneumotropica were reduced in AgNS exposed group. Bacterial reductions correlated with select behavioral changes measured in the EPM. No significant histopathological changes were evident in the gastrointestinal system or brain. Findings suggest short-term exposure to AgNS or AgNC can lead to behavioral and gut microbiome changes.
[ { "created": "Thu, 24 Aug 2017 21:47:42 GMT", "version": "v1" } ]
2017-08-31
[ [ "Javurek", "Angela B.", "" ], [ "Suresh", "Dhananjay", "" ], [ "Spollen", "William G.", "" ], [ "Hart", "Marcia L.", "" ], [ "Hansen", "Sarah A.", "" ], [ "Ellersieck", "Mark R.", "" ], [ "Bivens", "Nathan J.",...
Due to their antimicrobial properties, silver nanoparticles (AgNPs) are being used in non-edible and edible consumer products. It is not clear though if exposure to these chemicals can exert toxic effects on the host and gut microbiome. Conflicting studies have been reported on whether AgNPs result in gut dysbiosis and other changes within the host. We sought to examine whether exposure of Sprague-Dawley male rats for two weeks to different shapes of AgNPs, cube (AgNC) and sphere (AgNS) affects gut microbiota, select behaviors, and induces histopathological changes in the gastrointestinal system and brain. In the elevated plus maze (EPM), AgNS-exposed rats showed greater number of entries into closed arms and center compared to controls and those exposed to AgNC. AgNS and AgNC treated groups had select reductions in gut microbiota relative to controls. Clostridium spp., Bacteroides uniformis, Christensenellaceae, and Coprococcus eutactus were decreased in AgNC exposed group, whereas, Oscillospira spp., Dehalobacterium spp., Peptococcaeceae, Corynebacterium spp., Aggregatibacter pneumotropica were reduced in AgNS exposed group. Bacterial reductions correlated with select behavioral changes measured in the EPM. No significant histopathological changes were evident in the gastrointestinal system or brain. Findings suggest short-term exposure to AgNS or AgNC can lead to behavioral and gut microbiome changes.
1311.1555
Xiaohua Zhou
Xiaohua Zhou and Shengli Zhang
Manipulate the coiling and uncoiling movements of Lepidoptera proboscis by its conformation optimizing
7 pages, 6 figures
null
null
null
q-bio.TO cond-mat.mtrl-sci cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many kinds of adult Lepidoptera insects possess a long proboscis that is used to suck liquids and exhibits coiling and uncoiling movements. Although experiments revealed qualitatively that the coiling movement is governed by a hydraulic mechanism and the uncoiling movement is due to musculature and elasticity, a quantitative investigation is needed to reveal how insects achieve these behaviors accurately. Here a quasi-one-dimensional (Q1D) curvature elastica model is proposed to reveal the mechanism of these behaviors. We find that the functions of the internal stipes muscle and the basal galeal muscle, which are located at the base of the proboscis, are to adjust the initial states in the coiling and uncoiling processes, respectively. The function of the internal galeal muscle, which runs along the proboscis, is to adjust the line tension. The knee bend shape is due to a local maximum of the spontaneous curvature and is an advantage for nectar-feeding butterflies. When there is no knee bend, the proboscis of fruit-piercing butterflies can easily achieve the piercing movement, which is induced by an increase in internal hydraulic pressure. All of the results are in good agreement with experimental observations. Our study provides a revealing method for investigating the mechanical behaviors of other 1D biological structures, such as the proboscises of marine snails and elephants. Our method and results are also significant for designing bionic devices.
[ { "created": "Thu, 7 Nov 2013 01:07:45 GMT", "version": "v1" } ]
2013-11-08
[ [ "Zhou", "Xiaohua", "" ], [ "Zhang", "Shengli", "" ] ]
Many kinds of adult Lepidoptera insects possess a long proboscis that is used to suck liquids and exhibits coiling and uncoiling movements. Although experiments revealed qualitatively that the coiling movement is governed by a hydraulic mechanism and the uncoiling movement is due to musculature and elasticity, a quantitative investigation is needed to reveal how insects achieve these behaviors accurately. Here a quasi-one-dimensional (Q1D) curvature elastica model is proposed to reveal the mechanism of these behaviors. We find that the functions of the internal stipes muscle and the basal galeal muscle, which are located at the base of the proboscis, are to adjust the initial states in the coiling and uncoiling processes, respectively. The function of the internal galeal muscle, which runs along the proboscis, is to adjust the line tension. The knee bend shape is due to a local maximum of the spontaneous curvature and is an advantage for nectar-feeding butterflies. When there is no knee bend, the proboscis of fruit-piercing butterflies can easily achieve the piercing movement, which is induced by an increase in internal hydraulic pressure. All of the results are in good agreement with experimental observations. Our study provides a revealing method for investigating the mechanical behaviors of other 1D biological structures, such as the proboscises of marine snails and elephants. Our method and results are also significant for designing bionic devices.
1802.01612
Debayan Chakraborty
Debayan Chakraborty, Naoto Hori, and Dave Thirumalai
Sequence-dependent Three Interaction Site (TIS) Model for Single and Double-stranded DNA
null
null
10.1021/acs.jctc.8b00091
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a robust coarse-grained model for single and double stranded DNA by representing each nucleotide by three interaction sites (TIS) located at the centers of mass of sugar, phosphate, and base. The resulting TIS model includes base-stacking, hydrogen bond, and electrostatic interactions as well as bond-stretching and bond angle potentials that account for the polymeric nature of DNA. The choices of force constants for stretching and the bending potentials were guided by a Boltzmann inversion procedure using a large representative set of DNA structures extracted from the Protein Data Bank. Some of the parameters in the stacking interactions were calculated using a learning procedure, which ensured that the experimentally measured melting temperatures of dimers are faithfully reproduced. Without any further adjustments, the calculations based on the TIS model reproduces the experimentally measured salt and sequence dependence of the size of single stranded DNA (ssDNA), as well as the persistence lengths of poly(dA) and poly(dT) chains. Interestingly, upon application of mechanical force the extension of poly(dA) exhibits a plateau, which we trace to the formation of stacked helical domains. In contrast, the force-extension curve (FEC) of poly(dT) is entropic in origin, and could be described by a standard polymer model. We also show that the persistence length of double stranded DNA is consistent with the prediction based on the worm-like chain. The persistence length, which decreases with increasing salt concentration, is in accord with the Odijk-Skolnick-Fixman theory intended for stiff polyelectrolyte chains near the rod limit. The range of applications, which did not require adjusting any parameter after the initial construction based solely on PDB structures and melting profiles of dimers, attests to the transferability and robustness of the TIS model for ssDNA and dsDNA.
[ { "created": "Mon, 5 Feb 2018 19:22:43 GMT", "version": "v1" } ]
2018-08-01
[ [ "Chakraborty", "Debayan", "" ], [ "Hori", "Naoto", "" ], [ "Thirumalai", "Dave", "" ] ]
We develop a robust coarse-grained model for single and double stranded DNA by representing each nucleotide by three interaction sites (TIS) located at the centers of mass of sugar, phosphate, and base. The resulting TIS model includes base-stacking, hydrogen bond, and electrostatic interactions as well as bond-stretching and bond angle potentials that account for the polymeric nature of DNA. The choices of force constants for stretching and the bending potentials were guided by a Boltzmann inversion procedure using a large representative set of DNA structures extracted from the Protein Data Bank. Some of the parameters in the stacking interactions were calculated using a learning procedure, which ensured that the experimentally measured melting temperatures of dimers are faithfully reproduced. Without any further adjustments, the calculations based on the TIS model reproduces the experimentally measured salt and sequence dependence of the size of single stranded DNA (ssDNA), as well as the persistence lengths of poly(dA) and poly(dT) chains. Interestingly, upon application of mechanical force the extension of poly(dA) exhibits a plateau, which we trace to the formation of stacked helical domains. In contrast, the force-extension curve (FEC) of poly(dT) is entropic in origin, and could be described by a standard polymer model. We also show that the persistence length of double stranded DNA is consistent with the prediction based on the worm-like chain. The persistence length, which decreases with increasing salt concentration, is in accord with the Odijk-Skolnick-Fixman theory intended for stiff polyelectrolyte chains near the rod limit. The range of applications, which did not require adjusting any parameter after the initial construction based solely on PDB structures and melting profiles of dimers, attests to the transferability and robustness of the TIS model for ssDNA and dsDNA.
2111.13785
Xiaoyu Zhang
Xiaoyu Zhang and Yike Guo
OmiTrans: generative adversarial networks based omics-to-omics translation framework
9 pages, 9 figures
BIBM 2022 Regular Paper
null
null
q-bio.GN cs.AI cs.LG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid development of high-throughput experimental technologies, different types of omics data (e.g., genomics, epigenomics, transcriptomics, proteomics, and metabolomics) can be produced from clinical samples. The correlations between different omics types attract a lot of research interest, whereas the study of genome-wide omics data translation (i.e., the generation and prediction of one type of omics data from another type of omics data) is almost blank. Generative adversarial networks and their variants are among the most state-of-the-art deep learning technologies, and have shown great success in image-to-image translation, text-to-image translation, etc. Here we propose OmiTrans, a deep learning framework that adopts the idea of generative adversarial networks to achieve omics-to-omics translation, with promising results. OmiTrans was able to faithfully reconstruct gene expression profiles from DNA methylation data with high accuracy and good model generalisation, as demonstrated in the experiments.
[ { "created": "Sat, 27 Nov 2021 00:45:10 GMT", "version": "v1" } ]
2022-11-18
[ [ "Zhang", "Xiaoyu", "" ], [ "Guo", "Yike", "" ] ]
With the rapid development of high-throughput experimental technologies, different types of omics data (e.g., genomics, epigenomics, transcriptomics, proteomics, and metabolomics) can be produced from clinical samples. The correlations between different omics types attract a lot of research interest, whereas the study of genome-wide omics data translation (i.e., the generation and prediction of one type of omics data from another type of omics data) is almost blank. Generative adversarial networks and their variants are among the most state-of-the-art deep learning technologies, and have shown great success in image-to-image translation, text-to-image translation, etc. Here we propose OmiTrans, a deep learning framework that adopts the idea of generative adversarial networks to achieve omics-to-omics translation, with promising results. OmiTrans was able to faithfully reconstruct gene expression profiles from DNA methylation data with high accuracy and good model generalisation, as demonstrated in the experiments.
2110.09642
Mohamed Mehdaoui
Mohamed Mehdaoui
A review of commonly used compartmental models in epidemiology
null
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
In order to model an epidemic, different approaches can be adopted, mainly the deterministic approach and the stochastic one. Recently, a large amount of literature has been published using the two approaches. The aim of this paper is to illustrate the usual framework for commonly adopted compartmental models in epidemiology and to introduce the various analytic and numerical tools that come into play for each of those models, as well as the general types of existing, ongoing, and possible future contributions.
[ { "created": "Mon, 18 Oct 2021 22:43:53 GMT", "version": "v1" }, { "created": "Wed, 20 Oct 2021 19:46:05 GMT", "version": "v2" }, { "created": "Thu, 7 Jul 2022 09:43:38 GMT", "version": "v3" }, { "created": "Thu, 26 Jan 2023 20:49:15 GMT", "version": "v4" } ]
2023-01-30
[ [ "Mehdaoui", "Mohamed", "" ] ]
In order to model an epidemic, different approaches can be adopted, mainly the deterministic approach and the stochastic one. Recently, a large amount of literature has been published using the two approaches. The aim of this paper is to illustrate the usual framework for commonly adopted compartmental models in epidemiology and to introduce the various analytic and numerical tools that come into play for each of those models, as well as the general types of existing, ongoing, and possible future contributions.
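As a concrete instance of the deterministic approach the abstract above refers to, here is a minimal forward-Euler integration of the classic SIR compartmental model; the parameter values are arbitrary and chosen only for illustration.

```python
def sir(beta, gamma, s0, i0, r0, days, dt=0.01):
    """Forward-Euler integration of dS/dt = -beta*S*I,
    dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I
    (compartments as fractions of the population)."""
    s, i, r = s0, i0, r0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # S -> I flow this step
        new_rec = gamma * i * dt      # I -> R flow this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# Basic reproduction number R0 = beta/gamma = 2.5: a large outbreak
# that eventually burns out, leaving most of the population recovered.
s, i, r = sir(beta=0.5, gamma=0.2, s0=0.999, i0=0.001, r0=0.0, days=365)
print(round(s, 3), round(i, 8), round(r, 3))
```

A stochastic counterpart would replace the deterministic per-step flows with random draws (e.g. a Gillespie simulation), which is exactly the dichotomy the review organizes its material around.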
2006.04566
Ad\'an Myers y Guti\'errez
Po-E Li, Ad\'an Myers y Guti\'errez, Karen Davenport, Mark Flynn, Bin Hu, Chien-Chi Lo, Elais Player Jackson, Migun Shakya, Yan Xu, Jason Gans, and Patrick S. G. Chain
A Public Website for the Automated Assessment and Validation of SARS-CoV-2 Diagnostic PCR Assays
Application Note. Main: 2 pages, 1 figure. Supplementary: 6 pages, 8 figures, 1 table. Total: 8 pages, 9 figures, 1 table. Application url: https://covid19.edgebioinformatics.org/#/assayValidation Contact: Jason Gans (jgans@lanl.gov) and Patrick Chain (pchain@lanl.gov) Submitted to: Bioinformatics
null
null
null
q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Summary: Polymerase chain reaction-based assays are the current gold standard for detecting and diagnosing SARS-CoV-2. However, as SARS-CoV-2 mutates, we need to constantly assess whether existing PCR-based assays will continue to detect all known viral strains. To enable the continuous monitoring of SARS-CoV-2 assays, we have developed a web-based assay validation algorithm that checks existing PCR-based assays against the ever-expanding genome databases for SARS-CoV-2 using both thermodynamic and edit-distance metrics. The assay screening results are displayed as a heatmap, showing the number of mismatches between each detection and each SARS-CoV-2 genome sequence. Using a mismatch threshold to define detection failure, assay performance is summarized with the true positive rate (recall) to simplify assay comparisons. Availability: https://covid19.edgebioinformatics.org/#/assayValidation. Contact: Jason Gans (jgans@lanl.gov) and Patrick Chain (pchain@lanl.gov)
[ { "created": "Mon, 8 Jun 2020 13:17:07 GMT", "version": "v1" } ]
2020-06-09
[ [ "Li", "Po-E", "" ], [ "Gutiérrez", "Adán Myers y", "" ], [ "Davenport", "Karen", "" ], [ "Flynn", "Mark", "" ], [ "Hu", "Bin", "" ], [ "Lo", "Chien-Chi", "" ], [ "Jackson", "Elais Player", "" ], [ "...
Summary: Polymerase chain reaction-based assays are the current gold standard for detecting and diagnosing SARS-CoV-2. However, as SARS-CoV-2 mutates, we need to constantly assess whether existing PCR-based assays will continue to detect all known viral strains. To enable the continuous monitoring of SARS-CoV-2 assays, we have developed a web-based assay validation algorithm that checks existing PCR-based assays against the ever-expanding genome databases for SARS-CoV-2 using both thermodynamic and edit-distance metrics. The assay screening results are displayed as a heatmap, showing the number of mismatches between each detection and each SARS-CoV-2 genome sequence. Using a mismatch threshold to define detection failure, assay performance is summarized with the true positive rate (recall) to simplify assay comparisons. Availability: https://covid19.edgebioinformatics.org/#/assayValidation. Contact: Jason Gans (jgans@lanl.gov) and Patrick Chain (pchain@lanl.gov)
1407.0320
Shilpa Nadimpalli
Shilpa Nadimpalli, Anton V. Persikov and Mona Singh
Pervasive variation of transcription factor orthologs contributes to regulatory network evolution
29 pages, 5 figures, 5 supplemental figures, 3 supplemental tables
PLOS Genetics 11(3): e1005011. 2015
10.1371/journal.pgen.1005011
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Differences in transcriptional regulatory networks underlie much of the phenotypic variation observed across organisms. Changes to cis-regulatory elements are widely believed to be the predominant means by which regulatory networks evolve, yet examples of regulatory network divergence due to transcription factor (TF) variation have also been observed. To systematically ascertain the extent to which TFs contribute to regulatory divergence, we analyzed the evolution of the largest class of metazoan TFs, Cys2-His2 zinc finger (C2H2-ZF) TFs, across 12 Drosophila species spanning ~45 million years of evolution. Remarkably, we uncovered that a significant fraction of all C2H2-ZF 1-to-1 orthologs in flies exhibit variations that can affect their DNA-binding specificities. In addition to loss and recruitment of C2H2-ZF domains, we found diverging DNA-contacting residues in ~47% of domains shared between D. melanogaster and the other fly species. These diverging DNA-contacting residues, found in ~66% of the D. melanogaster C2H2-ZF genes in our analysis and corresponding to ~24% of all annotated D. melanogaster TFs, show evidence of functional constraint: they tend to be conserved across phylogenetic clades and evolve slower than other diverging residues. These same variations were rarely found as polymorphisms within a population of D. melanogaster flies, indicating their rapid fixation. The predicted specificities of these dynamic domains gradually change across phylogenetic distances, suggesting stepwise evolutionary trajectories for TF divergence. Further, whereas proteins with conserved C2H2-ZF domains are enriched in developmental functions, those with varying domains exhibit no functional enrichments. Our work suggests that a subset of highly dynamic and largely unstudied TFs are a likely source of regulatory variation in Drosophila and other metazoans.
[ { "created": "Tue, 1 Jul 2014 16:59:57 GMT", "version": "v1" } ]
2017-04-28
[ [ "Nadimpalli", "Shilpa", "" ], [ "Persikov", "Anton V.", "" ], [ "Singh", "Mona", "" ] ]
Differences in transcriptional regulatory networks underlie much of the phenotypic variation observed across organisms. Changes to cis-regulatory elements are widely believed to be the predominant means by which regulatory networks evolve, yet examples of regulatory network divergence due to transcription factor (TF) variation have also been observed. To systematically ascertain the extent to which TFs contribute to regulatory divergence, we analyzed the evolution of the largest class of metazoan TFs, Cys2-His2 zinc finger (C2H2-ZF) TFs, across 12 Drosophila species spanning ~45 million years of evolution. Remarkably, we uncovered that a significant fraction of all C2H2-ZF 1-to-1 orthologs in flies exhibit variations that can affect their DNA-binding specificities. In addition to loss and recruitment of C2H2-ZF domains, we found diverging DNA-contacting residues in ~47% of domains shared between D. melanogaster and the other fly species. These diverging DNA-contacting residues, found in ~66% of the D. melanogaster C2H2-ZF genes in our analysis and corresponding to ~24% of all annotated D. melanogaster TFs, show evidence of functional constraint: they tend to be conserved across phylogenetic clades and evolve slower than other diverging residues. These same variations were rarely found as polymorphisms within a population of D. melanogaster flies, indicating their rapid fixation. The predicted specificities of these dynamic domains gradually change across phylogenetic distances, suggesting stepwise evolutionary trajectories for TF divergence. Further, whereas proteins with conserved C2H2-ZF domains are enriched in developmental functions, those with varying domains exhibit no functional enrichments. Our work suggests that a subset of highly dynamic and largely unstudied TFs are a likely source of regulatory variation in Drosophila and other metazoans.
1802.00502
Frank Stollmeier
Frank Stollmeier and Jan Nagler
Unfair and Anomalous Evolutionary Dynamics from Fluctuating Payoffs
6 pages, 8 pages supplement
Physical Review Letters 120, 058101 (2018)
10.1103/PhysRevLett.120.058101
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolution occurs in populations of reproducing individuals. Reproduction depends on the payoff a strategy receives. The payoff depends on the environment that may change over time, on intrinsic uncertainties, and on other sources of randomness. These temporal variations in the payoffs can affect which traits evolve. Understanding evolutionary game dynamics that are affected by varying payoffs remains difficult. Here we study the impact of arbitrary amplitudes and covariances of temporally varying payoffs on the dynamics. The evolutionary dynamics may be "unfair", meaning that, on average, two coexisting strategies may persistently receive different payoffs. This mechanism can induce an anomalous coexistence of cooperators and defectors in the Prisoner's Dilemma, and an unexpected selection reversal in the Hawk-Dove game.
[ { "created": "Thu, 1 Feb 2018 21:56:47 GMT", "version": "v1" } ]
2018-02-05
[ [ "Stollmeier", "Frank", "" ], [ "Nagler", "Jan", "" ] ]
Evolution occurs in populations of reproducing individuals. Reproduction depends on the payoff a strategy receives. The payoff depends on the environment that may change over time, on intrinsic uncertainties, and on other sources of randomness. These temporal variations in the payoffs can affect which traits evolve. Understanding evolutionary game dynamics that are affected by varying payoffs remains difficult. Here we study the impact of arbitrary amplitudes and covariances of temporally varying payoffs on the dynamics. The evolutionary dynamics may be "unfair", meaning that, on average, two coexisting strategies may persistently receive different payoffs. This mechanism can induce an anomalous coexistence of cooperators and defectors in the Prisoner's Dilemma, and an unexpected selection reversal in the Hawk-Dove game.
2408.00770
Paul Linton
Paul Linton
Linton Stereo Illusion
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new illusion that challenges our understanding of stereo vision. The illusion consists of a small circle (at 40cm) in front of a large circle (at 50cm), with constant angular sizes throughout. We move the large circle forward by 10cm (to 40cm) and back again (to 50cm). What distance should we move the small circle forward and back, so the circles look like they are moving rigidly in depth together? Constant physical distance (10cm) or constant disparity (6.7cm)? Observers choose constant disparity. This leads us to four conclusions: First, perceived stereo depth appears to reflect retinal disparities, not 3D geometry. Second, doubling disparity appears to double perceived depth, suggesting that perceived stereo depth is proportional to disparity. Third, changes in vergence appear to have no effect on perceived depth. Fourth, stereo 'depth constancy' appears to be a cognitive (not perceptual) phenomenon, reflecting our experience of a world distorted in perceived stereo depth. Finally, when angular size is not held constant, the illusion is no longer noticeable. However, the perceived stereo depth remains the same in both conditions, suggesting that this looming cue only affects our judgment, but not our visual experience, of motion in depth.
[ { "created": "Mon, 15 Jul 2024 19:00:27 GMT", "version": "v1" } ]
2024-08-05
[ [ "Linton", "Paul", "" ] ]
We present a new illusion that challenges our understanding of stereo vision. The illusion consists of a small circle (at 40cm) in front of a large circle (at 50cm), with constant angular sizes throughout. We move the large circle forward by 10cm (to 40cm) and back again (to 50cm). What distance should we move the small circle forward and back, so the circles look like they are moving rigidly in depth together? Constant physical distance (10cm) or constant disparity (6.7cm)? Observers choose constant disparity. This leads us to four conclusions: First, perceived stereo depth appears to reflect retinal disparities, not 3D geometry. Second, doubling disparity appears to double perceived depth, suggesting that perceived stereo depth is proportional to disparity. Third, changes in vergence appear to have no effect on perceived depth. Fourth, stereo 'depth constancy' appears to be a cognitive (not perceptual) phenomenon, reflecting our experience of a world distorted in perceived stereo depth. Finally, when angular size is not held constant, the illusion is no longer noticeable. However, the perceived stereo depth remains the same in both conditions, suggesting that this looming cue only affects our judgment, but not our visual experience, of motion in depth.
2311.17067
Bradly Alicea
Bradly Alicea
Hypergraphs Demonstrate Anastomoses During Divergent Integration
21 pages, 8 figures
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complex networks can be used to analyze structures and systems in the embryo. Not only can we characterize growth and the emergence of form, but also differentiation. The process of differentiation from precursor cell populations to distinct functional tissues is of particular interest. These phenomena can be captured using a hypergraph consisting of nodes represented by cell type categories and arranged as a directed cyclic graph (lineage hypergraph) and a complex network (spatial hypergraph). The lineage hypergraph models the developmental process as an n-ary tree, which can model two or more descendent categories per division event. A lineage tree based on the mosaic development of the nematode C. elegans (2-ary tree), is used to capture this process. Each round of divisions produces a new set of categories that allow for exchange of cells between types. An example from single-cell morphogenesis based on the cyanobacterial species Nostoc punctiforme (multiple discontinuous 2-ary tree) is also used to demonstrate the flexibility of this method. This model allows for new structures to emerge (such as a connectome) while also demonstrating how precursor categories are maintained for purposes such as dedifferentiation or other forms of cell fate plasticity. To understand this process of divergent integration, we analyze the directed hypergraph and categorical models, in addition to considering the role of network fistulas (spaces that conjoin two functional modules) and spatial restriction.
[ { "created": "Fri, 24 Nov 2023 02:55:42 GMT", "version": "v1" } ]
2023-11-30
[ [ "Alicea", "Bradly", "" ] ]
Complex networks can be used to analyze structures and systems in the embryo. Not only can we characterize growth and the emergence of form, but also differentiation. The process of differentiation from precursor cell populations to distinct functional tissues is of particular interest. These phenomena can be captured using a hypergraph consisting of nodes represented by cell type categories and arranged as a directed cyclic graph (lineage hypergraph) and a complex network (spatial hypergraph). The lineage hypergraph models the developmental process as an n-ary tree, which can model two or more descendent categories per division event. A lineage tree based on the mosaic development of the nematode C. elegans (2-ary tree), is used to capture this process. Each round of divisions produces a new set of categories that allow for exchange of cells between types. An example from single-cell morphogenesis based on the cyanobacterial species Nostoc punctiforme (multiple discontinuous 2-ary tree) is also used to demonstrate the flexibility of this method. This model allows for new structures to emerge (such as a connectome) while also demonstrating how precursor categories are maintained for purposes such as dedifferentiation or other forms of cell fate plasticity. To understand this process of divergent integration, we analyze the directed hypergraph and categorical models, in addition to considering the role of network fistulas (spaces that conjoin two functional modules) and spatial restriction.
1905.06038
Alberto P\'erez-Cervera
Alberto P\'erez-Cervera, Tere M. Seara and Gemma Huguet
Phase-locked states in oscillating neural networks and their role in neural communication
null
null
10.1016/j.cnsns.2019.104992
null
q-bio.NC math.DS nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The theory of communication through coherence (CTC) proposes that brain oscillations reflect changes in the excitability of neurons, and therefore the successful communication between two oscillating neural populations depends not only on the strength of the signal emitted but also on the relative phases between them. More precisely, effective communication occurs when the emitting and receiving populations are properly phase locked so the inputs sent by the emitting population arrive at the phases of maximal excitability of the receiving population. To study this setting, we consider a population rate model consisting of excitatory and inhibitory cells modelling the receiving population, and we perturb it with a time-dependent periodic function modelling the input from the emitting population. We consider the stroboscopic map for this system and compute numerically the fixed and periodic points of this map and their bifurcations as the amplitude and the frequency of the perturbation are varied. From the bifurcation diagram, we identify the phase-locked states as well as different regions of bistability. We explore carefully the dynamics emphasizing its implications for the CTC theory. In particular, we study how the input gain depends on the timing between the input and the inhibitory action of the receiving population. Our results show that naturally an optimal phase locking for CTC emerges, and provide a mechanism by which the receiving population can implement selective communication. Moreover, the presence of bistable regions suggests a mechanism by which different communication regimes between brain areas can be established without changing the structure of the network.
[ { "created": "Wed, 15 May 2019 09:05:23 GMT", "version": "v1" }, { "created": "Tue, 23 Jul 2019 10:51:10 GMT", "version": "v2" }, { "created": "Wed, 11 Sep 2019 09:16:25 GMT", "version": "v3" } ]
2019-09-12
[ [ "Pérez-Cervera", "Alberto", "" ], [ "Seara", "Tere M.", "" ], [ "Huguet", "Gemma", "" ] ]
The theory of communication through coherence (CTC) proposes that brain oscillations reflect changes in the excitability of neurons, and therefore the successful communication between two oscillating neural populations depends not only on the strength of the signal emitted but also on the relative phases between them. More precisely, effective communication occurs when the emitting and receiving populations are properly phase locked so the inputs sent by the emitting population arrive at the phases of maximal excitability of the receiving population. To study this setting, we consider a population rate model consisting of excitatory and inhibitory cells modelling the receiving population, and we perturb it with a time-dependent periodic function modelling the input from the emitting population. We consider the stroboscopic map for this system and compute numerically the fixed and periodic points of this map and their bifurcations as the amplitude and the frequency of the perturbation are varied. From the bifurcation diagram, we identify the phase-locked states as well as different regions of bistability. We explore carefully the dynamics emphasizing its implications for the CTC theory. In particular, we study how the input gain depends on the timing between the input and the inhibitory action of the receiving population. Our results show that naturally an optimal phase locking for CTC emerges, and provide a mechanism by which the receiving population can implement selective communication. Moreover, the presence of bistable regions suggests a mechanism by which different communication regimes between brain areas can be established without changing the structure of the network.
1610.06886
Adrianna Loback
Adrianna R. Loback, Jason S. Prentice, Mark L. Ioffe, Michael J. Berry II
Noise-Robust Modes of the Retinal Population Code have the Geometry of "Ridges" and Correspond with Neuronal Communities
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An appealing new principle for neural population codes is that correlations among neurons organize neural activity patterns into a discrete set of clusters, which can each be viewed as a noise-robust population "codeword". Previous studies assumed that these codewords corresponded geometrically with local peaks in the probability landscape of neural population responses. Here, we analyze multiple datasets of the responses of ~150 retinal ganglion cells and show that local probability peaks are absent under broad, non-repeated stimulus ensembles, which are characteristic of natural behavior. However, we find that neural activity still forms noise-robust clusters in this regime, albeit clusters with a different geometry. We start by defining a soft local maximum, which is a local probability maximum when constrained to a fixed spike count. Next, we show that soft local maxima are robustly present, and can moreover be linked across different spike count levels in the probability landscape to form a "ridge". We found that these ridges are comprised of combinations of spiking and silence in the neural population such that all of the spiking neurons are members of the same neuronal community, a notion from network theory. We argue that a neuronal community shares many of the properties of Donald Hebb's classic cell assembly, and show that a simple, biologically plausible decoding algorithm can recognize the presence of a specific neuronal community.
[ { "created": "Fri, 21 Oct 2016 18:25:36 GMT", "version": "v1" }, { "created": "Sun, 12 Mar 2017 15:56:11 GMT", "version": "v2" } ]
2017-03-14
[ [ "Loback", "Adrianna R.", "" ], [ "Prentice", "Jason S.", "" ], [ "Ioffe", "Mark L.", "" ], [ "Berry", "Michael J.", "II" ] ]
An appealing new principle for neural population codes is that correlations among neurons organize neural activity patterns into a discrete set of clusters, which can each be viewed as a noise-robust population "codeword". Previous studies assumed that these codewords corresponded geometrically with local peaks in the probability landscape of neural population responses. Here, we analyze multiple datasets of the responses of ~150 retinal ganglion cells and show that local probability peaks are absent under broad, non-repeated stimulus ensembles, which are characteristic of natural behavior. However, we find that neural activity still forms noise-robust clusters in this regime, albeit clusters with a different geometry. We start by defining a soft local maximum, which is a local probability maximum when constrained to a fixed spike count. Next, we show that soft local maxima are robustly present, and can moreover be linked across different spike count levels in the probability landscape to form a "ridge". We found that these ridges are comprised of combinations of spiking and silence in the neural population such that all of the spiking neurons are members of the same neuronal community, a notion from network theory. We argue that a neuronal community shares many of the properties of Donald Hebb's classic cell assembly, and show that a simple, biologically plausible decoding algorithm can recognize the presence of a specific neuronal community.
0810.1024
Peter Hinow
Peter Hinow, Philip Gerlee, Lisa J. McCawley, Vito Quaranta, Madalina Ciobanu, Shizhen Wang, Jason M. Graham, Bruce P. Ayati, Jonathan Claridge, Kristin R. Swanson, Mary Loveless, Alexander R. A. Anderson
A Spatial Model of Tumor-Host Interaction: Application of Chemotherapy
revised version, 25 pages, 9 figures, minor misprints corrected
Math. Biosci. Eng. 6(3):521-545, 2009
null
null
q-bio.TO q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we consider chemotherapy in a spatial model of tumor growth. The model, which is of reaction-diffusion type, takes into account the complex interactions between the tumor and surrounding stromal cells by including densities of endothelial cells and the extra-cellular matrix. When no treatment is applied the model reproduces the typical dynamics of early tumor growth. The initially avascular tumor reaches a diffusion limited size of the order of millimeters and initiates angiogenesis through the release of vascular endothelial growth factor (VEGF) secreted by hypoxic cells in the core of the tumor. This stimulates endothelial cells to migrate towards the tumor and establishes a nutrient supply sufficient for sustained invasion. To this model we apply cytostatic treatment in the form of a VEGF-inhibitor, which reduces the proliferation and chemotaxis of endothelial cells. This treatment has the capability to reduce tumor mass, but more importantly, we were able to determine that inhibition of endothelial cell proliferation is the more important of the two cellular functions targeted by the drug. Further, we considered the application of a cytotoxic drug that targets proliferating tumor cells. The drug was treated as a diffusible substance entering the tissue from the blood vessels. Our results show that depending on the characteristics of the drug it can either reduce the tumor mass significantly or in fact accelerate the growth rate of the tumor. This result seems to be due to complicated interplay between the stromal and tumor cell types and highlights the importance of considering chemotherapy in a spatial context.
[ { "created": "Mon, 6 Oct 2008 17:38:30 GMT", "version": "v1" }, { "created": "Thu, 12 Feb 2009 16:36:36 GMT", "version": "v2" }, { "created": "Thu, 9 Apr 2009 16:30:23 GMT", "version": "v3" } ]
2010-03-10
[ [ "Hinow", "Peter", "" ], [ "Gerlee", "Philip", "" ], [ "McCawley", "Lisa J.", "" ], [ "Quaranta", "Vito", "" ], [ "Ciobanu", "Madalina", "" ], [ "Wang", "Shizhen", "" ], [ "Graham", "Jason M.", "" ], [ ...
In this paper we consider chemotherapy in a spatial model of tumor growth. The model, which is of reaction-diffusion type, takes into account the complex interactions between the tumor and surrounding stromal cells by including densities of endothelial cells and the extra-cellular matrix. When no treatment is applied the model reproduces the typical dynamics of early tumor growth. The initially avascular tumor reaches a diffusion limited size of the order of millimeters and initiates angiogenesis through the release of vascular endothelial growth factor (VEGF) secreted by hypoxic cells in the core of the tumor. This stimulates endothelial cells to migrate towards the tumor and establishes a nutrient supply sufficient for sustained invasion. To this model we apply cytostatic treatment in the form of a VEGF-inhibitor, which reduces the proliferation and chemotaxis of endothelial cells. This treatment has the capability to reduce tumor mass, but more importantly, we were able to determine that inhibition of endothelial cell proliferation is the more important of the two cellular functions targeted by the drug. Further, we considered the application of a cytotoxic drug that targets proliferating tumor cells. The drug was treated as a diffusible substance entering the tissue from the blood vessels. Our results show that depending on the characteristics of the drug it can either reduce the tumor mass significantly or in fact accelerate the growth rate of the tumor. This result seems to be due to complicated interplay between the stromal and tumor cell types and highlights the importance of considering chemotherapy in a spatial context.
1512.02826
Kazuhiro Takemoto
Kazuhiro Takemoto
Habitat variability does not generally promote metabolic network modularity in flies and mammals
21 pages, 4 figures
Biosystems 139, 46-54 (2016)
10.1016/j.biosystems.2015.12.004
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolution of species habitat range is an important topic over a wide range of research fields. In higher organisms, habitat range evolution is generally associated with genetic events such as gene duplication. However, the specific factors that determine habitat variability remain unclear at higher levels of biological organization (e.g., biochemical networks). One widely accepted hypothesis developed from both theoretical and empirical analyses is that habitat variability promotes network modularity; however, this relationship has not yet been directly tested in higher organisms. Therefore, I investigated the relationship between habitat variability and metabolic network modularity using compound and enzymatic networks in flies and mammals. Contrary to expectation, there was no clear positive correlation between habitat variability and network modularity. As an exception, the network modularity increased with habitat variability in the enzymatic networks of flies. However, the observed association was likely an artifact, and the frequency of gene duplication appears to be the main factor contributing to network modularity. These findings raise the question of whether or not there is a general mechanism for habitat range expansion at a higher level (i.e., above the gene scale). This study suggests that the currently widely accepted hypothesis for habitat variability should be reconsidered.
[ { "created": "Wed, 9 Dec 2015 12:03:50 GMT", "version": "v1" } ]
2016-01-11
[ [ "Takemoto", "Kazuhiro", "" ] ]
The evolution of species habitat range is an important topic over a wide range of research fields. In higher organisms, habitat range evolution is generally associated with genetic events such as gene duplication. However, the specific factors that determine habitat variability remain unclear at higher levels of biological organization (e.g., biochemical networks). One widely accepted hypothesis developed from both theoretical and empirical analyses is that habitat variability promotes network modularity; however, this relationship has not yet been directly tested in higher organisms. Therefore, I investigated the relationship between habitat variability and metabolic network modularity using compound and enzymatic networks in flies and mammals. Contrary to expectation, there was no clear positive correlation between habitat variability and network modularity. As an exception, the network modularity increased with habitat variability in the enzymatic networks of flies. However, the observed association was likely an artifact, and the frequency of gene duplication appears to be the main factor contributing to network modularity. These findings raise the question of whether or not there is a general mechanism for habitat range expansion at a higher level (i.e., above the gene scale). This study suggests that the currently widely accepted hypothesis for habitat variability should be reconsidered.
1304.0542
Joshua Vogelstein
David E. Carlson, Joshua T. Vogelstein, Qisong Wu, Wenzhao Lian, Mingyuan Zhou, Colin R. Stoetzner, Daryl Kipke, Douglas Weber, David B. Dunson, Lawrence Carin
Multichannel Electrophysiological Spike Sorting via Joint Dictionary Learning & Mixture Modeling
14 pages, 9 figures
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a construction for joint feature learning and clustering of multichannel extracellular electrophysiological data across multiple recording periods for action potential detection and discrimination ("spike sorting"). Our construction improves over the previous state of the art principally in four ways. First, via sharing information across channels, we can better distinguish between single-unit spikes and artifacts. Second, our proposed "focused mixture model" (FMM) elegantly deals with units appearing, disappearing, or reappearing over multiple recording days, an important consideration for any chronic experiment. Third, by jointly learning features and clusters, we improve performance over previous attempts that proceeded via a two-stage ("frequentist") learning process. Fourth, by directly modeling spike rate, we improve detection of sparsely spiking neurons. Moreover, our Bayesian construction seamlessly handles missing data. We present state-of-the-art performance without requiring manual tuning of many hyper-parameters on both a public dataset with partial ground truth and a new experimental dataset.
[ { "created": "Tue, 2 Apr 2013 06:31:17 GMT", "version": "v1" }, { "created": "Mon, 5 Aug 2013 02:50:15 GMT", "version": "v2" } ]
2013-08-06
[ [ "Carlson", "David E.", "" ], [ "Vogelstein", "Joshua T.", "" ], [ "Wu", "Qisong", "" ], [ "Lian", "Wenzhao", "" ], [ "Zhou", "Mingyuan", "" ], [ "Stoetzner", "Colin R.", "" ], [ "Kipke", "Daryl", "" ], ...
We propose a construction for joint feature learning and clustering of multichannel extracellular electrophysiological data across multiple recording periods for action potential detection and discrimination ("spike sorting"). Our construction improves over the previous state of the art principally in four ways. First, via sharing information across channels, we can better distinguish between single-unit spikes and artifacts. Second, our proposed "focused mixture model" (FMM) elegantly deals with units appearing, disappearing, or reappearing over multiple recording days, an important consideration for any chronic experiment. Third, by jointly learning features and clusters, we improve performance over previous attempts that proceeded via a two-stage ("frequentist") learning process. Fourth, by directly modeling spike rate, we improve detection of sparsely spiking neurons. Moreover, our Bayesian construction seamlessly handles missing data. We present state-of-the-art performance without requiring manual tuning of many hyper-parameters on both a public dataset with partial ground truth and a new experimental dataset.
2106.13148
Mark Leake
Xin Jin, Ji-Eun Lee, Charley Schaefer, Xinwei Luo, Adam J. M. Wollman, Alex L. Payne-Dwyer, Tian Tian, Xiaowei Zhang, Xiao Chen, Yingxing Li, Tom C. B. McLeish, Mark C. Leake, Fan Bai
Membraneless organelles formed by liquid-liquid phase separation increase bacterial fitness
null
Sci Adv. 2021 Oct 22;7(43):eabh2929
10.1126/sciadv.abh2929
null
q-bio.BM cond-mat.soft physics.bio-ph q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Liquid-liquid phase separation is emerging as a crucial phenomenon in several fundamental cell processes. A range of eukaryotic systems exhibit liquid condensates. However, their function in bacteria, which in general lack membrane-bound compartments, remains less clear. Here, we used high-resolution optical microscopy to observe single bacterial aggresomes, nanostructured intracellular assemblies of proteins, to uncover their role in cell stress. We find that proteins inside aggresomes are mobile and undergo dynamic turnover, consistent with a liquid state. Our observations are in quantitative agreement with phase-separated liquid droplet formation driven by interacting proteins under thermal equilibrium that nucleate following diffusive collisions in the cytoplasm. We have discovered aggresomes in multiple species of bacteria, and show that these emergent, metastable liquid-structured protein assemblies increase bacterial fitness by enabling cells to tolerate environmental stresses.
[ { "created": "Thu, 24 Jun 2021 16:24:15 GMT", "version": "v1" } ]
2023-01-25
[ [ "Jin", "Xin", "" ], [ "Lee", "Ji-Eun", "" ], [ "Schaefer", "Charley", "" ], [ "Luo", "Xinwei", "" ], [ "Wollman", "Adam J. M.", "" ], [ "Payne-Dwyer", "Alex L.", "" ], [ "Tian", "Tian", "" ], [ "Zha...
Liquid-liquid phase separation is emerging as a crucial phenomenon in several fundamental cell processes. A range of eukaryotic systems exhibit liquid condensates. However, their function in bacteria, which in general lack membrane-bound compartments, remains less clear. Here, we used high-resolution optical microscopy to observe single bacterial aggresomes, nanostructured intracellular assemblies of proteins, to uncover their role in cell stress. We find that proteins inside aggresomes are mobile and undergo dynamic turnover, consistent with a liquid state. Our observations are in quantitative agreement with phase-separated liquid droplet formation driven by interacting proteins under thermal equilibrium that nucleate following diffusive collisions in the cytoplasm. We have discovered aggresomes in multiple species of bacteria, and show that these emergent, metastable liquid-structured protein assemblies increase bacterial fitness by enabling cells to tolerate environmental stresses.
2007.02032
Chris Antonopoulos Dr
Ian Cooper, Argha Mondal, Chris G. Antonopoulos
Dynamic tracking with model-based forecasting for the spread of the COVID-19 pandemic
17 pages, 13 figures
null
10.1016/j.chaos.2020.110298
null
q-bio.PE nlin.CD physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a susceptible-infected-removed (SIR) model has been used to track the evolution of the spread of the COVID-19 virus in four countries of interest. In particular, the epidemic model, which depends on some basic characteristics, has been applied to model the time evolution of the disease in Italy, India, South Korea and Iran. The economic, social and health consequences of the spread of the virus have been cataclysmic. Hence, it is essential that available mathematical models be developed and used so that comparisons can be made between published data sets and model predictions. The predictions estimated from the SIR model here can be used in both the qualitative and quantitative analysis of the spread. Updating the data and the model on a daily basis gives an insight into the spread of the virus that the published data alone cannot provide. For example, it is possible to detect the early onset of a spike in infections or the development of a second wave using our modeling approach. We considered data from March to June, 2020, when different communities were severely affected. We demonstrate predictions depending on the model's parameters related to the spread of COVID-19 until September 2020. By comparing the published data and model results, we conclude that in this way, it may be possible to better reflect the success or failure of the measures implemented by governments and individuals to mitigate and control the current pandemic.
[ { "created": "Sat, 4 Jul 2020 07:42:32 GMT", "version": "v1" } ]
2020-10-28
[ [ "Cooper", "Ian", "" ], [ "Mondal", "Argha", "" ], [ "Antonopoulos", "Chris G.", "" ] ]
In this paper, a susceptible-infected-removed (SIR) model has been used to track the evolution of the spread of the COVID-19 virus in four countries of interest. In particular, the epidemic model, which depends on some basic characteristics, has been applied to model the time evolution of the disease in Italy, India, South Korea and Iran. The economic, social and health consequences of the spread of the virus have been cataclysmic. Hence, it is essential that available mathematical models be developed and used so that comparisons can be made between published data sets and model predictions. The predictions estimated from the SIR model here can be used in both the qualitative and quantitative analysis of the spread. Updating the data and the model on a daily basis gives an insight into the spread of the virus that the published data alone cannot provide. For example, it is possible to detect the early onset of a spike in infections or the development of a second wave using our modeling approach. We considered data from March to June, 2020, when different communities were severely affected. We demonstrate predictions depending on the model's parameters related to the spread of COVID-19 until September 2020. By comparing the published data and model results, we conclude that in this way, it may be possible to better reflect the success or failure of the measures implemented by governments and individuals to mitigate and control the current pandemic.
2305.04128
Zilong Wang
Zilong Wang, Thomas R. Shultz, Ardvan S. Nobandegani
A Computational Model of Children's Learning and Use of Probabilities Across Different Ages
10 figures, 2 tables, 8 pages. Abstract submitted to Cognitive Science Society 2023 Conference
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent empirical work has shown that human children are adept at learning and reasoning with probabilities. Here, we model a recent experiment investigating the development of school-age children's non-symbolic probability reasoning ability using the Neural Probability Learner and Sampler (NPLS) system. We demonstrate that NPLS can accurately simulate children's probability judgments at different ages, tasks and difficulty levels, discriminating between two probabilistic choices through accurate probability learning and sampling. An extension of NPLS using a skewed heuristic distribution can also model children's tendency to wrongly select the outcome with more favorable items but a lower probability of drawing a favorable one when the probabilistic choices are similar. We discuss the roles of two model parameters that can be adjusted to simulate the probability matching versus probability maximization phenomena in children, and why frequency biases children's probabilistic judgments.
[ { "created": "Sat, 6 May 2023 20:13:47 GMT", "version": "v1" } ]
2023-05-09
[ [ "Wang", "Zilong", "" ], [ "Shultz", "Thomas R.", "" ], [ "Nobandegani", "Ardvan S.", "" ] ]
Recent empirical work has shown that human children are adept at learning and reasoning with probabilities. Here, we model a recent experiment investigating the development of school-age children's non-symbolic probability reasoning ability using the Neural Probability Learner and Sampler (NPLS) system. We demonstrate that NPLS can accurately simulate children's probability judgments at different ages, tasks and difficulty levels, discriminating between two probabilistic choices through accurate probability learning and sampling. An extension of NPLS using a skewed heuristic distribution can also model children's tendency to wrongly select the outcome with more favorable items but a lower probability of drawing a favorable one when the probabilistic choices are similar. We discuss the roles of two model parameters that can be adjusted to simulate the probability matching versus probability maximization phenomena in children, and why frequency biases children's probabilistic judgments.
1301.4640
Francesc Rossell\'o
Gabriel Cardona, Arnau Mir, Francesc Rossello, Lucia Rotger, David Sanchez
Cophenetic metrics for phylogenetic trees, after Sokal and Rohlf
The "authors' cut" of a paper published in BMC Bioinformatics 14:3 (2013). 46 pages
BMC Bioinformatics 14:3 (2013)
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic tree comparison metrics are an important tool in the study of evolution, and hence the definition of such metrics is an interesting problem in phylogenetics. In a paper in Taxon fifty years ago, Sokal and Rohlf proposed to measure quantitatively the difference between a pair of phylogenetic trees by first encoding them by means of their half-matrices of cophenetic values, and then comparing these matrices. This idea has been used several times since then to define dissimilarity measures between phylogenetic trees but, to our knowledge, no proper metric on weighted phylogenetic trees with nested taxa based on this idea has been formally defined and studied yet. Actually, the cophenetic values of pairs of different taxa alone are not enough to single out phylogenetic trees with weighted arcs or nested taxa. In this paper we define a family of cophenetic metrics that compare phylogenetic trees on a same set of taxa by encoding them by means of their vectors of cophenetic values of pairs of taxa and depths of single taxa, and then computing the $L^p$ norm of the difference of the corresponding vectors. Then, we study, either analytically or numerically, some of their basic properties: neighbors, diameter, distribution, and their rank correlation with each other and with other metrics.
[ { "created": "Sun, 20 Jan 2013 09:07:58 GMT", "version": "v1" } ]
2013-01-22
[ [ "Cardona", "Gabriel", "" ], [ "Mir", "Arnau", "" ], [ "Rossello", "Francesc", "" ], [ "Rotger", "Lucia", "" ], [ "Sanchez", "David", "" ] ]
Phylogenetic tree comparison metrics are an important tool in the study of evolution, and hence the definition of such metrics is an interesting problem in phylogenetics. In a paper in Taxon fifty years ago, Sokal and Rohlf proposed to measure quantitatively the difference between a pair of phylogenetic trees by first encoding them by means of their half-matrices of cophenetic values, and then comparing these matrices. This idea has been used several times since then to define dissimilarity measures between phylogenetic trees but, to our knowledge, no proper metric on weighted phylogenetic trees with nested taxa based on this idea has been formally defined and studied yet. Actually, the cophenetic values of pairs of different taxa alone are not enough to single out phylogenetic trees with weighted arcs or nested taxa. In this paper we define a family of cophenetic metrics that compare phylogenetic trees on a same set of taxa by encoding them by means of their vectors of cophenetic values of pairs of taxa and depths of single taxa, and then computing the $L^p$ norm of the difference of the corresponding vectors. Then, we study, either analytically or numerically, some of their basic properties: neighbors, diameter, distribution, and their rank correlation with each other and with other metrics.
1909.02949
Angelyn Lao
Honeylou F. Farinas, Eduardo R. Mendoza, and Angelyn R. Lao
Species subsets and embedded networks of S-systems
null
Farinas, H. F., Mendoza, E. R., & Lao, A. R. (2020). Structural Properties of an S-system Model of Mycobacterium tuberculosis Gene Regulation. Philippine Journal of Science, 149(3), 539-555
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Magombedze and Mulder (2013) studied the gene regulatory system of \textit{Mycobacterium Tuberculosis} (\textit{Mtb}) by partitioning it into three subsystems based on putative gene function and role in dormancy/latency development. Each subsystem, in the form of an $S$-system, is represented by an embedded chemical reaction network (CRN), defined by a species subset and a reaction subset induced by the set of digraph vertices of the subsystem. Based on the network decomposition theory initiated by Feinberg in 1987, we have introduced the concept of incidence independence and developed the theory of $\mathscr{C}$- and $\mathscr{C}^*$-decompositions, including their structure theorems in terms of linkage classes. For the $S$-system CRN $\mathscr{N}$ of Magombedze and Mulder's \textit{Mtb} model, the reaction set partition induced a decomposition into subnetworks that are not $S$-system CRNs but nevertheless constitute an independent decomposition of $\mathscr{N}$. We have also constructed a new $S$-system CRN $\mathscr{N}^*$ whose embedded networks form a $\mathscr{C}^*$-decomposition. We have shown that the subnetworks of $\mathscr{N}$ and the embedded networks (subnetworks of $\mathscr{N}^*$) are related by digraph homomorphisms. Lastly, we attempted to explore modularity in the context of CRNs.
[ { "created": "Mon, 2 Sep 2019 04:23:48 GMT", "version": "v1" } ]
2021-10-27
[ [ "Farinas", "Honeylou F.", "" ], [ "Mendoza", "Eduardo R.", "" ], [ "Lao", "Angelyn R.", "" ] ]
Magombedze and Mulder (2013) studied the gene regulatory system of \textit{Mycobacterium Tuberculosis} (\textit{Mtb}) by partitioning it into three subsystems based on putative gene function and role in dormancy/latency development. Each subsystem, in the form of an $S$-system, is represented by an embedded chemical reaction network (CRN), defined by a species subset and a reaction subset induced by the set of digraph vertices of the subsystem. Based on the network decomposition theory initiated by Feinberg in 1987, we have introduced the concept of incidence independence and developed the theory of $\mathscr{C}$- and $\mathscr{C}^*$-decompositions, including their structure theorems in terms of linkage classes. For the $S$-system CRN $\mathscr{N}$ of Magombedze and Mulder's \textit{Mtb} model, the reaction set partition induced a decomposition into subnetworks that are not $S$-system CRNs but nevertheless constitute an independent decomposition of $\mathscr{N}$. We have also constructed a new $S$-system CRN $\mathscr{N}^*$ whose embedded networks form a $\mathscr{C}^*$-decomposition. We have shown that the subnetworks of $\mathscr{N}$ and the embedded networks (subnetworks of $\mathscr{N}^*$) are related by digraph homomorphisms. Lastly, we attempted to explore modularity in the context of CRNs.
1002.4599
Jan Hasenauer
J. Hasenauer, S. Waldherr, M. Doszczak, P. Scheurich, and F. Allgower
Density-based modeling and identification of biochemical networks in cell populations
17 pages, 6 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many biological processes heterogeneity within cell populations is an important issue. In this work we consider populations where the behavior of every single cell can be described by a system of ordinary differential equations. Heterogeneity among individual cells is accounted for by differences in parameter values and initial conditions. These parameter values and initial conditions are subject to a distribution function which is part of the model specification. Based on the single cell model and the considered parameter distribution, a partial differential equation model describing the distribution of cells in the state and in the output space is derived. For the estimation of the parameter distribution within the model, we consider experimental data as obtained from flow cytometric analysis. From these noise-corrupted data a density-based statistical data model is derived. Using this data model, the parameter distribution within the cell population is computed using convex optimization techniques. To evaluate the proposed method, a model for the caspase activation cascade is considered. It is shown that for known noise properties the unknown parameter distributions in this model are well estimated by the proposed method.
[ { "created": "Wed, 24 Feb 2010 18:16:22 GMT", "version": "v1" } ]
2010-02-25
[ [ "Hasenauer", "J.", "" ], [ "Waldherr", "S.", "" ], [ "Doszczak", "M.", "" ], [ "Scheurich", "P.", "" ], [ "Allgower", "F.", "" ] ]
In many biological processes heterogeneity within cell populations is an important issue. In this work we consider populations where the behavior of every single cell can be described by a system of ordinary differential equations. Heterogeneity among individual cells is accounted for by differences in parameter values and initial conditions. These parameter values and initial conditions are subject to a distribution function which is part of the model specification. Based on the single cell model and the considered parameter distribution, a partial differential equation model describing the distribution of cells in the state and in the output space is derived. For the estimation of the parameter distribution within the model, we consider experimental data as obtained from flow cytometric analysis. From these noise-corrupted data a density-based statistical data model is derived. Using this data model, the parameter distribution within the cell population is computed using convex optimization techniques. To evaluate the proposed method, a model for the caspase activation cascade is considered. It is shown that for known noise properties the unknown parameter distributions in this model are well estimated by the proposed method.
1309.0599
Suman Kumar Banik
Arnab Bandyopadhyay, Soumi Biswas, Alok Kumar Maity and Suman K Banik
Analysis of DevR regulated genes in Mycobacterium tuberculosis
Title changed in this submission. 33 pages, 16 figures, 2 tables
null
null
null
q-bio.MN physics.bio-ph q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The DevRS two-component system of Mycobacterium tuberculosis is responsible for its dormancy in the host and becomes operative under hypoxic conditions. It is experimentally known that phosphorylated DevR controls the expression of several downstream genes in a complex manner. In the present work we propose a theoretical model to show the role of binding sites in DevR-mediated gene expression. The individual and collective roles of binding sites in regulating DevR-mediated gene expression have been shown via modeling. The objective of the present work is twofold. First, to describe qualitatively the temporal dynamics of wild-type genes and their known mutants. Based on these results we propose that DevR-controlled gene expression follows a specific pattern which is efficient in describing other DevR-mediated gene expression. Second, to analyze the behavior of the system from an information-theoretic point of view. Using the tools of information theory we have calculated the molecular efficiency of the system and have shown that it is close to the maximum limit of isothermal efficiency.
[ { "created": "Tue, 3 Sep 2013 07:30:36 GMT", "version": "v1" }, { "created": "Wed, 29 Jan 2014 11:41:16 GMT", "version": "v2" } ]
2014-01-30
[ [ "Bandyopadhyay", "Arnab", "" ], [ "Biswas", "Soumi", "" ], [ "Maity", "Alok Kumar", "" ], [ "Banik", "Suman K", "" ] ]
The DevRS two-component system of Mycobacterium tuberculosis is responsible for its dormancy in the host and becomes operative under hypoxic conditions. It is experimentally known that phosphorylated DevR controls the expression of several downstream genes in a complex manner. In the present work we propose a theoretical model to show the role of binding sites in DevR-mediated gene expression. The individual and collective roles of binding sites in regulating DevR-mediated gene expression have been shown via modeling. The objective of the present work is twofold. First, to describe qualitatively the temporal dynamics of wild-type genes and their known mutants. Based on these results we propose that DevR-controlled gene expression follows a specific pattern which is efficient in describing other DevR-mediated gene expression. Second, to analyze the behavior of the system from an information-theoretic point of view. Using the tools of information theory we have calculated the molecular efficiency of the system and have shown that it is close to the maximum limit of isothermal efficiency.
1610.02258
Weiliang Chen
Weiliang Chen, Erik De Schutter
Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers
null
null
null
null
q-bio.QM cs.CE physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of simulated models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern large-scale supercomputers. In this paper, we describe an MPI-based, parallel Operator-Splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its usage in real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced-loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2000 processes achieves a speedup of more than 3600 times relative to its serial SSA counterpart and more than 20 times relative to a parallel simulation with 100 processes. While simulation performance is affected by unbalanced loading, a substantial speedup can still be observed without any special treatment.
[ { "created": "Fri, 7 Oct 2016 12:52:07 GMT", "version": "v1" } ]
2016-10-10
[ [ "Chen", "Weiliang", "" ], [ "De Schutter", "Erik", "" ] ]
Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of simulated models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern large-scale supercomputers. In this paper, we describe an MPI-based, parallel Operator-Splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its usage in real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced-loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2000 processes achieves a speedup of more than 3600 times relative to its serial SSA counterpart and more than 20 times relative to a parallel simulation with 100 processes. While simulation performance is affected by unbalanced loading, a substantial speedup can still be observed without any special treatment.
1409.3899
Momiao Xiong
Junhai Jiang, Nan Lin, Shicheng Guo, Jinyun Chen and Momiao Xiong
Methods for Joint Imaging and RNA-seq Data Analysis
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Emerging integrative analysis of genomic and anatomical imaging data, which has not been well developed, provides invaluable information for the holistic discovery of the genomic structure of disease and has the potential to open a new avenue for discovering novel disease susceptibility genes which cannot be identified if they are analyzed separately. A key issue to the success of imaging and genomic data analysis is how to reduce their dimensions. Most previous methods for imaging information extraction and RNA-seq data reduction do not explore imaging spatial information and often ignore gene expression variation at the genomic positional level. To overcome these limitations, we extend functional principal component analysis from one dimension to two dimensions (2DFPCA) for representing imaging data and develop a multiple functional linear model (MFLM) in which functional principal scores of images are taken as multiple quantitative traits and the RNA-seq profile across a gene is taken as a functional predictor for assessing the association of gene expression with images. The developed method has been applied to image and RNA-seq data of ovarian cancer and KIRC studies. We identified 24 and 84 genes whose expressions were associated with imaging variations in ovarian cancer and KIRC studies, respectively. Our results showed that many significantly associated genes with images were not differentially expressed, but revealed their morphological and metabolic functions. The results also demonstrated that the peaks of the estimated regression coefficient function in the MFLM often allowed the discovery of splicing sites and multiple isoforms of gene expression.
[ { "created": "Sat, 13 Sep 2014 02:05:29 GMT", "version": "v1" } ]
2014-09-16
[ [ "Jiang", "Junhai", "" ], [ "Lin", "Nan", "" ], [ "Guo", "Shicheng", "" ], [ "Chen", "Jinyun", "" ], [ "Xiong", "Momiao", "" ] ]
Emerging integrative analysis of genomic and anatomical imaging data, which has not been well developed, provides invaluable information for the holistic discovery of the genomic structure of disease and has the potential to open a new avenue for discovering novel disease susceptibility genes which cannot be identified if they are analyzed separately. A key issue to the success of imaging and genomic data analysis is how to reduce their dimensions. Most previous methods for imaging information extraction and RNA-seq data reduction do not explore imaging spatial information and often ignore gene expression variation at the genomic positional level. To overcome these limitations, we extend functional principal component analysis from one dimension to two dimensions (2DFPCA) for representing imaging data and develop a multiple functional linear model (MFLM) in which functional principal scores of images are taken as multiple quantitative traits and the RNA-seq profile across a gene is taken as a functional predictor for assessing the association of gene expression with images. The developed method has been applied to image and RNA-seq data of ovarian cancer and KIRC studies. We identified 24 and 84 genes whose expressions were associated with imaging variations in ovarian cancer and KIRC studies, respectively. Our results showed that many significantly associated genes with images were not differentially expressed, but revealed their morphological and metabolic functions. The results also demonstrated that the peaks of the estimated regression coefficient function in the MFLM often allowed the discovery of splicing sites and multiple isoforms of gene expression.
2407.10700
Fatemeh Mohammadi
Sean Dewar, Georg Grasegger, Kaie Kubjas, Fatemeh Mohammadi, and Anthony Nixon
Single-cell 3D genome reconstruction in the haploid setting using rigidity theory
null
null
null
null
q-bio.GN math.CO math.MG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article considers the problem of 3-dimensional genome reconstruction for single-cell data, and the uniqueness of such reconstructions in the setting of haploid organisms. We consider multiple graph models as representations of this problem, and use techniques from graph rigidity theory to determine identifiability. Biologically, our models come from Hi-C data, microscopy data, and combinations thereof. Mathematically, we use unit ball and sphere packing models, as well as models consisting of distance and inequality constraints. In each setting, we describe and/or derive new results on realisability and uniqueness. We then propose a 3D reconstruction method based on semidefinite programming and apply it to synthetic and real data sets using our models.
[ { "created": "Mon, 15 Jul 2024 13:16:04 GMT", "version": "v1" } ]
2024-07-16
[ [ "Dewar", "Sean", "" ], [ "Grasegger", "Georg", "" ], [ "Kubjas", "Kaie", "" ], [ "Mohammadi", "Fatemeh", "" ], [ "Nixon", "Anthony", "" ] ]
This article considers the problem of 3-dimensional genome reconstruction for single-cell data, and the uniqueness of such reconstructions in the setting of haploid organisms. We consider multiple graph models as representations of this problem, and use techniques from graph rigidity theory to determine identifiability. Biologically, our models come from Hi-C data, microscopy data, and combinations thereof. Mathematically, we use unit ball and sphere packing models, as well as models consisting of distance and inequality constraints. In each setting, we describe and/or derive new results on realisability and uniqueness. We then propose a 3D reconstruction method based on semidefinite programming and apply it to synthetic and real data sets using our models.
2406.10184
Martin Guillemaud
Martin Guillemaud, Louis Cousyn, Vincent Navarro and Mario Chavez
Hyperbolic embedding of brain networks as a tool for epileptic seizures forecasting
null
null
null
null
q-bio.NC physics.data-an
http://creativecommons.org/licenses/by/4.0/
The evidence indicates that intracranial EEG connectivity, as estimated from daily resting state recordings from epileptic patients, may be capable of identifying preictal states. In this study, we employed hyperbolic embedding of brain networks to capture non-trivial patterns that discriminate between connectivity networks from days with (preictal) and without (interictal) seizure. A statistical model was constructed by combining hyperbolic geometry and machine learning tools, which allowed for the estimation of the probability of an upcoming seizure. The results demonstrated that representing brain networks in a hyperbolic space enabled an accurate discrimination (85%) between interictal (no-seizure) and preictal (seizure within the next 24 hours) states. The proposed method also demonstrated excellent prediction performances, with an overall accuracy of 87% and an F1-score of 89% (mean Brier score and Brier skill score of 0.12 and 0.37, respectively). In conclusion, our findings indicate that representations of brain connectivity in a latent geometry space can reveal a daily and reliable signature of the upcoming seizure(s), thus providing a promising biomarker for seizure forecasting.
[ { "created": "Fri, 14 Jun 2024 17:09:23 GMT", "version": "v1" }, { "created": "Tue, 18 Jun 2024 20:47:08 GMT", "version": "v2" } ]
2024-06-21
[ [ "Guillemaud", "Martin", "" ], [ "Cousyn", "Louis", "" ], [ "Navarro", "Vincent", "" ], [ "Chavez", "Mario", "" ] ]
The evidence indicates that intracranial EEG connectivity, as estimated from daily resting state recordings from epileptic patients, may be capable of identifying preictal states. In this study, we employed hyperbolic embedding of brain networks to capture non-trivial patterns that discriminate between connectivity networks from days with (preictal) and without (interictal) seizure. A statistical model was constructed by combining hyperbolic geometry and machine learning tools, which allowed for the estimation of the probability of an upcoming seizure. The results demonstrated that representing brain networks in a hyperbolic space enabled an accurate discrimination (85%) between interictal (no-seizure) and preictal (seizure within the next 24 hours) states. The proposed method also demonstrated excellent prediction performances, with an overall accuracy of 87% and an F1-score of 89% (mean Brier score and Brier skill score of 0.12 and 0.37, respectively). In conclusion, our findings indicate that representations of brain connectivity in a latent geometry space can reveal a daily and reliable signature of the upcoming seizure(s), thus providing a promising biomarker for seizure forecasting.
1911.02364
Adam Safron
Adam Safron
Rapid Anxiety Reduction (RAR): A unified theory of humor
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Here I propose a novel theory in which humor is the feeling of Rapid Anxiety Reduction (RAR). According to RAR, humor can be expressed in a simple formula: -d(A)/dt. RAR has strong correspondences with False Alarm Theory, Benign Violation Theory, and Cognitive Debugging Theory, all of which represent either special cases or partial descriptions at alternative levels of analysis. Some evidence for RAR includes physiological similarities between hyperventilation and laughter and the fact that smiles often indicate negative affect in non-human primates (e.g. fear grimaces where teeth are exposed as a kind of inhibited threat display). In accordance with Benign Violation Theory, if humor reliably indicates both a) anxiety induction, b) anxiety reduction, and c) the time-course over which anxiety is reduced, then the intersection of these conditions productively constrains inference spaces over latent mental states with respect to the values and capacities of the persons experiencing humor. In this way, humor is a powerful cypher for understanding persons in both individual and social contexts, with far-reaching implications. Finally, if humor can be expressed in such a simple formula with clear ties to phenomenology, and yet this discovery regarding such an essential part of the human experience has remained undiscovered for this long, then this is an extremely surprising state of affairs worthy of further investigation. Towards this end, I propose an analogy can be found with consciousness studies, where in addition to the "Hard problem" of trying to explain humor, we would do well to consider a "Meta-Problem" of why humor seems so difficult to explain, and why relatively simple explanations may have eluded us for this long. (Please note: RAR was conceived in 2008, and last majorly updated in 2012.)
[ { "created": "Sat, 2 Nov 2019 21:56:22 GMT", "version": "v1" }, { "created": "Fri, 8 Nov 2019 03:51:16 GMT", "version": "v2" } ]
2019-11-11
[ [ "Safron", "Adam", "" ] ]
Here I propose a novel theory in which humor is the feeling of Rapid Anxiety Reduction (RAR). According to RAR, humor can be expressed in a simple formula: -d(A)/dt. RAR has strong correspondences with False Alarm Theory, Benign Violation Theory, and Cognitive Debugging Theory, all of which represent either special cases or partial descriptions at alternative levels of analysis. Some evidence for RAR includes physiological similarities between hyperventilation and laughter and the fact that smiles often indicate negative affect in non-human primates (e.g. fear grimaces where teeth are exposed as a kind of inhibited threat display). In accordance with Benign Violation Theory, if humor reliably indicates both a) anxiety induction, b) anxiety reduction, and c) the time-course over which anxiety is reduced, then the intersection of these conditions productively constrains inference spaces over latent mental states with respect to the values and capacities of the persons experiencing humor. In this way, humor is a powerful cypher for understanding persons in both individual and social contexts, with far-reaching implications. Finally, if humor can be expressed in such a simple formula with clear ties to phenomenology, and yet this discovery regarding such an essential part of the human experience has remained undiscovered for this long, then this is an extremely surprising state of affairs worthy of further investigation. Towards this end, I propose an analogy can be found with consciousness studies, where in addition to the "Hard problem" of trying to explain humor, we would do well to consider a "Meta-Problem" of why humor seems so difficult to explain, and why relatively simple explanations may have eluded us for this long. (Please note: RAR was conceived in 2008, and last majorly updated in 2012.)
2312.17480
Wensha Zhang
Wensha Zhang, Lam Si Tung Ho, Toby Kenney
Detection of evolutionary shifts in variance under an Ornstein-Uhlenbeck model
null
null
null
null
q-bio.PE stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
1. Abrupt environmental changes can lead to evolutionary shifts in not only the mean (optimal value), but also the variance of descendants in trait evolution. There are some methods to detect shifts in optimal value, but few studies consider shifts in variance. 2. We use a multi-optima and multi-variance OU process model to describe the trait evolution process with shifts in both optimal value and variance, and provide an analysis of how the covariance between species changes when shifts in variance occur along the path. 3. We propose a new method to detect shifts in both variance and optimal values based on minimizing the loss function with an L1 penalty. We implement our method in a new R package, ShiVa (Detection of evolutionary shifts in variance). 4. We conduct simulations to compare our method with the two methods considering only shifts in optimal values (l1ou; PhylogeneticEM). Our method shows strength in predictive ability and includes far fewer false positive shifts in optimal value compared to other methods when shifts in variance actually exist. When there are only shifts in optimal value, our method performs similarly to other methods. When applied to the cordylid data, ShiVa outperformed l1ou and PhylogeneticEM, exhibiting the highest log-likelihood and lowest BIC.
[ { "created": "Fri, 29 Dec 2023 05:44:41 GMT", "version": "v1" } ]
2024-01-01
[ [ "Zhang", "Wensha", "" ], [ "Ho", "Lam Si Tung", "" ], [ "Kenney", "Toby", "" ] ]
1. Abrupt environmental changes can lead to evolutionary shifts in not only the mean (optimal value), but also the variance of descendants in trait evolution. There are some methods to detect shifts in optimal value, but few studies consider shifts in variance. 2. We use a multi-optima and multi-variance OU process model to describe the trait evolution process with shifts in both optimal value and variance, and provide an analysis of how the covariance between species changes when shifts in variance occur along the path. 3. We propose a new method to detect shifts in both variance and optimal values based on minimizing the loss function with an L1 penalty. We implement our method in a new R package, ShiVa (Detection of evolutionary shifts in variance). 4. We conduct simulations to compare our method with the two methods considering only shifts in optimal values (l1ou; PhylogeneticEM). Our method shows strength in predictive ability and includes far fewer false positive shifts in optimal value compared to other methods when shifts in variance actually exist. When there are only shifts in optimal value, our method performs similarly to other methods. When applied to the cordylid data, ShiVa outperformed l1ou and PhylogeneticEM, exhibiting the highest log-likelihood and lowest BIC.
2110.01192
Deeptajyoti Sen Dr.
Deeptajyoti Sen and Sudeshna Sinha
Influence of Allee Effect on Extreme Events in Coupled Three Species Systems
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
We consider the dynamics of two coupled three-species population patches, incorporating the Allee Effect, focussing on the onset of extreme events in the coupled system. First we show that the interplay between coupling and the Allee effect may change the nature of the dynamics, with regular periodic dynamics becoming chaotic in a range of Allee parameters and coupling strengths. Further, the growth in the vegetation population displays an explosive blow-up beyond a critical value of coupling strength and Allee parameter. Most interestingly, we observe that beyond a threshold of coupling strength and Allee parameter, the population densities of all three species exhibit non-zero probability of yielding extreme events. The emergence of extreme events in the predator populations in the patches is the most prevalent, and the probability of obtaining large deviations in the predator populations is not affected significantly by either the coupling strength or the Allee effect. In the absence of the Allee effect the prey population in the coupled system exhibits no extreme events for low coupling strengths, but yields a sharp increase in extreme events after a critical strength of coupling. The vegetation population in the patches displays a small finite probability of extreme events for strong enough coupling, only in the presence of the Allee effect. Lastly, we consider the influence of additive noise on the continued prevalence of extreme events. Very significantly, we find that noise suppresses the unbounded vegetation growth that was induced by a combination of the Allee effect and coupling. Further, we demonstrate that noise mitigates extreme events in all three populations, and beyond a noise level we do not observe any extreme events in the system at all. This finding has important bearing on the potential observability of extreme events in natural and laboratory systems.
[ { "created": "Mon, 4 Oct 2021 05:13:02 GMT", "version": "v1" } ]
2021-10-05
[ [ "Sen", "Deeptajyoti", "" ], [ "Sinha", "Sudeshna", "" ] ]
We consider the dynamics of two coupled three-species population patches, incorporating the Allee Effect, focussing on the onset of extreme events in the coupled system. First we show that the interplay between coupling and the Allee effect may change the nature of the dynamics, with regular periodic dynamics becoming chaotic in a range of Allee parameters and coupling strengths. Further, the growth in the vegetation population displays an explosive blow-up beyond a critical value of coupling strength and Allee parameter. Most interestingly, we observe that beyond a threshold of coupling strength and Allee parameter, the population densities of all three species exhibit non-zero probability of yielding extreme events. The emergence of extreme events in the predator populations in the patches is the most prevalent, and the probability of obtaining large deviations in the predator populations is not affected significantly by either the coupling strength or the Allee effect. In the absence of the Allee effect the prey population in the coupled system exhibits no extreme events for low coupling strengths, but yields a sharp increase in extreme events after a critical strength of coupling. The vegetation population in the patches displays a small finite probability of extreme events for strong enough coupling, only in the presence of the Allee effect. Lastly, we consider the influence of additive noise on the continued prevalence of extreme events. Very significantly, we find that noise suppresses the unbounded vegetation growth that was induced by a combination of the Allee effect and coupling. Further, we demonstrate that noise mitigates extreme events in all three populations, and beyond a noise level we do not observe any extreme events in the system at all. This finding has important bearing on the potential observability of extreme events in natural and laboratory systems.
0904.3534
Thierry Rabilloud
Thierry Rabilloud (BBSI)
Solubilization of Proteins in 2DE: An Outline
null
Methods in molecular biology (Clifton, N.J.) 519 (2009) 19-30
10.1007/978-1-59745-281-6_2
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein solubilization for two-dimensional electrophoresis (2DE) has to break molecular interactions to separate the biological contents of the material of interest into isolated and intact polypeptides. This must be carried out in conditions compatible with the first dimension of 2DE, namely isoelectric focusing. In addition, the extraction process must enable easy removal of any nonprotein component interfering with the isoelectric focusing. The constraints brought in this process by the peculiar features of isoelectric focusing are discussed, as well as their consequences in terms of possible solutions and limits for the solubilization process.
[ { "created": "Wed, 22 Apr 2009 19:25:35 GMT", "version": "v1" } ]
2009-04-23
[ [ "Rabilloud", "Thierry", "", "BBSI" ] ]
Protein solubilization for two-dimensional electrophoresis (2DE) has to break molecular interactions to separate the biological contents of the material of interest into isolated and intact polypeptides. This must be carried out in conditions compatible with the first dimension of 2DE, namely isoelectric focusing. In addition, the extraction process must enable easy removal of any nonprotein component interfering with the isoelectric focusing. The constraints brought in this process by the peculiar features of isoelectric focusing are discussed, as well as their consequences in terms of possible solutions and limits for the solubilization process.
1712.01931
Moumita Bhattacharya
M. Bhattacharya, C. Jurkovitz and H. Shatkay
Assessing Chronic Kidney Disease from Office Visit Records Using Hierarchical Meta-Classification of an Imbalanced Dataset
8 pages, 5 figures, 4 tables
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chronic Kidney Disease (CKD) is an increasingly prevalent condition affecting 13% of the US population. The disease is often a silent condition, making its diagnosis challenging. Identifying CKD stages from standard office visit records can help in early detection of the disease and lead to timely intervention. The dataset we use is highly imbalanced. We propose a hierarchical meta-classification method, aiming to stratify CKD by severity levels, employing simple quantitative non-text features gathered from office visit records, while addressing data imbalance. Our method effectively stratifies CKD severity levels obtaining high average sensitivity, precision and F-measure (~93%). We also conduct experiments in which the dimensionality of the data is significantly reduced to include only the most salient features. Our results show that the good performance of our system is retained even when using the reduced feature sets, as well as under much reduced training sets, indicating that our method is stable and generalizable.
[ { "created": "Fri, 17 Nov 2017 21:12:24 GMT", "version": "v1" } ]
2017-12-07
[ [ "Bhattacharya", "M.", "" ], [ "Jurkovitz", "C.", "" ], [ "Shatkay", "H.", "" ] ]
Chronic Kidney Disease (CKD) is an increasingly prevalent condition affecting 13% of the US population. The disease is often a silent condition, making its diagnosis challenging. Identifying CKD stages from standard office visit records can help in early detection of the disease and lead to timely intervention. The dataset we use is highly imbalanced. We propose a hierarchical meta-classification method, aiming to stratify CKD by severity levels, employing simple quantitative non-text features gathered from office visit records, while addressing data imbalance. Our method effectively stratifies CKD severity levels obtaining high average sensitivity, precision and F-measure (~93%). We also conduct experiments in which the dimensionality of the data is significantly reduced to include only the most salient features. Our results show that the good performance of our system is retained even when using the reduced feature sets, as well as under much reduced training sets, indicating that our method is stable and generalizable.
1205.3337
Nuno Crokidakis
Nuno Crokidakis, Silvio M. Duarte Queiros
Probing into the effectiveness of self-isolation policies in epidemic control
15 pages, 6 figures, to appear in JSTAT
J. Stat. Mech. P06003 (2012)
10.1088/1742-5468/2012/06/P06003
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we inspect the reliability of controlling and quelling an epidemic disease mimicked by a Susceptible-Infected-Susceptible (SIS) model defined on a complex network by means of current and implementable quarantine and isolation policies. Specifically, we consider that each individual in the network is originally linked to two types of individuals: members of the same household and acquaintances. The topology of this network evolves taking into account a probability $q$ that aims at representing the quarantine or isolation process in which the connection with acquaintances is disrupted according to standard policies of control of epidemics. Within current policies of self-isolation and standard infection rates, our results show that the propagation is either controllable only for hypothetical rates of compliance or not controllable at all.
[ { "created": "Tue, 15 May 2012 12:08:15 GMT", "version": "v1" } ]
2012-06-12
[ [ "Crokidakis", "Nuno", "" ], [ "Queiros", "Silvio M. Duarte", "" ] ]
In this work, we inspect the reliability of controlling and quelling an epidemic disease mimicked by a Susceptible-Infected-Susceptible (SIS) model defined on a complex network by means of current and implementable quarantine and isolation policies. Specifically, we consider that each individual in the network is originally linked to two types of individuals: members of the same household and acquaintances. The topology of this network evolves taking into account a probability $q$ that aims at representing the quarantine or isolation process in which the connection with acquaintances is disrupted according to standard policies of control of epidemics. Within current policies of self-isolation and standard infection rates, our results show that the propagation is either controllable only for hypothetical rates of compliance or not controllable at all.
1501.00682
Jonathan Mason
Jonathan Mason
Quasi-Conscious Multivariate Systems
33 pages (double spacing), 11 figures, 15 Tables
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conscious experience is awash with underlying relationships. Moreover, for various brain regions such as the visual cortex, the system is biased toward some states. Representing this bias using a probability distribution shows that the system can define expected quantities. The mathematical theory in the present paper links these facts by using expected float entropy (efe), which is a measure of the expected amount of information needed, to specify the state of the system, beyond what is already known about the system from relationships that appear as parameters. Under the requirement that the relationship parameters minimise efe, the brain defines relationships. It is proposed that when a brain state is interpreted in the context of these relationships the brain state acquires meaning in the form of the relational content of the associated experience. For a given set, the theory represents relationships using weighted relations which assign continuous weights, from 0 to 1, to the elements of the Cartesian product of that set. The relationship parameters include weighted relations on the nodes of the system and on their set of states. Examples obtained using Monte-Carlo methods (where relationship parameters are chosen uniformly at random) suggest that efe distributions with long left tails are most important.
[ { "created": "Sun, 4 Jan 2015 15:05:53 GMT", "version": "v1" }, { "created": "Thu, 18 Jun 2015 21:29:54 GMT", "version": "v2" }, { "created": "Sun, 9 Aug 2015 22:10:19 GMT", "version": "v3" } ]
2015-08-11
[ [ "Mason", "Jonathan", "" ] ]
Conscious experience is awash with underlying relationships. Moreover, for various brain regions such as the visual cortex, the system is biased toward some states. Representing this bias using a probability distribution shows that the system can define expected quantities. The mathematical theory in the present paper links these facts by using expected float entropy (efe), which is a measure of the expected amount of information needed, to specify the state of the system, beyond what is already known about the system from relationships that appear as parameters. Under the requirement that the relationship parameters minimise efe, the brain defines relationships. It is proposed that when a brain state is interpreted in the context of these relationships the brain state acquires meaning in the form of the relational content of the associated experience. For a given set, the theory represents relationships using weighted relations which assign continuous weights, from 0 to 1, to the elements of the Cartesian product of that set. The relationship parameters include weighted relations on the nodes of the system and on their set of states. Examples obtained using Monte-Carlo methods (where relationship parameters are chosen uniformly at random) suggest that efe distributions with long left tails are most important.
1201.5344
Martin Depken
Martin Depken, Juan M. R. Parrondo, Stephan W. Gril
Irregular transcription dynamics for rapid production of high-fidelity transcripts
10 pages, 6 figures
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Both genomic stability and sustenance of day-to-day life rely on efficient and accurate readout of the genetic code. Single-molecule experiments show that transcription and replication are highly stochastic and irregular processes, with the polymerases frequently pausing and even reversing direction. While such behavior is recognized as stemming from a sophisticated proofreading mechanism during replication, the origin and functional significance of irregular transcription dynamics remain controversial. Here, we theoretically examine the implications of RNA polymerase backtracking and transcript cleavage on transcription rates and fidelity. We illustrate how an extended state space for backtracking provides entropic fidelity enhancements that, together with additional fidelity checkpoints, can account for physiological error rates. To explore the competing demands of transcription fidelity, nucleotide triphosphate (NTP) consumption and transcription speed in a physiologically relevant setting, we establish an analytical framework for evaluating transcriptional performance at the level of extended sequences. Using this framework, we reveal a mechanism by which moderately irregular transcription results in astronomical gains in the rate at which extended high-fidelity transcripts can be produced under physiological conditions.
[ { "created": "Wed, 25 Jan 2012 19:03:33 GMT", "version": "v1" } ]
2012-01-26
[ [ "Depken", "Martin", "" ], [ "Parrondo", "Juan M. R.", "" ], [ "Gril", "Stephan W.", "" ] ]
Both genomic stability and sustenance of day-to-day life rely on efficient and accurate readout of the genetic code. Single-molecule experiments show that transcription and replication are highly stochastic and irregular processes, with the polymerases frequently pausing and even reversing direction. While such behavior is recognized as stemming from a sophisticated proofreading mechanism during replication, the origin and functional significance of irregular transcription dynamics remain controversial. Here, we theoretically examine the implications of RNA polymerase backtracking and transcript cleavage on transcription rates and fidelity. We illustrate how an extended state space for backtracking provides entropic fidelity enhancements that, together with additional fidelity checkpoints, can account for physiological error rates. To explore the competing demands of transcription fidelity, nucleotide triphosphate (NTP) consumption and transcription speed in a physiologically relevant setting, we establish an analytical framework for evaluating transcriptional performance at the level of extended sequences. Using this framework, we reveal a mechanism by which moderately irregular transcription results in astronomical gains in the rate at which extended high-fidelity transcripts can be produced under physiological conditions.
1805.07316
Konstantin Blyuss
F. Fatehi Chenar, Y.N. Kyrychko, K.B. Blyuss
Effects of viral and cytokine delays on dynamics of autoimmunity
25 pages, 8 figures
Mathematics 6, 66 (2018)
10.3390/math6050066
null
q-bio.QM nlin.CD q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major contribution to the onset and development of autoimmune disease is known to come from infections. An important practical problem is identifying the precise mechanism by which the breakdown of immune tolerance as a result of immune response to infection leads to autoimmunity. In this paper, we develop a mathematical model of immune response to a viral infection, which includes T cells with different activation thresholds, regulatory T cells (Tregs), and a cytokine mediating immune dynamics. Particular emphasis is made on the role of time delays associated with the processes of infection and mounting the immune response. Stability analysis of various steady states of the model allows us to identify parameter regions associated with different types of immune behaviour, such as, normal clearance of infection, chronic infection, and autoimmune dynamics. Numerical simulations are used to illustrate different dynamical regimes, and to identify basins of attraction of different dynamical states. An important result of the analysis is that not only the parameters of the system, but also the initial level of infection and the initial state of the immune system determine the progress and outcome of the dynamics.
[ { "created": "Fri, 18 May 2018 16:30:33 GMT", "version": "v1" } ]
2018-05-21
[ [ "Chenar", "F. Fatehi", "" ], [ "Kyrychko", "Y. N.", "" ], [ "Blyuss", "K. B.", "" ] ]
A major contribution to the onset and development of autoimmune disease is known to come from infections. An important practical problem is identifying the precise mechanism by which the breakdown of immune tolerance as a result of immune response to infection leads to autoimmunity. In this paper, we develop a mathematical model of immune response to a viral infection, which includes T cells with different activation thresholds, regulatory T cells (Tregs), and a cytokine mediating immune dynamics. Particular emphasis is made on the role of time delays associated with the processes of infection and mounting the immune response. Stability analysis of various steady states of the model allows us to identify parameter regions associated with different types of immune behaviour, such as, normal clearance of infection, chronic infection, and autoimmune dynamics. Numerical simulations are used to illustrate different dynamical regimes, and to identify basins of attraction of different dynamical states. An important result of the analysis is that not only the parameters of the system, but also the initial level of infection and the initial state of the immune system determine the progress and outcome of the dynamics.
1808.01971
Ji\v{r}\'i Jan\'a\v{c}ek
Ji\v{r}\'i Jan\'a\v{c}ek and Daniel Jir\'ak
Volume tensor of pheasant brain compartments estimated by Fakir probe
The paper was submitted to Image Analysis & Stereology
Image Analysis and Stereology 38 (3), 2019, 255-260 and 261-267
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The volume tensor provides a robust estimate of object shape and orientation in space. The tensor is estimated from a 3D data set by the Fakir probe, an interactive method using intersections of the object's boundary with virtual lines. The method can thus be applied to objects that cannot be segmented automatically. Marking the intersections instead of segmenting the whole object reduces the workload required for obtaining sufficiently precise results. We present theoretical results on the variance of estimates of integrals by systematic sampling that enable calculation of the shape estimate precision. To demonstrate the ability of the Fakir technique, we measure the changes in shape and orientation of pheasant brain compartments during development.
[ { "created": "Fri, 3 Aug 2018 08:18:13 GMT", "version": "v1" } ]
2019-12-18
[ [ "Janáček", "Jiří", "" ], [ "Jirák", "Daniel", "" ] ]
The volume tensor provides a robust estimate of object shape and orientation in space. The tensor is estimated from a 3D data set by the Fakir probe, an interactive method using intersections of the object's boundary with virtual lines. The method can thus be applied to objects that cannot be segmented automatically. Marking the intersections instead of segmenting the whole object reduces the workload required for obtaining sufficiently precise results. We present theoretical results on the variance of estimates of integrals by systematic sampling that enable calculation of the shape estimate precision. To demonstrate the ability of the Fakir technique, we measure the changes in shape and orientation of pheasant brain compartments during development.
0807.4279
Tijana Ivancevic
Tijana T. Ivancevic, Lakhmi C. Jain, John Pattison and Alex Hariz
Preterm Birth Analysis Using Nonlinear Methods (a preliminary study)
21 pages, 5 figures, Latex
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this report we review modern nonlinear methods that can be used in preterm birth analysis. The nonlinear analysis of uterine contraction signals can provide information regarding physiological changes during the menstrual cycle and pregnancy. This information can be used both for preterm birth prediction and for preterm labor control. Keywords: preterm birth, complex data analysis, nonlinear methods
[ { "created": "Sun, 27 Jul 2008 07:43:14 GMT", "version": "v1" } ]
2008-07-29
[ [ "Ivancevic", "Tijana T.", "" ], [ "Jain", "Lakhmi C.", "" ], [ "Pattison", "John", "" ], [ "Hariz", "Alex", "" ] ]
In this report we review modern nonlinear methods that can be used in preterm birth analysis. The nonlinear analysis of uterine contraction signals can provide information regarding physiological changes during the menstrual cycle and pregnancy. This information can be used both for preterm birth prediction and for preterm labor control. Keywords: preterm birth, complex data analysis, nonlinear methods
1910.07440
Thomas Booth
Thomas Booth, Matthew Williams, Aysha Luis, Jorge Cardoso, Ashkan Keyoumars, Haris Shuaib
Machine learning and glioma imaging biomarkers
null
null
10.1016/j.crad.2019.07.001
null
q-bio.QM cs.LG eess.IV stat.ML
http://creativecommons.org/licenses/by/4.0/
Aim: To review how machine learning (ML) is applied to imaging biomarkers in neuro-oncology, in particular for diagnosis, prognosis, and treatment response monitoring. Materials and Methods: The PubMed and MEDLINE databases were searched for articles published before September 2018 using relevant search terms. The search strategy focused on articles applying ML to high-grade glioma biomarkers for treatment response monitoring, prognosis, and prediction. Results: Magnetic resonance imaging (MRI) is typically used throughout the patient pathway because routine structural imaging provides detailed anatomical and pathological information and advanced techniques provide additional physiological detail. Using carefully chosen image features, ML is frequently used to allow accurate classification in a variety of scenarios. Rather than being chosen by human selection, ML also enables image features to be identified by an algorithm. Much research is applied to determining molecular profiles, histological tumour grade, and prognosis using MRI images acquired at the time that patients first present with a brain tumour. Differentiating a treatment response from a post-treatment-related effect using imaging is clinically important and also an area of active study (described here in one of two Special Issue publications dedicated to the application of ML in glioma imaging). Conclusion: Although pioneering, most of the evidence is of a low level, having been obtained retrospectively and in single centres. Studies applying ML to build neuro-oncology monitoring biomarker models have yet to show an overall advantage over those using traditional statistical methods. Development and validation of ML models applied to neuro-oncology require large, well-annotated datasets, and therefore multidisciplinary and multi-centre collaborations are necessary.
[ { "created": "Wed, 28 Aug 2019 12:44:30 GMT", "version": "v1" } ]
2019-10-17
[ [ "Booth", "Thomas", "" ], [ "Williams", "Matthew", "" ], [ "Luis", "Aysha", "" ], [ "Cardoso", "Jorge", "" ], [ "Keyoumars", "Ashkan", "" ], [ "Shuaib", "Haris", "" ] ]
Aim: To review how machine learning (ML) is applied to imaging biomarkers in neuro-oncology, in particular for diagnosis, prognosis, and treatment response monitoring. Materials and Methods: The PubMed and MEDLINE databases were searched for articles published before September 2018 using relevant search terms. The search strategy focused on articles applying ML to high-grade glioma biomarkers for treatment response monitoring, prognosis, and prediction. Results: Magnetic resonance imaging (MRI) is typically used throughout the patient pathway because routine structural imaging provides detailed anatomical and pathological information and advanced techniques provide additional physiological detail. Using carefully chosen image features, ML is frequently used to allow accurate classification in a variety of scenarios. Rather than being chosen by human selection, ML also enables image features to be identified by an algorithm. Much research is applied to determining molecular profiles, histological tumour grade, and prognosis using MRI images acquired at the time that patients first present with a brain tumour. Differentiating a treatment response from a post-treatment-related effect using imaging is clinically important and also an area of active study (described here in one of two Special Issue publications dedicated to the application of ML in glioma imaging). Conclusion: Although pioneering, most of the evidence is of a low level, having been obtained retrospectively and in single centres. Studies applying ML to build neuro-oncology monitoring biomarker models have yet to show an overall advantage over those using traditional statistical methods. Development and validation of ML models applied to neuro-oncology require large, well-annotated datasets, and therefore multidisciplinary and multi-centre collaborations are necessary.
1804.03430
Tsvi Tlusty
Tsvi Tlusty
The self-referring DNA and protein: a remark on physical and geometrical aspects
17 pages, 9 figures; title updated and typos corrected
Philos Trans A 2016 Mar 13; 374(2063). pii: 20150070
10.1098/rsta.2015.0070
null
q-bio.OT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
All known life forms are based upon a hierarchy of interwoven feedback loops, operating over a cascade of space, time and energy scales. Among the most basic loops are those connecting DNA and proteins. For example, in genetic networks, DNA genes are expressed as proteins, which may bind near the same genes and thereby control their own expression. In this molecular type of self-reference, information is mapped from the DNA sequence to the protein and back to DNA. There is a variety of dynamic DNA-protein self-reference loops, and the purpose of this remark is to discuss certain geometrical and physical aspects related to the back and forth mapping between DNA and proteins. The discussion raises basic questions regarding the nature of DNA and proteins as self-referring matter, which are examined in a simple toy model.
[ { "created": "Tue, 10 Apr 2018 10:12:58 GMT", "version": "v1" }, { "created": "Thu, 12 Apr 2018 07:11:34 GMT", "version": "v2" } ]
2018-04-13
[ [ "Tlusty", "Tsvi", "" ] ]
All known life forms are based upon a hierarchy of interwoven feedback loops, operating over a cascade of space, time and energy scales. Among the most basic loops are those connecting DNA and proteins. For example, in genetic networks, DNA genes are expressed as proteins, which may bind near the same genes and thereby control their own expression. In this molecular type of self-reference, information is mapped from the DNA sequence to the protein and back to DNA. There is a variety of dynamic DNA-protein self-reference loops, and the purpose of this remark is to discuss certain geometrical and physical aspects related to the back and forth mapping between DNA and proteins. The discussion raises basic questions regarding the nature of DNA and proteins as self-referring matter, which are examined in a simple toy model.
q-bio/0703064
Brigitte Gaillard
S. Bourgeon (DEPE-Iphc), T. Raclot (DEPE-Iphc)
Triiodothyronine suppresses humoral immunity but not T-cell-mediated immune response in incubating female eiders (Somateria mollissima)
null
Gen. Comp. Endocrinol. 151 (2007) 188-194
10.1016/j.ygcen.2007.01.020
null
q-bio.PE
null
Immunity is believed to share limited resources with other physiological functions and this may partly account for the fitness costs of reproduction. Previous studies have shown that the acquired immunity of female common eiders (Somateria mollissima) is suppressed during the incubation fast. To save energy, triiodothyronine (T3) is adaptively decreased during fasting in most bird species, although T3 levels are maintained throughout incubation in female eiders. However, the relationship between thyroid hormones and the immune system is not fully understood. The current study aimed to determine the endocrine mechanisms that underlie immunosuppression in incubating female eiders. ...
[ { "created": "Thu, 29 Mar 2007 12:20:26 GMT", "version": "v1" } ]
2009-07-24
[ [ "Bourgeon", "S.", "", "DEPE-Iphc" ], [ "Raclot", "T.", "", "DEPE-Iphc" ] ]
Immunity is believed to share limited resources with other physiological functions and this may partly account for the fitness costs of reproduction. Previous studies have shown that the acquired immunity of female common eiders (Somateria mollissima) is suppressed during the incubation fast. To save energy, triiodothyronine (T3) is adaptively decreased during fasting in most bird species, although T3 levels are maintained throughout incubation in female eiders. However, the relationship between thyroid hormones and the immune system is not fully understood. The current study aimed to determine the endocrine mechanisms that underlie immunosuppression in incubating female eiders. ...
2003.05647
Mitsuo Kawato
Mitsuo Kawato, Shogo Ohmae, Huu Hoang, Terry Sanger
50 years since the Marr, Ito, and Albus models of the cerebellum
P.4, 5, 8, 10, 14, 16, 18, 22, and some references added
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fifty years have passed since David Marr, Masao Ito, and James Albus proposed seminal models of cerebellar functions. These models share the essential concept that parallel-fiber-Purkinje-cell synapses undergo plastic changes, guided by climbing-fiber activities during sensorimotor learning. However, they differ in several important respects, including holistic versus complementary roles of the cerebellum, pattern recognition versus control as computational objectives, potentiation versus depression of synaptic plasticity, teaching signals versus error signals transmitted by climbing-fibers, sparse expansion coding by granule cells, and cerebellar internal models. In this review, we evaluate the different features of the three models based on recent computational and experimental studies. While acknowledging that the three models have greatly advanced our understanding of cerebellar control mechanisms in eye movements and classical conditioning, we propose a new direction for computational frameworks of the cerebellum. That is, hierarchical reinforcement learning with multiple internal models.
[ { "created": "Thu, 12 Mar 2020 07:38:30 GMT", "version": "v1" }, { "created": "Tue, 24 Mar 2020 09:14:37 GMT", "version": "v2" }, { "created": "Wed, 25 Mar 2020 00:23:31 GMT", "version": "v3" }, { "created": "Mon, 15 Jun 2020 08:21:36 GMT", "version": "v4" } ]
2020-06-16
[ [ "Kawato", "Mitsuo", "" ], [ "Ohmae", "Shogo", "" ], [ "Hoang", "Huu", "" ], [ "Sanger", "Terry", "" ] ]
Fifty years have passed since David Marr, Masao Ito, and James Albus proposed seminal models of cerebellar functions. These models share the essential concept that parallel-fiber-Purkinje-cell synapses undergo plastic changes, guided by climbing-fiber activities during sensorimotor learning. However, they differ in several important respects, including holistic versus complementary roles of the cerebellum, pattern recognition versus control as computational objectives, potentiation versus depression of synaptic plasticity, teaching signals versus error signals transmitted by climbing-fibers, sparse expansion coding by granule cells, and cerebellar internal models. In this review, we evaluate the different features of the three models based on recent computational and experimental studies. While acknowledging that the three models have greatly advanced our understanding of cerebellar control mechanisms in eye movements and classical conditioning, we propose a new direction for computational frameworks of the cerebellum. That is, hierarchical reinforcement learning with multiple internal models.
1707.06881
Denis Horv\'ath
Denis Horv\'ath, Branislav Brutovsky
A New Conceptual Framework for the Therapy by Optimized Multidimensional Pulses of Therapeutic Activity. The case of Multiple Myeloma Model
37 pages, 11 figures, 8 tables
null
null
null
q-bio.TO q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We developed a simulation methodology to assess the eventual therapeutic efficiency of exogenous multiparametric changes in a four-component cellular system described by a system of ordinary differential equations. The method is numerically implemented to simulate the temporal behavior of a cellular system of multiple myeloma cells. The problem is conceived as an inverse optimization task where alternative temporal changes of selected parameters of the ordinary differential equations represent candidate solutions and the objective function quantifies the goals of the therapy. The system under study consists of two main cellular components, tumor cells and their cellular environment, respectively. The subset of model parameters closely related to the environment is substituted by exogenous time dependencies - therapeutic pulses combining continuous functions and discrete parameters subordinated thereafter to the optimization. Synergistic interaction of temporal parametric changes has been observed and quantified, whereby two or more dynamic parameters show effects that are absent if either parameter is stimulated alone. We expect that the theoretical insight into unstable tumor growth provided by the sensitivity and optimization studies could, eventually, help in designing combination therapies.
[ { "created": "Fri, 21 Jul 2017 13:02:43 GMT", "version": "v1" }, { "created": "Mon, 4 Sep 2017 09:31:10 GMT", "version": "v2" }, { "created": "Thu, 25 Jan 2018 17:52:50 GMT", "version": "v3" }, { "created": "Sat, 2 Jun 2018 07:56:19 GMT", "version": "v4" } ]
2018-06-05
[ [ "Horváth", "Denis", "" ], [ "Brutovsky", "Branislav", "" ] ]
We developed a simulation methodology to assess the eventual therapeutic efficiency of exogenous multiparametric changes in a four-component cellular system described by a system of ordinary differential equations. The method is numerically implemented to simulate the temporal behavior of a cellular system of multiple myeloma cells. The problem is conceived as an inverse optimization task where alternative temporal changes of selected parameters of the ordinary differential equations represent candidate solutions and the objective function quantifies the goals of the therapy. The system under study consists of two main cellular components, tumor cells and their cellular environment, respectively. The subset of model parameters closely related to the environment is substituted by exogenous time dependencies - therapeutic pulses combining continuous functions and discrete parameters subordinated thereafter to the optimization. Synergistic interaction of temporal parametric changes has been observed and quantified, whereby two or more dynamic parameters show effects that are absent if either parameter is stimulated alone. We expect that the theoretical insight into unstable tumor growth provided by the sensitivity and optimization studies could, eventually, help in designing combination therapies.
1512.00397
Ehsaneddin Asgari
Ehsaneddin Asgari, Kiavash Garakani and Mohammad R.K Mofrad
A New Approach for Scalable Analysis of Microbial Communities
null
null
null
null
q-bio.GN cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Microbial communities play important roles in the function and maintenance of various biosystems, ranging from the human body to the environment. Current methods for analysis of microbial communities are typically based on taxonomic phylogenetic alignment using 16S rRNA metagenomic or Whole Genome Sequencing data. In typical characterizations of microbial communities, studies deal with billions of microbial sequences, aligning them to a phylogenetic tree. We introduce a new approach for the efficient analysis of microbial communities. Our new reference-free analysis technique is based on n-gram sequence analysis of 16S rRNA data and reduces the processing data size dramatically (by 10^5 fold), without requiring taxonomic alignment. The proposed approach is applied to characterize phenotypic microbial community differences in different settings. Specifically, we applied this approach in classification of microbial communities across different body sites, characterization of oral microbiomes associated with healthy and diseased individuals, and classification of microbial communities longitudinally during the development of infants. Different dimensionality reduction methods are introduced that offer a more scalable analysis framework, while minimizing the loss in classification accuracies. Among dimensionality reduction techniques, we propose a continuous vector representation for microbial communities, which can widely be used for deep learning applications in microbial informatics.
[ { "created": "Tue, 1 Dec 2015 19:27:13 GMT", "version": "v1" } ]
2015-12-02
[ [ "Asgari", "Ehsaneddin", "" ], [ "Garakani", "Kiavash", "" ], [ "Mofrad", "Mohammad R. K", "" ] ]
Microbial communities play important roles in the function and maintenance of various biosystems, ranging from the human body to the environment. Current methods for analysis of microbial communities are typically based on taxonomic phylogenetic alignment using 16S rRNA metagenomic or Whole Genome Sequencing data. In typical characterizations of microbial communities, studies deal with billions of microbial sequences, aligning them to a phylogenetic tree. We introduce a new approach for the efficient analysis of microbial communities. Our new reference-free analysis technique is based on n-gram sequence analysis of 16S rRNA data and reduces the processing data size dramatically (by 10^5 fold), without requiring taxonomic alignment. The proposed approach is applied to characterize phenotypic microbial community differences in different settings. Specifically, we applied this approach in classification of microbial communities across different body sites, characterization of oral microbiomes associated with healthy and diseased individuals, and classification of microbial communities longitudinally during the development of infants. Different dimensionality reduction methods are introduced that offer a more scalable analysis framework, while minimizing the loss in classification accuracies. Among dimensionality reduction techniques, we propose a continuous vector representation for microbial communities, which can widely be used for deep learning applications in microbial informatics.
1611.00294
Tilo Schwalger
Tilo Schwalger, Moritz Deger and Wulfram Gerstner
Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size
Simulation code available from https://github.com/schwalger/mesopopdyn_gif
PLoS Comput. Biol., 13(4):e1005507, 2017
10.1371/journal.pcbi.1005507
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50 -- 2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics like finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly simulate a model of a local cortical microcircuit consisting of eight neuron types. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.
[ { "created": "Tue, 1 Nov 2016 16:56:09 GMT", "version": "v1" }, { "created": "Mon, 7 Nov 2016 18:48:07 GMT", "version": "v2" }, { "created": "Fri, 21 Apr 2017 08:41:24 GMT", "version": "v3" } ]
2017-04-24
[ [ "Schwalger", "Tilo", "" ], [ "Deger", "Moritz", "" ], [ "Gerstner", "Wulfram", "" ] ]
Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50 -- 2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics like finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly simulate a model of a local cortical microcircuit consisting of eight neuron types. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.
1302.6294
Carlos Pe\~na
Carlos Pe\~na and Marianne Espeland
Diversity dynamics in Nymphalidae butterflies: Effect of phylogenetic uncertainty on diversification rate shift estimates
23 pages, 7 figures, 2 tables and 12 supplementary material files. Both authors contributed equally to this work
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The family Nymphalidae is the largest family within the true butterflies and has been used to develop hypotheses explaining evolutionary interactions between plants and insects. Theories of insect and hostplant dynamics predict accelerated diversification in some scenarios. We investigated whether phylogenetic uncertainty affects a commonly used method (MEDUSA, modelling evolutionary diversity using stepwise AIC) for estimating shifts in diversification rates in lineages of the family Nymphalidae, by extending the method to run across a random sample of phylogenetic trees from the posterior distribution of a Bayesian run. We found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees from the posterior distribution can give diversification rates ranging from high values to almost zero for the same clade, and for some clades both significant rate increase and decrease were estimated. Only three out of 13 significant shifts found on the maximum credibility tree were consistent across more than 95% of the trees from the posterior: (i) accelerated diversification for Solanaceae feeders in the tribe Ithomiini; (ii) accelerated diversification in the genus Charaxes, and (iii) deceleration in the Danaina. By using the binary speciation and extinction model (BISSE), we found that a hostplant shift to Solanaceae or a codistributed character is responsible for the increase in diversification rate in Ithomiini, and the result is congruent with the diffuse cospeciation hypothesis. A shift to Apocynaceae is not responsible for the slowdown of diversification in Danaina. Our results show that taking phylogenetic uncertainty into account when estimating diversification rate shifts is of great importance, and relying on the maximum credibility tree alone potentially can give erroneous results.
[ { "created": "Tue, 26 Feb 2013 02:04:23 GMT", "version": "v1" } ]
2013-02-27
[ [ "Peña", "Carlos", "" ], [ "Espeland", "Marianne", "" ] ]
The family Nymphalidae is the largest family within the true butterflies and has been used to develop hypotheses explaining evolutionary interactions between plants and insects. Theories of insect and hostplant dynamics predict accelerated diversification in some scenarios. We investigated whether phylogenetic uncertainty affects a commonly used method (MEDUSA, modelling evolutionary diversity using stepwise AIC) for estimating shifts in diversification rates in lineages of the family Nymphalidae, by extending the method to run across a random sample of phylogenetic trees from the posterior distribution of a Bayesian run. We found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees from the posterior distribution can give diversification rates ranging from high values to almost zero for the same clade, and for some clades both significant rate increase and decrease were estimated. Only three out of 13 significant shifts found on the maximum credibility tree were consistent across more than 95% of the trees from the posterior: (i) accelerated diversification for Solanaceae feeders in the tribe Ithomiini; (ii) accelerated diversification in the genus Charaxes, and (iii) deceleration in the Danaina. By using the binary speciation and extinction model (BISSE), we found that a hostplant shift to Solanaceae or a codistributed character is responsible for the increase in diversification rate in Ithomiini, and the result is congruent with the diffuse cospeciation hypothesis. A shift to Apocynaceae is not responsible for the slowdown of diversification in Danaina. Our results show that taking phylogenetic uncertainty into account when estimating diversification rate shifts is of great importance, and relying on the maximum credibility tree alone potentially can give erroneous results.
1202.4214
Chengcheng Ji
Chengcheng Ji, Liang Wu, Wenchan Zhao, Sishuo Wang, Jianhao Lv
Echinoderms have bilateral tendencies
null
PLoS ONE 7(1): e28978 (2012)
10.1371/journal.pone.0028978
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Echinoderms take many forms of symmetry. Pentameral symmetry is the major form and the other forms are derived from it. However, the ancestors of echinoderms, which originated in the Cambrian period, were believed to be bilaterians. Echinoderm larvae are bilateral during their early development. During embryonic development of starfish and sea urchins, the position and the developmental sequence of each arm are fixed, implying an auxological anterior/posterior axis. Starfish also possess the Hox gene cluster, which controls symmetrical development. Overall, echinoderms are thought to have a bilateral developmental mechanism and process. In this article, we focused on adult starfish behaviors to corroborate this bilateral tendency. We weighed their central disk and each arm to measure the position of the center of gravity. We then studied their turning-over behavior, crawling behavior and fleeing behavior statistically to obtain the center of frequency of each behavior. By joining the center of gravity and each center of frequency, we obtained three behavioral symmetric planes. These behavioral bilateral tendencies might be related to the A/P axis during the embryonic development of the starfish. It is very likely that the adult starfish is, to some extent, bilaterian because it displays some bilateral propensity and has a definite behavioral symmetric plane. The remainder of bilateral symmetry may have benefited echinoderms during their evolution from the Cambrian period to the present.
[ { "created": "Mon, 20 Feb 2012 03:32:08 GMT", "version": "v1" } ]
2015-06-04
[ [ "Ji", "Chengcheng", "" ], [ "Wu", "Liang", "" ], [ "Zhao", "Wenchan", "" ], [ "Wang", "Sishuo", "" ], [ "Lv", "Jianhao", "" ] ]
Echinoderms take many forms of symmetry. Pentameral symmetry is the major form and the other forms are derived from it. However, the ancestors of echinoderms, which originated in the Cambrian period, were believed to be bilaterians. Echinoderm larvae are bilateral during their early development. During embryonic development of starfish and sea urchins, the position and the developmental sequence of each arm are fixed, implying an auxological anterior/posterior axis. Starfish also possess the Hox gene cluster, which controls symmetrical development. Overall, echinoderms are thought to have a bilateral developmental mechanism and process. In this article, we focused on adult starfish behaviors to corroborate this bilateral tendency. We weighed their central disk and each arm to measure the position of the center of gravity. We then studied their turning-over behavior, crawling behavior and fleeing behavior statistically to obtain the center of frequency of each behavior. By joining the center of gravity and each center of frequency, we obtained three behavioral symmetric planes. These behavioral bilateral tendencies might be related to the A/P axis during the embryonic development of the starfish. It is very likely that the adult starfish is, to some extent, bilaterian because it displays some bilateral propensity and has a definite behavioral symmetric plane. The remainder of bilateral symmetry may have benefited echinoderms during their evolution from the Cambrian period to the present.
1011.4902
Andrey Shapkin
A. G. Shapkin, M. V. Taborov, Yu. G. Shapkin
Recording and Reproduction of Pattern Memory Trace in EEG by Direct Electrical Stimulation of Brain Cortex
Article: 9 pages, 3 figures
Bulletin of ESCC SB RAMS, 2011, No.4(80), part 1, p. 289-294 (in Russian)
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study demonstrates the capability of recording an external signal into memory and of reproducing the memory trace of this pattern in the EEG by direct AC electrical stimulation of the rat cerebral cortex. Additionally, we examine shifts of the DC potential level related to these phenomena. We show that in the course of memory trace reproduction, consecutive phases of engram activation and relaxation are registered and accompanied by corresponding negative and positive DC shifts. The observed electrophysiological changes may reflect consecutive activation and inhibition phases of neural ensembles participating in engram formation.
[ { "created": "Mon, 22 Nov 2010 18:35:22 GMT", "version": "v1" }, { "created": "Tue, 22 Nov 2011 08:55:00 GMT", "version": "v2" } ]
2011-11-23
[ [ "Shapkin", "A. G.", "" ], [ "Taborov", "M. V.", "" ], [ "Shapkin", "Yu. G.", "" ] ]
This study demonstrates the capability of recording an external signal into memory and of reproducing the memory trace of this pattern in the EEG by direct AC electrical stimulation of the rat cerebral cortex. Additionally, we examine shifts of the DC potential level related to these phenomena. We show that in the course of memory trace reproduction, consecutive phases of engram activation and relaxation are registered and accompanied by corresponding negative and positive DC shifts. The observed electrophysiological changes may reflect consecutive activation and inhibition phases of neural ensembles participating in engram formation.
1304.7214
George Bass Ph.D.
George E. Bass and James E. Chenevey
Stimulation of Enzyme Reaction Rates by Crystalline Substrate Irradiation: Dependence on Identity of Irradiated Substance
arXiv admin note: text overlap with arXiv:0706.1748
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The study reported here concerns a phenomenon, discovered and extensively investigated by Sorin Comorosan, wherein enzyme initial reaction rates are enhanced as a consequence of incorporation of solutions derived from previously irradiated crystalline material into the reaction medium. Effective irradiation times conform to a sharply oscillatory pattern. In most reports, the irradiated crystalline material has been the substrate for the enzyme reaction to be studied, but there have been exceptions. The experiments presented here serve to confirm and extend this latter aspect of the phenomenon. It is found that the initial reaction rates for the lactic acid dehydrogenase (LDH) conversion of pyruvate to lactate can be stimulated by irradiation of crystalline deposits of sodium chloride, sodium bromide, potassium chloride and diatomaceous earth. Similarly, stimulation of the LDH conversion of lactate to pyruvate is demonstrated for irradiated sodium chloride. There appears to be no required chemical feature of the irradiated material other than the crystalline state.
[ { "created": "Fri, 26 Apr 2013 16:14:53 GMT", "version": "v1" } ]
2013-04-29
[ [ "Bass", "George E.", "" ], [ "Chenevey", "James E.", "" ] ]
The study reported here concerns a phenomenon, discovered and extensively investigated by Sorin Comorosan, wherein enzyme initial reaction rates are enhanced as a consequence of incorporation of solutions derived from previously irradiated crystalline material into the reaction medium. Effective irradiation times conform to a sharply oscillatory pattern. In most reports, the irradiated crystalline material has been the substrate for the enzyme reaction to be studied, but there have been exceptions. The experiments presented here serve to confirm and extend this latter aspect of the phenomenon. It is found that the initial reaction rates for the lactic acid dehydrogenase (LDH) conversion of pyruvate to lactate can be stimulated by irradiation of crystalline deposits of sodium chloride, sodium bromide, potassium chloride and diatomaceous earth. Similarly, stimulation of the LDH conversion of lactate to pyruvate is demonstrated for irradiated sodium chloride. There appears to be no required chemical feature of the irradiated material other than the crystalline state.
1306.1439
Christophe Guyeux
Jacques M. Bahi, Christophe Guyeux, Jean-Marc Nicod, Laurent Philippe
Protein structure prediction software generate two different sets of conformations. Or the study of unfolded self-avoiding walks
Under submission
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-avoiding walks (SAW) are the source of very difficult problems in probability and enumerative combinatorics. They are also of great interest as they are, for instance, the basis of protein structure prediction in bioinformatics. The authors of this article have previously shown that, depending on the prediction algorithm, the sets of obtained conformations differ: all the self-avoiding walks can be reached using stretching-based algorithms, whereas only the folded SAWs can be attained with methods that iteratively fold the straight line. A first study of (un)folded self-avoiding walks is presented in this article. The contribution is mainly a survey of what is currently known about these sets. In particular, we provide clear definitions of various subsets of self-avoiding walks related to pivot moves (folded or unfoldable SAWs, etc.) and the first results we have obtained, theoretically or computationally, on these sets. A list of open questions is provided too, and the consequences for the protein structure prediction problem are finally investigated.
[ { "created": "Thu, 6 Jun 2013 15:34:00 GMT", "version": "v1" } ]
2013-06-07
[ [ "Bahi", "Jacques M.", "" ], [ "Guyeux", "Christophe", "" ], [ "Nicod", "Jean-Marc", "" ], [ "Philippe", "Laurent", "" ] ]
Self-avoiding walks (SAWs) are the source of very difficult problems in probability and enumerative combinatorics. They are also of great interest because they are, for instance, the basis of protein structure prediction in bioinformatics. The authors of this article have previously shown that, depending on the prediction algorithm, the sets of obtained conformations differ: all self-avoiding walks can be reached using stretching-based algorithms, whereas only the folded SAWs can be attained with methods that iteratively fold the straight line. A first study of (un)folded self-avoiding walks is presented in this article. The contribution is primarily a survey of what is currently known about these sets. In particular, we provide clear definitions of various subsets of self-avoiding walks related to pivot moves (folded or unfoldable SAWs, etc.) and the first results we have obtained, theoretically or computationally, on these sets. A list of open questions is provided, and the consequences for the protein structure prediction problem are finally investigated.
2004.07334
Nils Bertschinger
Nils Bertschinger
Visual explanation of country specific differences in Covid-19 dynamics
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This report provides a visual examination of Covid-19 case and death data. In particular, it shows that country-specific differences can to a large extent be explained by two easily interpreted parameters: namely, the delay between reported cases and deaths, and the fraction of cases observed. Furthermore, this allows us to lower-bound the actual total number of people already infected.
[ { "created": "Wed, 15 Apr 2020 20:46:51 GMT", "version": "v1" } ]
2020-04-17
[ [ "Bertschinger", "Nils", "" ] ]
This report provides a visual examination of Covid-19 case and death data. In particular, it shows that country-specific differences can to a large extent be explained by two easily interpreted parameters: namely, the delay between reported cases and deaths, and the fraction of cases observed. Furthermore, this allows us to lower-bound the actual total number of people already infected.
2003.12954
Eric Jones
Eric W. Jones, Parker Shankin-Clarke, and Jean M. Carlson
Navigation and control of outcomes in a generalized Lotka-Volterra model of the microbiome
24 pages, 10 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The generalized Lotka-Volterra (gLV) equations model the microbiome as a collection of interacting ecological species. Here we use a particular experimentally-derived gLV model of C. difficile infection (CDI) as a case study to generate methods that are applicable to generic gLV models. We examine how to transition gLV systems between multiple steady states through the application of direct control protocols, which alter the state of the system via the instantaneous addition or subtraction of microbial species. Then, the geometry of the basins of attraction of point attractors is compressed into an attractor network, which decomposes a multistable high-dimensional landscape into a web of bistable subsystems. This attractor network is used to identify efficient (total intervention volume minimizing) protocols that drive the system from one basin to another. In some cases, the most efficient control protocol is circuitous and will take the system through intermediate steady states with sequential interventions. Clinically, the efficient control of the microbiome has pertinent applications for bacteriotherapies, which seek to remedy microbiome-affiliated diseases by directly altering the composition of the gut microbiome.
[ { "created": "Sun, 29 Mar 2020 06:27:16 GMT", "version": "v1" }, { "created": "Thu, 23 Jul 2020 03:13:19 GMT", "version": "v2" } ]
2020-07-24
[ [ "Jones", "Eric W.", "" ], [ "Shankin-Clarke", "Parker", "" ], [ "Carlson", "Jean M.", "" ] ]
The generalized Lotka-Volterra (gLV) equations model the microbiome as a collection of interacting ecological species. Here we use a particular experimentally-derived gLV model of C. difficile infection (CDI) as a case study to generate methods that are applicable to generic gLV models. We examine how to transition gLV systems between multiple steady states through the application of direct control protocols, which alter the state of the system via the instantaneous addition or subtraction of microbial species. Then, the geometry of the basins of attraction of point attractors is compressed into an attractor network, which decomposes a multistable high-dimensional landscape into a web of bistable subsystems. This attractor network is used to identify efficient (total intervention volume minimizing) protocols that drive the system from one basin to another. In some cases, the most efficient control protocol is circuitous and will take the system through intermediate steady states with sequential interventions. Clinically, the efficient control of the microbiome has pertinent applications for bacteriotherapies, which seek to remedy microbiome-affiliated diseases by directly altering the composition of the gut microbiome.
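The gLV dynamics the abstract above refers to can be sketched in a few lines. This is a minimal illustration, not the paper's fitted CDI model: the functional form dx_i/dt = x_i (r_i + sum_j A_ij x_j) is standard, but the forward-Euler integrator and the single-species parameters below are assumptions for demonstration only.

```python
# Minimal sketch of generalized Lotka-Volterra (gLV) dynamics:
#   dx_i/dt = x_i * (r_i + sum_j A_ij * x_j)
# integrated with forward Euler. Parameters are illustrative,
# not taken from the experimentally-derived CDI model in the paper.

def glv_step(x, r, A, dt):
    """Advance species abundances x by one Euler step, clipped at zero."""
    return [
        max(0.0, xi + dt * xi * (ri + sum(a * xj for a, xj in zip(Ai, x))))
        for xi, ri, Ai in zip(x, r, A)
    ]

def simulate(x0, r, A, dt=0.01, steps=5000):
    """Integrate the gLV system from x0 for steps * dt time units."""
    x = list(x0)
    for _ in range(steps):
        x = glv_step(x, r, A, dt)
    return x

if __name__ == "__main__":
    # Sanity check: one species with r = 1 and self-interaction -1 is the
    # logistic equation dx/dt = x(1 - x), whose stable steady state is 1.
    x = simulate([0.1], r=[1.0], A=[[-1.0]])
    print(round(x[0], 3))  # -> 1.0
```

With multiple species and off-diagonal entries in `A`, the same loop exhibits the multistability that the paper's attractor-network construction decomposes.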
1011.0825
Etienne Joly
Etienne Joly (IPBS)
The existence of species rests on a metastable equilibrium between inbreeding and outbreeding. An essay on the close relationship between speciation, inbreeding and recessive mutations
52 pages
Biology Direct 6 (2011) 62
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Speciation corresponds to the progressive establishment of reproductive barriers between groups of individuals derived from an ancestral stock. Since Darwin did not believe that reproductive barriers could be selected for, he proposed that most events of speciation would occur through a process of separation and divergence, and this point of view is still shared by most evolutionary biologists today. Results: I do, however, contend that, if so much speciation occurs, the most likely explanation is that there must be conditions where reproductive barriers can be directly selected for. In other words, situations where it is advantageous for individuals to reproduce preferentially within a small group and reduce their breeding with the rest of the ancestral population. This leads me to propose a model whereby new species arise not by populations splitting into separate branches, but by small inbreeding groups "budding" from an ancestral stock. This would be driven by several advantages of inbreeding, and mainly by advantageous recessive phenotypes, which could only be retained in the context of inbreeding. Reproductive barriers would thus not arise as secondary consequences of divergent evolution in populations isolated from one another, but under the direct selective pressure of ancestral stocks. Many documented cases of speciation in natural populations appear to fit the model proposed, with more speciation occurring in populations with high inbreeding coefficients, and many recessive characters identified as central to the phenomenon of speciation, with these recessive mutations expected to be surrounded by patterns of limited genomic diversity. 
Conclusions: Whilst adaptive evolution would correspond to gains of function that would, most of the time, be dominant, this type of speciation by budding would thus be driven by mutations resulting in the advantageous loss of certain functions since recessive mutations very often correspond to the inactivation of a gene. A very important further advantage of inbreeding is that it reduces the accumulation of recessive mutations in genomes. A consequence of the model proposed is that the existence of species would correspond to a metastable equilibrium between inbreeding and outbreeding, with excessive inbreeding promoting speciation, and excessive outbreeding resulting in irreversible accumulation of recessive mutations that could ultimately only lead to extinction.
[ { "created": "Wed, 3 Nov 2010 08:49:21 GMT", "version": "v1" }, { "created": "Sun, 28 Nov 2010 13:00:27 GMT", "version": "v2" }, { "created": "Sat, 13 Aug 2011 06:46:22 GMT", "version": "v3" }, { "created": "Fri, 9 Dec 2011 13:15:46 GMT", "version": "v4" } ]
2011-12-12
[ [ "Joly", "Etienne", "", "IPBS" ] ]
Background: Speciation corresponds to the progressive establishment of reproductive barriers between groups of individuals derived from an ancestral stock. Since Darwin did not believe that reproductive barriers could be selected for, he proposed that most events of speciation would occur through a process of separation and divergence, and this point of view is still shared by most evolutionary biologists today. Results: I do, however, contend that, if so much speciation occurs, the most likely explanation is that there must be conditions where reproductive barriers can be directly selected for. In other words, situations where it is advantageous for individuals to reproduce preferentially within a small group and reduce their breeding with the rest of the ancestral population. This leads me to propose a model whereby new species arise not by populations splitting into separate branches, but by small inbreeding groups "budding" from an ancestral stock. This would be driven by several advantages of inbreeding, and mainly by advantageous recessive phenotypes, which could only be retained in the context of inbreeding. Reproductive barriers would thus not arise as secondary consequences of divergent evolution in populations isolated from one another, but under the direct selective pressure of ancestral stocks. Many documented cases of speciation in natural populations appear to fit the model proposed, with more speciation occurring in populations with high inbreeding coefficients, and many recessive characters identified as central to the phenomenon of speciation, with these recessive mutations expected to be surrounded by patterns of limited genomic diversity. 
Conclusions: Whilst adaptive evolution would correspond to gains of function that would, most of the time, be dominant, this type of speciation by budding would thus be driven by mutations resulting in the advantageous loss of certain functions since recessive mutations very often correspond to the inactivation of a gene. A very important further advantage of inbreeding is that it reduces the accumulation of recessive mutations in genomes. A consequence of the model proposed is that the existence of species would correspond to a metastable equilibrium between inbreeding and outbreeding, with excessive inbreeding promoting speciation, and excessive outbreeding resulting in irreversible accumulation of recessive mutations that could ultimately only lead to extinction.
2006.13334
Ian Leifer
Ian Leifer, Flaviano Morone, Saulo D. S. Reis, Jose S. Andrade Jr., Mariano Sigman, Hernan A. Makse
Circuits with broken fibration symmetries perform core logic computations in biological networks
null
PLoS Comput Biol 2020,16(6): e1007776
10.1371/journal.pcbi.1007776
null
q-bio.GN math.GR physics.bio-ph physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that logic computational circuits in gene regulatory networks arise from a fibration symmetry breaking in the network structure. From this idea we implement a constructive procedure that reveals a hierarchy of genetic circuits, ubiquitous across species, that are surprising analogues to the emblematic circuits of solid-state electronics: starting from the transistor and progressing to ring oscillators, current-mirror circuits to toggle switches and flip-flops. These canonical variants serve fundamental operations of synchronization and clocks (in their symmetric states) and memory storage (in their broken symmetry states). These conclusions introduce a theoretically principled strategy to search for computational building blocks in biological networks, and present a systematic route to design synthetic biological circuits.
[ { "created": "Tue, 23 Jun 2020 21:10:28 GMT", "version": "v1" } ]
2020-06-25
[ [ "Leifer", "Ian", "" ], [ "Morone", "Flaviano", "" ], [ "Reis", "Saulo D. S.", "" ], [ "Andrade", "Jose S.", "Jr." ], [ "Sigman", "Mariano", "" ], [ "Makse", "Hernan A.", "" ] ]
We show that logic computational circuits in gene regulatory networks arise from a fibration symmetry breaking in the network structure. From this idea we implement a constructive procedure that reveals a hierarchy of genetic circuits, ubiquitous across species, that are surprising analogues to the emblematic circuits of solid-state electronics: starting from the transistor and progressing to ring oscillators, current-mirror circuits to toggle switches and flip-flops. These canonical variants serve fundamental operations of synchronization and clocks (in their symmetric states) and memory storage (in their broken symmetry states). These conclusions introduce a theoretically principled strategy to search for computational building blocks in biological networks, and present a systematic route to design synthetic biological circuits.
2303.06340
Shi-Ju Ran
Yu-Jia An, Sheng-Chen Bai, Lin Cheng, Xiao-Guang Li, Cheng-en Wang, Xiao-Dong Han, Gang Su, Shi-Ju Ran, Cong Wang
Intelligent diagnostic scheme for lung cancer screening with Raman spectra data by tensor network machine learning
10 pages, 7 figures
null
null
null
q-bio.QM cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial intelligence (AI) has brought tremendous impacts on biomedical sciences, from academic research to clinical applications such as biomarker detection and diagnosis, optimization of treatment, and identification of new therapeutic targets in drug discovery. However, contemporary AI technologies, particularly deep machine learning (ML), severely suffer from non-interpretability, which might uncontrollably lead to incorrect predictions. Interpretability is particularly crucial to ML for clinical diagnosis, as the consumers must gain the necessary sense of security and trust from firm grounds or convincing interpretations. In this work, we propose a tensor-network (TN)-ML method to reliably predict lung cancer patients and their stages by screening Raman spectra data of volatile organic compounds (VOCs) in exhaled breath, which are generally suitable as biomarkers and are considered an ideal way for non-invasive lung cancer screening. The prediction of TN-ML is based on the mutual distances of the breath samples mapped to the quantum Hilbert space. Thanks to the quantum probabilistic interpretation, the certainty of the predictions can be quantitatively characterized. The accuracy of the samples with high certainty is almost 100$\%$. The incorrectly-classified samples exhibit obviously lower certainty, and thus can be identified as anomalies, which will be handled by human experts to guarantee high reliability. Our work sheds light on shifting ``AI for biomedical sciences'' from conventional non-interpretable ML schemes to interpretable human-ML interactive approaches, for the purpose of high accuracy and reliability.
[ { "created": "Sat, 11 Mar 2023 07:57:37 GMT", "version": "v1" } ]
2023-03-14
[ [ "An", "Yu-Jia", "" ], [ "Bai", "Sheng-Chen", "" ], [ "Cheng", "Lin", "" ], [ "Li", "Xiao-Guang", "" ], [ "Wang", "Cheng-en", "" ], [ "Han", "Xiao-Dong", "" ], [ "Su", "Gang", "" ], [ "Ran", "Shi...
Artificial intelligence (AI) has brought tremendous impacts on biomedical sciences, from academic research to clinical applications such as biomarker detection and diagnosis, optimization of treatment, and identification of new therapeutic targets in drug discovery. However, contemporary AI technologies, particularly deep machine learning (ML), severely suffer from non-interpretability, which might uncontrollably lead to incorrect predictions. Interpretability is particularly crucial to ML for clinical diagnosis, as the consumers must gain the necessary sense of security and trust from firm grounds or convincing interpretations. In this work, we propose a tensor-network (TN)-ML method to reliably predict lung cancer patients and their stages by screening Raman spectra data of volatile organic compounds (VOCs) in exhaled breath, which are generally suitable as biomarkers and are considered an ideal way for non-invasive lung cancer screening. The prediction of TN-ML is based on the mutual distances of the breath samples mapped to the quantum Hilbert space. Thanks to the quantum probabilistic interpretation, the certainty of the predictions can be quantitatively characterized. The accuracy of the samples with high certainty is almost 100$\%$. The incorrectly-classified samples exhibit obviously lower certainty, and thus can be identified as anomalies, which will be handled by human experts to guarantee high reliability. Our work sheds light on shifting ``AI for biomedical sciences'' from conventional non-interpretable ML schemes to interpretable human-ML interactive approaches, for the purpose of high accuracy and reliability.
1303.0805
Richard A Neher
Fabio Zanini and Richard A. Neher
Deleterious synonymous mutations hitchhike to high frequency in HIV-1 env evolution
null
null
10.1128/JVI.01529-13
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intrapatient HIV-1 evolution is dominated by selection on the protein level in the arms race with the adaptive immune system. When cytotoxic CD8+ T-cells or neutralizing antibodies target a new epitope, the virus often escapes via nonsynonymous mutations that impair recognition. Synonymous mutations do not affect this interplay and are often assumed to be neutral. We analyze longitudinal intrapatient data from the C2-V5 part of the envelope gene (env) and observe that synonymous derived alleles rarely fix even though they often reach high frequencies in the viral population. We find that synonymous mutations that disrupt base pairs in RNA stems flanking the variable loops of gp120 are more likely to be lost than other synonymous changes, hinting at a direct fitness effect of these stem-loop structures in the HIV-1 RNA. Computational modeling indicates that these synonymous mutations have a (Malthusian) selection coefficient of the order of -0.002 and that they are brought up to high frequency by hitchhiking on neighboring beneficial nonsynonymous alleles. The patterns of fixation of nonsynonymous mutations estimated from the longitudinal data and comparisons with computer models suggest that escape mutations in C2-V5 are only transiently beneficial, either because the immune system is catching up or because of competition between equivalent escapes.
[ { "created": "Mon, 4 Mar 2013 19:41:41 GMT", "version": "v1" } ]
2014-03-25
[ [ "Zanini", "Fabio", "" ], [ "Neher", "Richard A.", "" ] ]
Intrapatient HIV-1 evolution is dominated by selection on the protein level in the arms race with the adaptive immune system. When cytotoxic CD8+ T-cells or neutralizing antibodies target a new epitope, the virus often escapes via nonsynonymous mutations that impair recognition. Synonymous mutations do not affect this interplay and are often assumed to be neutral. We analyze longitudinal intrapatient data from the C2-V5 part of the envelope gene (env) and observe that synonymous derived alleles rarely fix even though they often reach high frequencies in the viral population. We find that synonymous mutations that disrupt base pairs in RNA stems flanking the variable loops of gp120 are more likely to be lost than other synonymous changes, hinting at a direct fitness effect of these stem-loop structures in the HIV-1 RNA. Computational modeling indicates that these synonymous mutations have a (Malthusian) selection coefficient of the order of -0.002 and that they are brought up to high frequency by hitchhiking on neighboring beneficial nonsynonymous alleles. The patterns of fixation of nonsynonymous mutations estimated from the longitudinal data and comparisons with computer models suggest that escape mutations in C2-V5 are only transiently beneficial, either because the immune system is catching up or because of competition between equivalent escapes.
2106.02085
Kelly Iarosz
Antonio M Batista, Silvio L T Souza, Kelly C Iarosz, Alexandre C L Almeida, Jos\'e D Szezech, Enrique C Gabrick, Michele Mugnaine, Gefferson L dos Santos, Iber\^e L Caldas
Simulation of deterministic compartmental models for infectious diseases dynamics
null
null
null
null
q-bio.PE physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Infectious diseases are caused by pathogenic microorganisms and can spread in different ways. Mathematical models and computational simulation have been used extensively to investigate the transmission and spread of infectious diseases. In other words, mathematical model simulation can be used to analyse the dynamics of infectious diseases, aiming to understand the effects and how to control the spread. In general, these models are based on compartments, where each compartment contains individuals with the same characteristics, such as susceptible, exposed, infected, and recovered. In this paper, we cast further light on some classical epidemic models, reporting possible outcomes from numerical simulation. Furthermore, we provide routines in a repository for simulations.
[ { "created": "Thu, 3 Jun 2021 19:02:18 GMT", "version": "v1" } ]
2021-06-07
[ [ "Batista", "Antonio M", "" ], [ "Souza", "Silvio L T", "" ], [ "Iarosz", "Kelly C", "" ], [ "Almeida", "Alexandre C L", "" ], [ "Szezech", "José D", "" ], [ "Gabrick", "Enrique C", "" ], [ "Mugnaine", "Michele"...
Infectious diseases are caused by pathogenic microorganisms and can spread in different ways. Mathematical models and computational simulation have been used extensively to investigate the transmission and spread of infectious diseases. In other words, mathematical model simulation can be used to analyse the dynamics of infectious diseases, aiming to understand the effects and how to control the spread. In general, these models are based on compartments, where each compartment contains individuals with the same characteristics, such as susceptible, exposed, infected, and recovered. In this paper, we cast further light on some classical epidemic models, reporting possible outcomes from numerical simulation. Furthermore, we provide routines in a repository for simulations.
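The compartmental structure described above can be sketched with the simplest such model, SIR. This is a minimal illustration of the model class, not the paper's repository routines; the `beta` and `gamma` values and the Euler step are assumptions chosen only for demonstration.

```python
# Minimal sketch of a deterministic SIR compartmental model:
#   dS/dt = -beta * S * I / N,  dI/dt = beta * S * I / N - gamma * I,
#   dR/dt = gamma * I
# integrated with forward Euler. Parameter values are illustrative only.

def sir(beta, gamma, s0, i0, r0, dt=0.1, steps=2000):
    """Return (S, I, R) after steps * dt time units of SIR dynamics."""
    s, i, r = s0, i0, r0
    n = s0 + i0 + r0  # total population, conserved by construction
    for _ in range(steps):
        new_inf = beta * s * i / n * dt   # susceptibles becoming infected
        new_rec = gamma * i * dt          # infected recovering
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

if __name__ == "__main__":
    # Basic reproduction number R0 = beta / gamma = 3: an epidemic occurs.
    s, i, r = sir(beta=0.3, gamma=0.1, s0=999.0, i0=1.0, r0=0.0)
    print(round(s + i + r))  # -> 1000 (population is conserved)
```

The other models the paper surveys (SEIR, SIRS, etc.) follow by adding compartments and flow terms to the same update loop.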
2011.10657
Mohammad Ali Moni
Sakifa Aktar, Md. Martuza Ahamad, Md. Rashed-Al-Mahfuz, AKM Azad, Shahadat Uddin, A H M Kamal, Salem A. Alyami, Ping-I Lin, Sheikh Mohammed Shariful Islam, Julian M.W. Quinn, Valsamma Eapen, and Mohammad Ali Moni
Predicting Patient COVID-19 Disease Severity by means of Statistical and Machine Learning Analysis of Blood Cell Transcriptome Data
null
JMIR Med Inform 2021;9(4):e25884, PMID: 33779565
10.2196/25884
JMIR ms#25884
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Introduction: For COVID-19 patients, accurate prediction of disease severity and mortality risk would greatly improve care delivery and resource allocation. There are many patient-related factors, such as pre-existing comorbidities, that affect disease severity. Since rapid automated profiling of peripheral blood samples is widely available, we investigated how such data from the peripheral blood of COVID-19 patients might be used to predict clinical outcomes. Methods: We investigated clinical datasets from COVID-19 patients with known outcomes by combining statistical comparison and correlation methods with machine learning algorithms; the latter included decision tree, random forest, variants of gradient boosting machine, support vector machine, K-nearest neighbour and deep learning methods. Results: Our work revealed several clinical parameters measurable in blood samples which discriminated between healthy people and COVID-19-positive patients and showed predictive value for the later severity of COVID-19 symptoms. We thus developed a number of analytic methods whose accuracy and precision for disease severity and mortality outcome predictions were above 90%. Conclusions: In sum, we developed methodologies to analyse routine clinical data from patients, enabling more accurate prediction of COVID-19 patient outcomes. This type of approach could, by employing standard hospital laboratory analyses of patient blood, be utilised to identify COVID-19 patients at high risk of mortality and so enable their treatment to be optimised.
[ { "created": "Thu, 19 Nov 2020 10:32:46 GMT", "version": "v1" } ]
2021-04-20
[ [ "Aktar", "Sakifa", "" ], [ "Ahamad", "Md. Martuza", "" ], [ "Rashed-Al-Mahfuz", "Md.", "" ], [ "Azad", "AKM", "" ], [ "Uddin", "Shahadat", "" ], [ "Kamal", "A H M", "" ], [ "Alyami", "Salem A.", "" ], [...
Introduction: For COVID-19 patients, accurate prediction of disease severity and mortality risk would greatly improve care delivery and resource allocation. There are many patient-related factors, such as pre-existing comorbidities, that affect disease severity. Since rapid automated profiling of peripheral blood samples is widely available, we investigated how such data from the peripheral blood of COVID-19 patients might be used to predict clinical outcomes. Methods: We investigated clinical datasets from COVID-19 patients with known outcomes by combining statistical comparison and correlation methods with machine learning algorithms; the latter included decision tree, random forest, variants of gradient boosting machine, support vector machine, K-nearest neighbour and deep learning methods. Results: Our work revealed several clinical parameters measurable in blood samples which discriminated between healthy people and COVID-19-positive patients and showed predictive value for the later severity of COVID-19 symptoms. We thus developed a number of analytic methods whose accuracy and precision for disease severity and mortality outcome predictions were above 90%. Conclusions: In sum, we developed methodologies to analyse routine clinical data from patients, enabling more accurate prediction of COVID-19 patient outcomes. This type of approach could, by employing standard hospital laboratory analyses of patient blood, be utilised to identify COVID-19 patients at high risk of mortality and so enable their treatment to be optimised.
2207.13141
Chen Li
Yifeng Zhang, Qihan Xuan, Qiyuan Fu, Chen Li
Simulation of snakes using vertical body bending to traverse terrain with large height variation
null
null
null
null
q-bio.QM physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Snakes move across various terrains by bending their elongated bodies. Recent studies discovered that snakes can use vertical bending to traverse terrain of large height variation, such as horizontally oriented cylinders, a wedge (Jurestovsky, Usher, Astley, 2021, J. Exp. Biol.), and uneven terrain (Fu & Li, 2020, Roy. Soc. Open Sci.; Fu, Astley, Li, 2022, Bioinspiration & Biomimetics). Here, to understand how vertical bending generates propulsion, we developed a dynamic simulation of a snake traversing a wedge (height = 0.05 body length, slope = 27 degrees) and a half-cylindrical obstacle (height = 0.1 body length). By propagating down the body an internal torque profile with a maximum around the obstacle, the simulated snake moved forward as observed in the animal. Remarkably, even when frictional drag is low (snake-terrain kinetic friction coefficient of 0.20), the body must push against the wedge with a pressure 5 times that from body weight to generate sufficient forward propulsion. This indicates that snakes are highly capable of bending vertically to push against the environment and generate propulsion. Testing different controllers revealed that contact force feedback further helps generate and maintain propulsion effectively under unknown terrain perturbations.
[ { "created": "Tue, 26 Jul 2022 18:41:45 GMT", "version": "v1" } ]
2022-07-28
[ [ "Zhang", "Yifeng", "" ], [ "Xuan", "Qihan", "" ], [ "Fu", "Qiyuan", "" ], [ "Li", "Chen", "" ] ]
Snakes move across various terrains by bending their elongated bodies. Recent studies discovered that snakes can use vertical bending to traverse terrain of large height variation, such as horizontally oriented cylinders, a wedge (Jurestovsky, Usher, Astley, 2021, J. Exp. Biol.), and uneven terrain (Fu & Li, 2020, Roy. Soc. Open Sci.; Fu, Astley, Li, 2022, Bioinspiration & Biomimetics). Here, to understand how vertical bending generates propulsion, we developed a dynamic simulation of a snake traversing a wedge (height = 0.05 body length, slope = 27 degrees) and a half-cylindrical obstacle (height = 0.1 body length). By propagating down the body an internal torque profile with a maximum around the obstacle, the simulated snake moved forward as observed in the animal. Remarkably, even when frictional drag is low (snake-terrain kinetic friction coefficient of 0.20), the body must push against the wedge with a pressure 5 times that from body weight to generate sufficient forward propulsion. This indicates that snakes are highly capable of bending vertically to push against the environment and generate propulsion. Testing different controllers revealed that contact force feedback further helps generate and maintain propulsion effectively under unknown terrain perturbations.
1611.09212
Riccardo Franco
Riccardo Franco
Towards a new quantum cognition model
null
null
null
null
q-bio.NC cs.AI quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article presents a new quantum-like model for cognition explicitly based on knowledge. It is shown that this model, called QKT (quantum knowledge-based theory), is able to coherently describe some experimental results that are problematic for prior quantum-like decision models. In particular, I consider the experimental results relevant to post-decision cognitive dissonance, the problems relevant to the question order effect and response replicability, and those relevant to the grand-reciprocity equations. A new set of postulates is proposed, which makes explicit the different meanings given to the projectors and to the quantum states. In the final part, I show that the use of quantum gates can help to better describe and understand the evolution of quantum-like models.
[ { "created": "Wed, 23 Nov 2016 23:17:10 GMT", "version": "v1" } ]
2016-11-29
[ [ "Franco", "Riccardo", "" ] ]
This article presents a new quantum-like model for cognition explicitly based on knowledge. It is shown that this model, called QKT (quantum knowledge-based theory), is able to coherently describe some experimental results that are problematic for prior quantum-like decision models. In particular, I consider the experimental results relevant to post-decision cognitive dissonance, the problems relevant to the question order effect and response replicability, and those relevant to the grand-reciprocity equations. A new set of postulates is proposed, which makes explicit the different meanings given to the projectors and to the quantum states. In the final part, I show that the use of quantum gates can help to better describe and understand the evolution of quantum-like models.
1906.02241
Yun Zhao
Yun Zhao, Elmer Guzman, Morgane Audouard, Zhuowei Cheng, Paul K. Hansma, Kenneth S. Kosik, and Linda Petzold
A Deep Learning Framework for Classification of in vitro Multi-Electrode Array Recordings
14 pages, in ICDM 2019
null
null
null
q-bio.NC cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-Electrode Arrays (MEAs) have been widely used to record neuronal activities, which could be used in the diagnosis of gene defects and drug effects. In this paper, we address the problem of classifying in vitro MEA recordings of mouse and human neuronal cultures from different genotypes, where there is no easy way to directly utilize raw sequences as inputs to train an end-to-end classification model. While carefully extracting some features by hand could partially solve the problem, this approach suffers from obvious drawbacks such as difficulty of generalizing. We propose a deep learning framework to address this challenge. Our approach correctly classifies neuronal culture data prepared from two different genotypes -- a mouse knockout of the delta-catenin gene and human induced Pluripotent Stem Cell-derived neurons from Williams syndrome. By splitting the long recordings into short slices for training, and applying Consensus Prediction during testing, our deep learning approach improves the prediction accuracy by 16.69% compared with feature-based logistic regression for mouse MEA recordings. We further achieve an accuracy of 95.91% using Consensus Prediction in one subset of mouse MEA recording data, which were all recorded at six days in vitro. As high-density MEA recordings become more widely available, this approach could be generalized for classification of neurons carrying different mutations and classification of drug responses.
[ { "created": "Wed, 5 Jun 2019 18:36:31 GMT", "version": "v1" } ]
2019-06-07
[ [ "Zhao", "Yun", "" ], [ "Guzman", "Elmer", "" ], [ "Audouard", "Morgane", "" ], [ "Cheng", "Zhuowei", "" ], [ "Hansma", "Paul K.", "" ], [ "Kosik", "Kenneth S.", "" ], [ "Petzold", "Linda", "" ] ]
Multi-Electrode Arrays (MEAs) have been widely used to record neuronal activities, which could be used in the diagnosis of gene defects and drug effects. In this paper, we address the problem of classifying in vitro MEA recordings of mouse and human neuronal cultures from different genotypes, where there is no easy way to directly utilize raw sequences as inputs to train an end-to-end classification model. While carefully extracting some features by hand could partially solve the problem, this approach suffers from obvious drawbacks such as difficulty of generalizing. We propose a deep learning framework to address this challenge. Our approach correctly classifies neuronal culture data prepared from two different genotypes -- a mouse Knockout of the delta-catenin gene and human induced Pluripotent Stem Cell-derived neurons from Williams syndrome. By splitting the long recordings into short slices for training, and applying Consensus Prediction during testing, our deep learning approach improves the prediction accuracy by 16.69% compared with feature based Logistic Regression for mouse MEA recordings. We further achieve an accuracy of 95.91% using Consensus Prediction in one subset of mouse MEA recording data, which were all recorded at six days in vitro. As high-density MEA recordings become more widely available, this approach could be generalized for classification of neurons carrying different mutations and classification of drug responses.
1708.09665
Peter Czuppon
Peter Czuppon and Arne Traulsen
Fixation probabilities in populations under demographic fluctuations
31 pages, 7 figures
Journal of Mathematical Biology, 2018
10.1007/s00285-018-1251-9
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
We study the fixation probability of a mutant type when introduced into a resident population. As opposed to the usual assumption of constant population size, we allow for stochastically varying population sizes. This is implemented by a stochastic competitive Lotka-Volterra model. The competition coefficients are interpreted in terms of inverse payoffs emerging from an evolutionary game. Since our study focuses on the impact of the competition values, we assume the same birth and death rates for both types. In this general framework, we derive an approximate formula for the fixation probability {\phi} of the mutant type under weak selection. The qualitative behavior of {\phi} when compared to the neutral scenario is governed by the invasion dynamics of an initially rare type. Higher payoffs when competing with the resident type yield higher values of {\phi}. Additionally, we investigate the influence of the remaining parameters and find an explicit dependence of {\phi} on the mixed equilibrium value of the corresponding deterministic system (given that the parameter values allow for its existence).
[ { "created": "Thu, 31 Aug 2017 11:13:20 GMT", "version": "v1" } ]
2018-06-08
[ [ "Czuppon", "Peter", "" ], [ "Traulsen", "Arne", "" ] ]
We study the fixation probability of a mutant type when introduced into a resident population. As opposed to the usual assumption of constant population size, we allow for stochastically varying population sizes. This is implemented by a stochastic competitive Lotka-Volterra model. The competition coefficients are interpreted in terms of inverse payoffs emerging from an evolutionary game. Since our study focuses on the impact of the competition values, we assume the same birth and death rates for both types. In this general framework, we derive an approximate formula for the fixation probability {\phi} of the mutant type under weak selection. The qualitative behavior of {\phi} when compared to the neutral scenario is governed by the invasion dynamics of an initially rare type. Higher payoffs when competing with the resident type yield higher values of {\phi}. Additionally, we investigate the influence of the remaining parameters and find an explicit dependence of {\phi} on the mixed equilibrium value of the corresponding deterministic system (given that the parameter values allow for its existence).
1708.01792
Abhishek Deshpande
Abhishek Deshpande and Thomas E. Ouldridge
High rates of fuel consumption are not required by insulating motifs to suppress retroactivity in biochemical circuits
26 pages, 19 figures, To appear in Engineering Biology
null
null
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Retroactivity arises when the coupling of a molecular network $\mathcal{U}$ to a downstream network $\mathcal{D}$ results in signal propagation back from $\mathcal{D}$ to $\mathcal{U}$. The phenomenon represents a breakdown in modularity of biochemical circuits and hampers the rational design of complex functional networks. Considering simple models of signal-transduction architectures, we demonstrate the strong dependence of retroactivity on the properties of the upstream system, and explore the cost and efficacy of fuel-consuming insulating motifs that can mitigate retroactive effects. We find that simple insulating motifs can suppress retroactivity at a low fuel cost by coupling only weakly to the upstream system $\mathcal{U}$. However, this design approach reduces the signalling network's robustness to perturbations from leak reactions, and potentially compromises its ability to respond to rapidly-varying signals.
[ { "created": "Sat, 5 Aug 2017 17:24:45 GMT", "version": "v1" }, { "created": "Thu, 10 Aug 2017 12:39:27 GMT", "version": "v2" }, { "created": "Fri, 11 Aug 2017 10:50:49 GMT", "version": "v3" }, { "created": "Tue, 7 Nov 2017 00:07:35 GMT", "version": "v4" } ]
2017-11-08
[ [ "Deshpande", "Abhishek", "" ], [ "Ouldridge", "Thomas E.", "" ] ]
Retroactivity arises when the coupling of a molecular network $\mathcal{U}$ to a downstream network $\mathcal{D}$ results in signal propagation back from $\mathcal{D}$ to $\mathcal{U}$. The phenomenon represents a breakdown in modularity of biochemical circuits and hampers the rational design of complex functional networks. Considering simple models of signal-transduction architectures, we demonstrate the strong dependence of retroactivity on the properties of the upstream system, and explore the cost and efficacy of fuel-consuming insulating motifs that can mitigate retroactive effects. We find that simple insulating motifs can suppress retroactivity at a low fuel cost by coupling only weakly to the upstream system $\mathcal{U}$. However, this design approach reduces the signalling network's robustness to perturbations from leak reactions, and potentially compromises its ability to respond to rapidly-varying signals.
1410.6455
Yong Kong
Yong Kong
Btrim: A fast, lightweight adapter and quality trimming program for next-generation sequencing technologies
8 pages, 1 figure
Genomics, 98, 152-153 (2011)
10.1016/j.ygeno.2011.05.009
null
q-bio.GN cs.CE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Btrim is a fast and lightweight software to trim adapters and low quality regions in reads from ultra high-throughput next-generation sequencing machines. It also can reliably identify barcodes and assign the reads to the original samples. Based on a modified Myers's bit-vector dynamic programming algorithm, Btrim can handle indels in adapters and barcodes. It removes low quality regions and trims off adapters at both or either end of the reads. A typical trimming of 30M reads with two sets of adapter pairs can be done in about a minute with a small memory footprint. Btrim is a versatile stand-alone tool that can be used as the first step in virtually all next-generation sequence analysis pipelines. The program is available at \url{http://graphics.med.yale.edu/trim/}.
[ { "created": "Thu, 23 Oct 2014 18:56:10 GMT", "version": "v1" } ]
2024-05-28
[ [ "Kong", "Yong", "" ] ]
Btrim is a fast and lightweight software to trim adapters and low quality regions in reads from ultra high-throughput next-generation sequencing machines. It also can reliably identify barcodes and assign the reads to the original samples. Based on a modified Myers's bit-vector dynamic programming algorithm, Btrim can handle indels in adapters and barcodes. It removes low quality regions and trims off adapters at both or either end of the reads. A typical trimming of 30M reads with two sets of adapter pairs can be done in about a minute with a small memory footprint. Btrim is a versatile stand-alone tool that can be used as the first step in virtually all next-generation sequence analysis pipelines. The program is available at \url{http://graphics.med.yale.edu/trim/}.
0710.5665
Peter Hinow
Shizhen Emily Wang, Peter Hinow, Nicole Bryce, Alissa M. Weaver, Lourdes Estrada, Carlos L. Arteaga, Glenn F. Webb
A mathematical model quantifies proliferation and motility effects of TGF--$\beta$ on cancer cells
15 pages, 4 figures; to appear in Computational and Mathematical Methods in Medicine
Comput. Math. Methods Med. 10:71-83, 2009
null
null
q-bio.QM
null
Transforming growth factor (TGF) $\beta$ is known to have properties of both a tumor suppressor and a tumor promoter. While it inhibits cell proliferation, it also increases cell motility and decreases cell--cell adhesion. Coupling mathematical modeling and experiments, we investigate the growth and motility of oncogene--expressing human mammary epithelial cells under exposure to TGF--$\beta$. We use a version of the well--known Fisher--Kolmogorov equation, and prescribe a procedure for its parametrization. We quantify the simultaneous effects of TGF--$\beta$ to increase the tendency of individual cells and cell clusters to move randomly and to decrease overall population growth. We demonstrate that in experiments with TGF--$\beta$ treated cells \textit{in vitro}, TGF--$\beta$ increases cell motility by a factor of 2 and decreases cell proliferation by a factor of 1/2 in comparison with untreated cells.
[ { "created": "Tue, 30 Oct 2007 14:52:40 GMT", "version": "v1" }, { "created": "Sat, 9 Feb 2008 17:13:15 GMT", "version": "v2" }, { "created": "Thu, 1 May 2008 15:40:23 GMT", "version": "v3" } ]
2009-03-27
[ [ "Wang", "Shizhen Emily", "" ], [ "Hinow", "Peter", "" ], [ "Bryce", "Nicole", "" ], [ "Weaver", "Alissa M.", "" ], [ "Estrada", "Lourdes", "" ], [ "Arteaga", "Carlos L.", "" ], [ "Webb", "Glenn F.", "" ] ...
Transforming growth factor (TGF) $\beta$ is known to have properties of both a tumor suppressor and a tumor promoter. While it inhibits cell proliferation, it also increases cell motility and decreases cell--cell adhesion. Coupling mathematical modeling and experiments, we investigate the growth and motility of oncogene--expressing human mammary epithelial cells under exposure to TGF--$\beta$. We use a version of the well--known Fisher--Kolmogorov equation, and prescribe a procedure for its parametrization. We quantify the simultaneous effects of TGF--$\beta$ to increase the tendency of individual cells and cell clusters to move randomly and to decrease overall population growth. We demonstrate that in experiments with TGF--$\beta$ treated cells \textit{in vitro}, TGF--$\beta$ increases cell motility by a factor of 2 and decreases cell proliferation by a factor of 1/2 in comparison with untreated cells.
2311.00269
Dhaker Kroumi
Dhaker Kroumi and Sabin Lessard
Evolutionary game with stochastic payoffs in a finite island model
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
In this paper, we consider a two-player two-strategy game with random payoffs in a population subdivided into $d$ demes, each containing $N$ individuals at the beginning of any given generation and experiencing local extinction and recolonization with some fixed probability $m$ after reproduction and selection among offspring. Within each deme, offspring engage in random pairwise interactions, and the payoffs are assumed to have means and variances proportional to the inverse of the population size. By verifying the conditions given in Ethier and Nagylaki (1980) to approximate Markov chains with two time scales, we establish that the discrete-time evolutionary dynamics with $Nd$ generations as unit of time converges to a continuous-time diffusion as $d\rightarrow\infty$. The infinitesimal mean and variance of this diffusion are expressed in terms of the population-scaled means and variances of the payoffs besides identity-by-descent measures between offspring in the same deme in a neutral population. We show that the probability for a strategy to fix in the population starting from an initial frequency $(Nd)^{-1}$ generally increases as the payoffs to that strategy exhibit less variability or the payoffs to the other strategy more variability. As a result, differences in variability can make this fixation probability for cooperation larger than the corresponding one for defection. As the deme-scaled extinction rate $\nu=mN$ decreases for $N$ large enough and $m$ small enough, creating a higher level of identity among offspring within demes, the differences between the population-scaled variances of the payoffs for interacting offspring of different types increases this effect to a greater extent than the differences for interacting offspring of the same type.
[ { "created": "Wed, 1 Nov 2023 03:36:12 GMT", "version": "v1" } ]
2023-11-02
[ [ "Kroumi", "Dhaker", "" ], [ "Lessard", "Sabin", "" ] ]
In this paper, we consider a two-player two-strategy game with random payoffs in a population subdivided into $d$ demes, each containing $N$ individuals at the beginning of any given generation and experiencing local extinction and recolonization with some fixed probability $m$ after reproduction and selection among offspring. Within each deme, offspring engage in random pairwise interactions, and the payoffs are assumed to have means and variances proportional to the inverse of the population size. By verifying the conditions given in Ethier and Nagylaki (1980) to approximate Markov chains with two time scales, we establish that the discrete-time evolutionary dynamics with $Nd$ generations as unit of time converges to a continuous-time diffusion as $d\rightarrow\infty$. The infinitesimal mean and variance of this diffusion are expressed in terms of the population-scaled means and variances of the payoffs besides identity-by-descent measures between offspring in the same deme in a neutral population. We show that the probability for a strategy to fix in the population starting from an initial frequency $(Nd)^{-1}$ generally increases as the payoffs to that strategy exhibit less variability or the payoffs to the other strategy more variability. As a result, differences in variability can make this fixation probability for cooperation larger than the corresponding one for defection. As the deme-scaled extinction rate $\nu=mN$ decreases for $N$ large enough and $m$ small enough, creating a higher level of identity among offspring within demes, the differences between the population-scaled variances of the payoffs for interacting offspring of different types increases this effect to a greater extent than the differences for interacting offspring of the same type.
1211.7167
Ali R. Mohazab
Ali R. Mohazab and Steven S. Plotkin
Polymer uncrossing and knotting in protein folding, and their role in minimal folding pathways
null
null
10.1371/journal.pone.0053642
null
q-bio.BM cond-mat.soft physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/3.0/
We introduce a method for calculating the extent to which chain non-crossing is important in the most efficient, optimal trajectories or pathways for a protein to fold. This involves recording all unphysical crossing events of a ghost chain, and calculating the minimal uncrossing cost that would have been required to avoid such events. A depth-first tree search algorithm is applied to find minimal transformations to fold $\alpha$, $\beta$, $\alpha/\beta$, and knotted proteins. In all cases, the extra uncrossing/non-crossing distance is a small fraction of the total distance travelled by a ghost chain. Different structural classes may be distinguished by the amount of extra uncrossing distance, and the effectiveness of such discrimination is compared with other order parameters. It was seen that non-crossing distance over chain length provided the best discrimination between structural and kinetic classes. The scaling of non-crossing distance with chain length implies an inevitable crossover to entanglement-dominated folding mechanisms for sufficiently long chains. We further quantify the minimal folding pathways by collecting the sequence of uncrossing moves, which generally involve leg, loop, and elbow-like uncrossing moves, and rendering the collection of these moves over the unfolded ensemble as a multiple-transformation "alignment". The consensus minimal pathway is constructed and shown schematically for representative cases of an $\alpha$, $\beta$, and knotted protein. An overlap parameter is defined between pathways; we find that $\alpha$ proteins have minimal overlap indicating diverse folding pathways, knotted proteins are highly constrained to follow a dominant pathway, and $\beta$ proteins are somewhere in between. Thus we have shown how topological chain constraints can induce dominant pathway mechanisms in protein folding.
[ { "created": "Fri, 30 Nov 2012 07:15:22 GMT", "version": "v1" } ]
2015-06-12
[ [ "Mohazab", "Ali R.", "" ], [ "Plotkin", "Steven S.", "" ] ]
We introduce a method for calculating the extent to which chain non-crossing is important in the most efficient, optimal trajectories or pathways for a protein to fold. This involves recording all unphysical crossing events of a ghost chain, and calculating the minimal uncrossing cost that would have been required to avoid such events. A depth-first tree search algorithm is applied to find minimal transformations to fold $\alpha$, $\beta$, $\alpha/\beta$, and knotted proteins. In all cases, the extra uncrossing/non-crossing distance is a small fraction of the total distance travelled by a ghost chain. Different structural classes may be distinguished by the amount of extra uncrossing distance, and the effectiveness of such discrimination is compared with other order parameters. It was seen that non-crossing distance over chain length provided the best discrimination between structural and kinetic classes. The scaling of non-crossing distance with chain length implies an inevitable crossover to entanglement-dominated folding mechanisms for sufficiently long chains. We further quantify the minimal folding pathways by collecting the sequence of uncrossing moves, which generally involve leg, loop, and elbow-like uncrossing moves, and rendering the collection of these moves over the unfolded ensemble as a multiple-transformation "alignment". The consensus minimal pathway is constructed and shown schematically for representative cases of an $\alpha$, $\beta$, and knotted protein. An overlap parameter is defined between pathways; we find that $\alpha$ proteins have minimal overlap indicating diverse folding pathways, knotted proteins are highly constrained to follow a dominant pathway, and $\beta$ proteins are somewhere in between. Thus we have shown how topological chain constraints can induce dominant pathway mechanisms in protein folding.
2405.18343
Emilio Mendiola
Emilio A. Mendiola, Raza Rana Mehdi, Dipan J. Shah, Reza Avazmohammadi
On in-silico estimation of left ventricular end-diastolic pressure from cardiac strains
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Left ventricular diastolic dysfunction (LVDD) is a group of diseases that adversely affect the passive phase of the cardiac cycle and can lead to heart failure. While left ventricular end-diastolic pressure (LVEDP) is a valuable prognostic measure in LVDD patients, traditional invasive methods of measuring LVEDP present risks and limitations, highlighting the need for alternative approaches. This paper investigates the possibility of measuring LVEDP non-invasively using inverse in-silico modeling. We propose the adoption of patient-specific cardiac modeling and simulation to estimate LVEDP and myocardial stiffness from cardiac strains. We have developed a high-fidelity patient-specific computational model of the left ventricle. Through an inverse modeling approach, myocardial stiffness and LVEDP were accurately estimated from cardiac strains that can be acquired from in vivo imaging, indicating the feasibility of computational modeling to augment current approaches in the measurement of ventricular pressure. Integration of such computational platforms into clinical practice holds promise for early detection and comprehensive assessment of LVDD with reduced risk for patients.
[ { "created": "Tue, 28 May 2024 16:41:21 GMT", "version": "v1" } ]
2024-05-29
[ [ "Mendiola", "Emilio A.", "" ], [ "Mehdi", "Raza Rana", "" ], [ "Shah", "Dipan J.", "" ], [ "Avazmohammadi", "Reza", "" ] ]
Left ventricular diastolic dysfunction (LVDD) is a group of diseases that adversely affect the passive phase of the cardiac cycle and can lead to heart failure. While left ventricular end-diastolic pressure (LVEDP) is a valuable prognostic measure in LVDD patients, traditional invasive methods of measuring LVEDP present risks and limitations, highlighting the need for alternative approaches. This paper investigates the possibility of measuring LVEDP non-invasively using inverse in-silico modeling. We propose the adoption of patient-specific cardiac modeling and simulation to estimate LVEDP and myocardial stiffness from cardiac strains. We have developed a high-fidelity patient-specific computational model of the left ventricle. Through an inverse modeling approach, myocardial stiffness and LVEDP were accurately estimated from cardiac strains that can be acquired from in vivo imaging, indicating the feasibility of computational modeling to augment current approaches in the measurement of ventricular pressure. Integration of such computational platforms into clinical practice holds promise for early detection and comprehensive assessment of LVDD with reduced risk for patients.
1410.5123
Giovanni Punzi
Maria Michela Del Viva, Giovanni Punzi
The brain as a trigger system
Presented by M. Del Viva at the Conference "Technology and Instrumentation in Particle Physics 2014" (TIPP 2014), June 2-6, 2014, Amsterdam, The Netherlands
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are significant analogies between the issues related to real-time event selection in HEP, and the issues faced by the human visual system. In fact, the visual system needs to extract rapidly the most important elements of the external world from a large flux of information, for survival purposes. A rapid and reliable detection of visual stimuli is essential for triggering autonomic responses to emotive stimuli, for initiating adaptive behaviors and for orienting towards potentially interesting/ dangerous stimuli. The speed of visual processing can be as fast as 20 ms, about only 20 times the duration of the elementary information exchanges by the action potential. The limitations to the brain capacity to process visual information, imposed by intrinsic energetic costs of neuronal activity, and ecological limits to the size of the skull, require a strong data reduction at an early stage, by creating a compact summary of relevant information, the so called "primal sketch", to be handled by further levels of processing. This is quite similar to the problem of experimental HEP of providing fast data reduction at a reasonable monetary cost, and with a practical device size. As a result of a joint effort of HEP physicists and practicing vision scientists, we recently proposed that not only the problems are similar, but the solutions adopted in the two cases also have strong similarities, and their parallel study can actually shed light on each other. Modeling the visual system as a trigger processor leads to a deeper understanding, and even very specific predictions of its functionality. Conversely, the insights gained from this new approach to vision, can lead to new ideas for enhancing the capabilities of artificial vision systems, and HEP trigger systems as well.
[ { "created": "Sun, 19 Oct 2014 23:03:57 GMT", "version": "v1" } ]
2014-10-21
[ [ "Del Viva", "Maria Michela", "" ], [ "Punzi", "Giovanni", "" ] ]
There are significant analogies between the issues related to real-time event selection in HEP, and the issues faced by the human visual system. In fact, the visual system needs to extract rapidly the most important elements of the external world from a large flux of information, for survival purposes. A rapid and reliable detection of visual stimuli is essential for triggering autonomic responses to emotive stimuli, for initiating adaptive behaviors and for orienting towards potentially interesting/ dangerous stimuli. The speed of visual processing can be as fast as 20 ms, about only 20 times the duration of the elementary information exchanges by the action potential. The limitations to the brain capacity to process visual information, imposed by intrinsic energetic costs of neuronal activity, and ecological limits to the size of the skull, require a strong data reduction at an early stage, by creating a compact summary of relevant information, the so called "primal sketch", to be handled by further levels of processing. This is quite similar to the problem of experimental HEP of providing fast data reduction at a reasonable monetary cost, and with a practical device size. As a result of a joint effort of HEP physicists and practicing vision scientists, we recently proposed that not only the problems are similar, but the solutions adopted in the two cases also have strong similarities, and their parallel study can actually shed light on each other. Modeling the visual system as a trigger processor leads to a deeper understanding, and even very specific predictions of its functionality. Conversely, the insights gained from this new approach to vision, can lead to new ideas for enhancing the capabilities of artificial vision systems, and HEP trigger systems as well.
1411.6772
Erwan Bigan
Erwan Bigan, St\'ephane Douady and Jean-Marc Steyaert
On necessary and sufficient conditions for proto-cell stationary growth
Fifth International Workshop on Static Analysis and Systems Biology (SASB 2014), Munich, Sept 10, 2014. To be published in Electronic Notes in Theoretical Computer Science
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a generic proto-cell model consisting of any conservative chemical reaction network embedded within a membrane. The membrane results from the self-assembly of one of the chemical species (membrane precursor) and is semi-permeable to some other chemical species (nutrients) diffusing from an outside growth medium into the proto-cell. Inside the proto-cell, nutrients are metabolized into all other chemical species including the membrane precursor, and the membrane grows in area and the proto-cell in volume. Investigating the conditions under which such a proto-cell may reach stationary growth, we prove that a simple necessary condition is that each moiety be fed with some nutrient flux; and that a sufficient condition for the existence of a stationary growth regime is that every siphon containing any species participating in the membrane precursor incorporation kinetics also contains the support of a moiety that is fed with some nutrient flux. These necessary and sufficient conditions hold regardless of chemical reaction kinetics, membrane parameters or nutrient flux diffusion characteristics.
[ { "created": "Tue, 25 Nov 2014 09:01:17 GMT", "version": "v1" } ]
2014-11-26
[ [ "Bigan", "Erwan", "" ], [ "Douady", "Stéphane", "" ], [ "Steyaert", "Jean-Marc", "" ] ]
We consider a generic proto-cell model consisting of any conservative chemical reaction network embedded within a membrane. The membrane results from the self-assembly of one of the chemical species (membrane precursor) and is semi-permeable to some other chemical species (nutrients) diffusing from an outside growth medium into the proto-cell. Inside the proto-cell, nutrients are metabolized into all other chemical species including the membrane precursor, and the membrane grows in area and the proto-cell in volume. Investigating the conditions under which such a proto-cell may reach stationary growth, we prove that a simple necessary condition is that each moiety be fed with some nutrient flux; and that a sufficient condition for the existence of a stationary growth regime is that every siphon containing any species participating in the membrane precursor incorporation kinetics also contains the support of a moiety that is fed with some nutrient flux. These necessary and sufficient conditions hold regardless of chemical reaction kinetics, membrane parameters or nutrient flux diffusion characteristics.
2006.05034
Erica Graham
E. J. Graham, N. Elhadad, D. Albers
Reduced model for female endocrine dynamics: Validation and functional variations
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A normally functioning menstrual cycle requires significant crosstalk between hormones originating in ovarian and brain tissues. Reproductive hormone dysregulation may cause abnormal function and sometimes infertility. The inherent complexity in this endocrine system is a challenge to identifying mechanisms of cycle disruption, particularly given the large number of unknown parameters in existing mathematical models. We develop a new endocrine model to limit model complexity and use simulated distributions of unknown parameters for model analysis. By employing a comprehensive model evaluation, we identify a collection of mechanisms that differentiate normal and abnormal phenotypes. We also discover an intermediate phenotype--displaying relatively normal hormone levels and cycle dynamics--that is grouped statistically with the irregular phenotype. Results provide insight into how clinical symptoms associated with ovulatory disruption may not be detected through hormone measurements alone.
[ { "created": "Tue, 9 Jun 2020 03:34:38 GMT", "version": "v1" }, { "created": "Sun, 28 Aug 2022 14:46:53 GMT", "version": "v2" } ]
2022-08-30
[ [ "Graham", "E. J.", "" ], [ "Elhadad", "N.", "" ], [ "Albers", "D.", "" ] ]
A normally functioning menstrual cycle requires significant crosstalk between hormones originating in ovarian and brain tissues. Reproductive hormone dysregulation may cause abnormal function and sometimes infertility. The inherent complexity in this endocrine system is a challenge to identifying mechanisms of cycle disruption, particularly given the large number of unknown parameters in existing mathematical models. We develop a new endocrine model to limit model complexity and use simulated distributions of unknown parameters for model analysis. By employing a comprehensive model evaluation, we identify a collection of mechanisms that differentiate normal and abnormal phenotypes. We also discover an intermediate phenotype--displaying relatively normal hormone levels and cycle dynamics--that is grouped statistically with the irregular phenotype. Results provide insight into how clinical symptoms associated with ovulatory disruption may not be detected through hormone measurements alone.