id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1609.04636 | Erik van Oort | Erik S.B. van Oort, Maarten Mennes, Tobias Navarro Schr\"oder, Vinod
J. Kumar, Nestor I. Zaragoza Jimenez, Wolfgang Grodd, Christian F. Doeller,
Christian F. Beckmann | Human brain parcellation using time courses of instantaneous
connectivity | null | null | null | null | q-bio.QM q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Functional neuroimaging studies have led to understanding the brain as a
collection of spatially segregated functional networks. It is thought that each
of these networks is in turn composed of a set of distinct sub-regions that
together support each network's function. Considering the sub-regions to be an
essential part of the brain's functional architecture, several strategies have
been put forward that aim at identifying the functional sub-units of the brain
by means of functional parcellations. Current parcellation strategies typically
employ a bottom-up strategy, creating a parcellation by clustering smaller
units. We propose a novel top-down parcellation strategy, using time courses of
instantaneous connectivity to subdivide an initial region of interest into
sub-regions. We use split-half reproducibility to choose the optimal number of
sub-regions. We apply our Instantaneous Connectivity Parcellation (ICP)
strategy on high-quality resting-state FMRI data, and demonstrate the ability
to generate parcellations for thalamus, entorhinal cortex, motor cortex, and
subcortex including brainstem and striatum. We evaluate the subdivisions
against available cytoarchitecture maps to show that our parcellation
strategy recovers biologically valid subdivisions that adhere to known
cytoarchitectural features.
| [
{
"created": "Thu, 15 Sep 2016 13:32:56 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Nov 2016 12:34:17 GMT",
"version": "v2"
}
] | 2016-11-22 | [
[
"van Oort",
"Erik S. B.",
""
],
[
"Mennes",
"Maarten",
""
],
[
"Schröder",
"Tobias Navarro",
""
],
[
"Kumar",
"Vinod J.",
""
],
[
"Jimenez",
"Nestor I. Zaragoza",
""
],
[
"Grodd",
"Wolfgang",
""
],
[
"Doeller",
"Christian F.",
""
],
[
"Beckmann",
"Christian F.",
""
]
] | Functional neuroimaging studies have led to understanding the brain as a collection of spatially segregated functional networks. It is thought that each of these networks is in turn composed of a set of distinct sub-regions that together support each network's function. Considering the sub-regions to be an essential part of the brain's functional architecture, several strategies have been put forward that aim at identifying the functional sub-units of the brain by means of functional parcellations. Current parcellation strategies typically employ a bottom-up strategy, creating a parcellation by clustering smaller units. We propose a novel top-down parcellation strategy, using time courses of instantaneous connectivity to subdivide an initial region of interest into sub-regions. We use split-half reproducibility to choose the optimal number of sub-regions. We apply our Instantaneous Connectivity Parcellation (ICP) strategy on high-quality resting-state FMRI data, and demonstrate the ability to generate parcellations for thalamus, entorhinal cortex, motor cortex, and subcortex including brainstem and striatum. We evaluate the subdivisions against available cytoarchitecture maps to show that our parcellation strategy recovers biologically valid subdivisions that adhere to known cytoarchitectural features. |
2402.00024 | Shaghayegh Sadeghi | Shaghayegh Sadeghi, Alan Bui, Ali Forooghi, Jianguo Lu, Alioune Ngom | Can Large Language Models Understand Molecules? | null | null | null | null | q-bio.BM cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Purpose: Large Language Models (LLMs) like GPT (Generative Pre-trained
Transformer) from OpenAI and LLaMA (Large Language Model Meta AI) from Meta AI
are increasingly recognized for their potential in the field of
cheminformatics, particularly in understanding Simplified Molecular Input Line
Entry System (SMILES), a standard method for representing chemical structures.
These LLMs also have the ability to decode SMILES strings into vector
representations.
Method: We investigate the performance of GPT and LLaMA, compared to models
pre-trained on SMILES, in embedding SMILES strings for downstream tasks,
focusing on two key applications: molecular property prediction and drug-drug
interaction prediction.
Results: We find that SMILES embeddings generated using LLaMA outperform
those from GPT in both molecular property and DDI prediction tasks. Notably,
LLaMA-based SMILES embeddings show results comparable to models pre-trained on
SMILES in molecular prediction tasks and outperform the pre-trained models for
the DDI prediction tasks.
Conclusion: The performance of LLMs in generating SMILES embeddings shows
great potential for further investigation of these models for molecular
embedding. We hope our study bridges the gap between LLMs and molecular
embedding, motivating additional research into the potential of LLMs in the
molecular representation field. GitHub:
https://github.com/sshaghayeghs/LLaMA-VS-GPT
| [
{
"created": "Fri, 5 Jan 2024 18:31:34 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Feb 2024 18:24:51 GMT",
"version": "v2"
},
{
"created": "Tue, 21 May 2024 03:40:19 GMT",
"version": "v3"
}
] | 2024-05-22 | [
[
"Sadeghi",
"Shaghayegh",
""
],
[
"Bui",
"Alan",
""
],
[
"Forooghi",
"Ali",
""
],
[
"Lu",
"Jianguo",
""
],
[
"Ngom",
"Alioune",
""
]
] | Purpose: Large Language Models (LLMs) like GPT (Generative Pre-trained Transformer) from OpenAI and LLaMA (Large Language Model Meta AI) from Meta AI are increasingly recognized for their potential in the field of cheminformatics, particularly in understanding Simplified Molecular Input Line Entry System (SMILES), a standard method for representing chemical structures. These LLMs also have the ability to decode SMILES strings into vector representations. Method: We investigate the performance of GPT and LLaMA, compared to models pre-trained on SMILES, in embedding SMILES strings for downstream tasks, focusing on two key applications: molecular property prediction and drug-drug interaction prediction. Results: We find that SMILES embeddings generated using LLaMA outperform those from GPT in both molecular property and DDI prediction tasks. Notably, LLaMA-based SMILES embeddings show results comparable to models pre-trained on SMILES in molecular prediction tasks and outperform the pre-trained models for the DDI prediction tasks. Conclusion: The performance of LLMs in generating SMILES embeddings shows great potential for further investigation of these models for molecular embedding. We hope our study bridges the gap between LLMs and molecular embedding, motivating additional research into the potential of LLMs in the molecular representation field. GitHub: https://github.com/sshaghayeghs/LLaMA-VS-GPT |
1612.00168 | Adam Martin-Schwarze | Adam Martin-Schwarze, Jarad Niemi, and Philip Dixon | Assessing the impacts of time to detection distribution assumptions on
detection probability estimation | Expands previous simulation study and adds simulations across a range
of `true' detection values | JABES (2017) 22: 465-480 | 10.1007/s13253-017-0300-y | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Abundance estimates from animal point-count surveys require accurate
estimates of detection probabilities. The standard model for estimating
detection from removal-sampled point-count surveys assumes that organisms at a
survey site are detected at a constant rate; however, this assumption is often
not justified. We consider a class of N-mixture models that allows for
detection heterogeneity over time through a flexibly defined time-to-detection
distribution (TTDD) and allows for fixed and random effects for both abundance
and detection. Our model is thus a combination of survival time-to-event
analysis with unknown-N, unknown-p abundance estimation. We specifically
explore two-parameter families of TTDDs, e.g. gamma, that can additionally
include a mixture component to model increased probability of detection in the
initial observation period. We find that modeling a TTDD by using a
two-parameter family is necessary when data have a chance of arising from a
distribution of this nature. In addition, models with a mixture component can
outperform non-mixture models even when the truth is non-mixture. Finally, we
analyze an Ovenbird data set from the Chippewa National Forest using mixed
effect models for both abundance and detection. We demonstrate that the effects
of explanatory variables on abundance and detection are consistent across
mixture TTDDs but that flexible TTDDs result in lower estimated probabilities
of detection and therefore higher estimates of abundance.
| [
{
"created": "Thu, 1 Dec 2016 08:04:07 GMT",
"version": "v1"
},
{
"created": "Fri, 19 May 2017 19:05:13 GMT",
"version": "v2"
}
] | 2019-04-09 | [
[
"Martin-Schwarze",
"Adam",
""
],
[
"Niemi",
"Jarad",
""
],
[
"Dixon",
"Philip",
""
]
] | Abundance estimates from animal point-count surveys require accurate estimates of detection probabilities. The standard model for estimating detection from removal-sampled point-count surveys assumes that organisms at a survey site are detected at a constant rate; however, this assumption is often not justified. We consider a class of N-mixture models that allows for detection heterogeneity over time through a flexibly defined time-to-detection distribution (TTDD) and allows for fixed and random effects for both abundance and detection. Our model is thus a combination of survival time-to-event analysis with unknown-N, unknown-p abundance estimation. We specifically explore two-parameter families of TTDDs, e.g. gamma, that can additionally include a mixture component to model increased probability of detection in the initial observation period. We find that modeling a TTDD by using a two-parameter family is necessary when data have a chance of arising from a distribution of this nature. In addition, models with a mixture component can outperform non-mixture models even when the truth is non-mixture. Finally, we analyze an Ovenbird data set from the Chippewa National Forest using mixed effect models for both abundance and detection. We demonstrate that the effects of explanatory variables on abundance and detection are consistent across mixture TTDDs but that flexible TTDDs result in lower estimated probabilities of detection and therefore higher estimates of abundance. |
2312.07286 | Pedro Carelli | Rafael M. Jungmann, Tha\'is Feliciano, Leandro A. A. Aguiar, Carina
Soares-Cunha, B\'arbara Coimbra, Ana Jo\~ao Rodrigues, Mauro Copelli,
Fernanda S. Matias, Nivaldo A. P. de Vasconcelos, Pedro V. Carelli | State-dependent complexity of the local field potential in the primary
visual cortex | null | null | null | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The local field potential (LFP) is a measure of the combined activity of
neurons within a region of brain tissue. While biophysical modeling schemes for
LFP in cortical circuits are well established, there is a paramount lack of
understanding regarding the LFP properties across the states assumed by cortical
circuits over long periods. Here we use a symbolic information approach to
determine the statistical complexity based on Jensen disequilibrium measure and
Shannon entropy of LFP data recorded from the primary visual cortex (V1) of
urethane-anesthetized rats and freely moving mice. Using these information
quantifiers, we find consistent relations between LFP recordings and measures
of cortical states at the neuronal level. More specifically, we show that LFP's
statistical complexity is sensitive to cortical state (characterized by spiking
variability), as well as to cortical layer. In addition, we apply these
quantifiers to characterize behavioral states of freely moving mice, where we
find indirect relations between such states and spiking variability.
| [
{
"created": "Tue, 12 Dec 2023 14:00:52 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Dec 2023 14:21:01 GMT",
"version": "v2"
}
] | 2023-12-14 | [
[
"Jungmann",
"Rafael M.",
""
],
[
"Feliciano",
"Thaís",
""
],
[
"Aguiar",
"Leandro A. A.",
""
],
[
"Soares-Cunha",
"Carina",
""
],
[
"Coimbra",
"Bárbara",
""
],
[
"Rodrigues",
"Ana João",
""
],
[
"Copelli",
"Mauro",
""
],
[
"Matias",
"Fernanda S.",
""
],
[
"de Vasconcelos",
"Nivaldo A. P.",
""
],
[
"Carelli",
"Pedro V.",
""
]
] | The local field potential (LFP) is a measure of the combined activity of neurons within a region of brain tissue. While biophysical modeling schemes for LFP in cortical circuits are well established, there is a paramount lack of understanding regarding the LFP properties across the states assumed by cortical circuits over long periods. Here we use a symbolic information approach to determine the statistical complexity based on Jensen disequilibrium measure and Shannon entropy of LFP data recorded from the primary visual cortex (V1) of urethane-anesthetized rats and freely moving mice. Using these information quantifiers, we find consistent relations between LFP recordings and measures of cortical states at the neuronal level. More specifically, we show that LFP's statistical complexity is sensitive to cortical state (characterized by spiking variability), as well as to cortical layer. In addition, we apply these quantifiers to characterize behavioral states of freely moving mice, where we find indirect relations between such states and spiking variability. |
2309.06540 | Natalie M. Isenberg | Natalie M. Isenberg, Susan D. Mertins, Byung-Jun Yoon, Kristofer
Reyes, Nathan M. Urban | Identifying Bayesian Optimal Experiments for Uncertain Biochemical
Pathway Models | null | null | null | null | q-bio.MN stat.AP stat.CO | http://creativecommons.org/licenses/by/4.0/ | Pharmacodynamic (PD) models are mathematical models of cellular reaction
networks that include drug mechanisms of action. These models are useful for
studying predictive therapeutic outcomes of novel drug therapies in silico.
However, PD models are known to possess significant uncertainty with respect to
constituent parameter data, leading to uncertainty in the model predictions.
Furthermore, experimental data to calibrate these models is often limited or
unavailable for novel pathways. In this study, we present a Bayesian optimal
experimental design approach for improving PD model prediction accuracy. We
then apply our method using simulated experimental data to account for
uncertainty in hypothetical laboratory measurements. This leads to a
probabilistic prediction of drug performance and a quantitative measure of
which prospective laboratory experiment will optimally reduce prediction
uncertainty in the PD model. The methods proposed here provide a way forward
for uncertainty quantification and guided experimental design for models of
novel biological pathways.
| [
{
"created": "Tue, 12 Sep 2023 19:28:26 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Sep 2023 17:55:33 GMT",
"version": "v2"
}
] | 2023-09-27 | [
[
"Isenberg",
"Natalie M.",
""
],
[
"Mertins",
"Susan D.",
""
],
[
"Yoon",
"Byung-Jun",
""
],
[
"Reyes",
"Kristofer",
""
],
[
"Urban",
"Nathan M.",
""
]
] | Pharmacodynamic (PD) models are mathematical models of cellular reaction networks that include drug mechanisms of action. These models are useful for studying predictive therapeutic outcomes of novel drug therapies in silico. However, PD models are known to possess significant uncertainty with respect to constituent parameter data, leading to uncertainty in the model predictions. Furthermore, experimental data to calibrate these models is often limited or unavailable for novel pathways. In this study, we present a Bayesian optimal experimental design approach for improving PD model prediction accuracy. We then apply our method using simulated experimental data to account for uncertainty in hypothetical laboratory measurements. This leads to a probabilistic prediction of drug performance and a quantitative measure of which prospective laboratory experiment will optimally reduce prediction uncertainty in the PD model. The methods proposed here provide a way forward for uncertainty quantification and guided experimental design for models of novel biological pathways. |
1307.1337 | Bernard Ycart | Bernard Ycart and Fr\'ed\'eric Pont and Jean-Jacques Fourni\'e | Statistical data mining for symbol associations in genomic databases | null | null | null | null | q-bio.GN q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A methodology is proposed to automatically detect significant symbol
associations in genomic databases. A new statistical test is proposed to assess
the significance of a group of symbols when found in several genesets of a
given database. Applied to symbol pairs, the thresholded p-values of the test
define a graph structure on the set of symbols. The cliques of that graph are
significant symbol associations, linked to a set of genesets where they can be
found. The method can be applied to any database, and is illustrated on the MSigDB C2
database. Many of the symbol associations detected in C2 or in non-specific
selections did correspond to already known interactions. On more specific
selections of C2, many previously unknown symbol associations have been
detected. These associations unveil new candidates for gene or protein
interactions, needing further investigation for biological evidence.
| [
{
"created": "Thu, 4 Jul 2013 14:10:11 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Sep 2013 15:24:46 GMT",
"version": "v2"
}
] | 2013-09-11 | [
[
"Ycart",
"Bernard",
""
],
[
"Pont",
"Frédéric",
""
],
[
"Fournié",
"Jean-Jacques",
""
]
] | A methodology is proposed to automatically detect significant symbol associations in genomic databases. A new statistical test is proposed to assess the significance of a group of symbols when found in several genesets of a given database. Applied to symbol pairs, the thresholded p-values of the test define a graph structure on the set of symbols. The cliques of that graph are significant symbol associations, linked to a set of genesets where they can be found. The method can be applied to any database, and is illustrated on the MSigDB C2 database. Many of the symbol associations detected in C2 or in non-specific selections did correspond to already known interactions. On more specific selections of C2, many previously unknown symbol associations have been detected. These associations unveil new candidates for gene or protein interactions, needing further investigation for biological evidence. |
1809.10504 | Alexander Ecker | Alexander S. Ecker, Fabian H. Sinz, Emmanouil Froudarakis, Paul G.
Fahey, Santiago A. Cadena, Edgar Y. Walker, Erick Cobos, Jacob Reimer,
Andreas S. Tolias, Matthias Bethge | A rotation-equivariant convolutional neural network model of primary
visual cortex | null | null | null | null | q-bio.NC cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classical models describe primary visual cortex (V1) as a filter bank of
orientation-selective linear-nonlinear (LN) or energy models, but these models
fail to predict neural responses to natural stimuli accurately. Recent work
shows that models based on convolutional neural networks (CNNs) lead to much
more accurate predictions, but it remains unclear which features are extracted
by V1 neurons beyond orientation selectivity and phase invariance. Here we work
towards systematically studying V1 computations by categorizing neurons into
groups that perform similar computations. We present a framework to identify
common features independent of individual neurons' orientation selectivity by
using a rotation-equivariant convolutional neural network, which automatically
extracts every feature at multiple different orientations. We fit this model to
responses of a population of 6000 neurons to natural images recorded in mouse
primary visual cortex using two-photon imaging. We show that our
rotation-equivariant network not only outperforms a regular CNN with the same
number of feature maps, but also reveals a number of common features shared by
many V1 neurons, which deviate from the typical textbook idea of V1 as a bank
of Gabor filters. Our findings are a first step towards a powerful new tool to
study the nonlinear computations in V1.
| [
{
"created": "Thu, 27 Sep 2018 13:16:37 GMT",
"version": "v1"
}
] | 2018-09-28 | [
[
"Ecker",
"Alexander S.",
""
],
[
"Sinz",
"Fabian H.",
""
],
[
"Froudarakis",
"Emmanouil",
""
],
[
"Fahey",
"Paul G.",
""
],
[
"Cadena",
"Santiago A.",
""
],
[
"Walker",
"Edgar Y.",
""
],
[
"Cobos",
"Erick",
""
],
[
"Reimer",
"Jacob",
""
],
[
"Tolias",
"Andreas S.",
""
],
[
"Bethge",
"Matthias",
""
]
] | Classical models describe primary visual cortex (V1) as a filter bank of orientation-selective linear-nonlinear (LN) or energy models, but these models fail to predict neural responses to natural stimuli accurately. Recent work shows that models based on convolutional neural networks (CNNs) lead to much more accurate predictions, but it remains unclear which features are extracted by V1 neurons beyond orientation selectivity and phase invariance. Here we work towards systematically studying V1 computations by categorizing neurons into groups that perform similar computations. We present a framework to identify common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations. We fit this model to responses of a population of 6000 neurons to natural images recorded in mouse primary visual cortex using two-photon imaging. We show that our rotation-equivariant network not only outperforms a regular CNN with the same number of feature maps, but also reveals a number of common features shared by many V1 neurons, which deviate from the typical textbook idea of V1 as a bank of Gabor filters. Our findings are a first step towards a powerful new tool to study the nonlinear computations in V1. |
2308.05118 | Daniel Um | Daniel H. Um, David A. Knowles, Gail E. Kaiser | Vector Embeddings by Sequence Similarity and Context for Improved
Compression, Similarity Search, Clustering, Organization, and Manipulation of
cDNA Libraries | 15 pages, 8 figures | null | null | null | q-bio.GN cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper demonstrates the utility of organized numerical representations of
genes in research involving flat string gene formats (i.e., FASTA/FASTQ5).
FASTA/FASTQ files have several current limitations, such as their large file
sizes, slow processing speeds for mapping and alignment, and contextual
dependencies. These challenges significantly hinder investigations and tasks
that involve finding similar sequences. The solution lies in transforming
sequences into an alternative representation that facilitates easier clustering
into similar groups compared to the raw sequences themselves. By assigning a
unique vector embedding to each short sequence, it is possible to more
efficiently cluster and improve upon compression performance for the string
representations of cDNA libraries. Furthermore, through learning alternative
coordinate vector embeddings based on the contexts of codon triplets, we can
demonstrate clustering based on amino acid properties. Finally, using this
sequence embedding method to encode barcodes and cDNA sequences, we can improve
the time complexity of the similarity search by coupling vector embeddings with
an algorithm that determines the proximity of vectors in Euclidean space; this
allows us to perform sequence similarity searches in a quicker and more modular
fashion.
| [
{
"created": "Tue, 8 Aug 2023 17:31:17 GMT",
"version": "v1"
}
] | 2023-08-11 | [
[
"Um",
"Daniel H.",
""
],
[
"Knowles",
"David A.",
""
],
[
"Kaiser",
"Gail E.",
""
]
] | This paper demonstrates the utility of organized numerical representations of genes in research involving flat string gene formats (i.e., FASTA/FASTQ5). FASTA/FASTQ files have several current limitations, such as their large file sizes, slow processing speeds for mapping and alignment, and contextual dependencies. These challenges significantly hinder investigations and tasks that involve finding similar sequences. The solution lies in transforming sequences into an alternative representation that facilitates easier clustering into similar groups compared to the raw sequences themselves. By assigning a unique vector embedding to each short sequence, it is possible to more efficiently cluster and improve upon compression performance for the string representations of cDNA libraries. Furthermore, through learning alternative coordinate vector embeddings based on the contexts of codon triplets, we can demonstrate clustering based on amino acid properties. Finally, using this sequence embedding method to encode barcodes and cDNA sequences, we can improve the time complexity of the similarity search by coupling vector embeddings with an algorithm that determines the proximity of vectors in Euclidean space; this allows us to perform sequence similarity searches in a quicker and more modular fashion. |
1809.01051 | Matthias Hennig | Matthias H. Hennig, Cole Hurwitz and Martino Sorbaro | Scaling Spike Detection and Sorting for Next Generation
Electrophysiology | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reliable spike detection and sorting, the process of assigning each detected
spike to its originating neuron, is an essential step in the analysis of
extracellular electrical recordings from neurons. The volume and complexity of
the data from recently developed large scale, high density microelectrode
arrays and probes, which allow recording from thousands of channels
simultaneously, substantially complicate this task conceptually and
computationally. This chapter provides a summary and discussion of recently
developed methods to tackle these challenges, and discusses the important
aspects of algorithm validation and the assessment of detection and sorting quality.
| [
{
"created": "Tue, 4 Sep 2018 15:42:00 GMT",
"version": "v1"
}
] | 2018-09-05 | [
[
"Hennig",
"Matthias H.",
""
],
[
"Hurwitz",
"Cole",
""
],
[
"Sorbaro",
"Martino",
""
]
] | Reliable spike detection and sorting, the process of assigning each detected spike to its originating neuron, is an essential step in the analysis of extracellular electrical recordings from neurons. The volume and complexity of the data from recently developed large scale, high density microelectrode arrays and probes, which allow recording from thousands of channels simultaneously, substantially complicate this task conceptually and computationally. This chapter provides a summary and discussion of recently developed methods to tackle these challenges, and discusses the important aspects of algorithm validation and the assessment of detection and sorting quality. |
2012.09948 | Lucas Flores | Lucas S. Flores and Heitor C. M. Fernandes and Marco A. Amaral and
Mendeli H. Vainstein | Symbiotic behaviour in the Public Goods game with altruistic punishment | null | null | 10.1016/j.jtbi.2021.110737 | null | q-bio.PE cond-mat.stat-mech physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | Finding ways to overcome the temptation to exploit one another is still a
challenge in behavioural sciences. In the framework of evolutionary game
theory, punishing strategies are frequently used to promote cooperation in
competitive environments. Here, we introduce altruistic punishers in the
spatial public goods game. This strategy acts as a cooperator in the absence of
defectors, otherwise it will punish all defectors in their vicinity while
bearing a cost to do so. We observe three distinct behaviours in our model: i)
in the absence of punishers, cooperators (who don't punish defectors) are
driven to extinction by defectors for most parameter values; ii) clusters of
punishers thrive by sharing the punishment costs when these are low; iii) for
higher punishment costs, punishers, when alone, are subject to exploitation but
in the presence of cooperators can form a symbiotic spatial structure that
benefits both. This last observation is our main finding since neither
cooperation nor punishment alone can survive the defector strategy in this
parameter region and the specificity of the symbiotic spatial configuration
shows that lattice topology plays a central role in sustaining cooperation.
Results were obtained by means of Monte Carlo simulations on a square lattice
and subsequently confirmed by a pairwise comparison of different strategies'
payoffs in diverse group compositions, leading to a phase diagram of the
possible states.
| [
{
"created": "Thu, 17 Dec 2020 21:55:56 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Apr 2021 16:57:13 GMT",
"version": "v2"
}
] | 2022-07-06 | [
[
"Flores",
"Lucas S.",
""
],
[
"Fernandes",
"Heitor C. M.",
""
],
[
"Amaral",
"Marco A.",
""
],
[
"Vainstein",
"Mendeli H.",
""
]
] | Finding ways to overcome the temptation to exploit one another is still a challenge in behavioural sciences. In the framework of evolutionary game theory, punishing strategies are frequently used to promote cooperation in competitive environments. Here, we introduce altruistic punishers in the spatial public goods game. This strategy acts as a cooperator in the absence of defectors, otherwise it will punish all defectors in their vicinity while bearing a cost to do so. We observe three distinct behaviours in our model: i) in the absence of punishers, cooperators (who don't punish defectors) are driven to extinction by defectors for most parameter values; ii) clusters of punishers thrive by sharing the punishment costs when these are low; iii) for higher punishment costs, punishers, when alone, are subject to exploitation but in the presence of cooperators can form a symbiotic spatial structure that benefits both. This last observation is our main finding since neither cooperation nor punishment alone can survive the defector strategy in this parameter region and the specificity of the symbiotic spatial configuration shows that lattice topology plays a central role in sustaining cooperation. Results were obtained by means of Monte Carlo simulations on a square lattice and subsequently confirmed by a pairwise comparison of different strategies' payoffs in diverse group compositions, leading to a phase diagram of the possible states. |
1302.3753 | Bart Haegeman | Bart Haegeman, J\'er\^ome Hamelin, John Moriarty, Peter Neal, Jonathan
Dushoff, Joshua S. Weitz | Robust estimation of microbial diversity in theory and in practice | To be published in The ISME Journal. Main text: 16 pages, 5 figures.
Supplement: 16 pages, 4 figures | ISME J. 7, 1092--1101 (2013) | 10.1038/ismej.2013.10 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantifying diversity is of central importance for the study of structure,
function and evolution of microbial communities. The estimation of microbial
diversity has received renewed attention with the advent of large-scale
metagenomic studies. Here, we consider what the diversity observed in a sample
tells us about the diversity of the community being sampled. First, we argue
that one cannot reliably estimate the absolute and relative number of microbial
species present in a community without making unsupported assumptions about
species abundance distributions. The reason for this is that sample data do not
contain information about the number of rare species in the tail of species
abundance distributions. We illustrate the difficulty in comparing species
richness estimates by applying Chao's estimator of species richness to a set of
in silico communities: they are ranked incorrectly in the presence of large
numbers of rare species. Next, we extend our analysis to a general family of
diversity metrics ("Hill diversities"), and construct lower and upper estimates
of diversity values consistent with the sample data. The theory generalizes
Chao's estimator, which we retrieve as the lower estimate of species richness.
We show that Shannon and Simpson diversity can be robustly estimated for the in
silico communities. We analyze nine metagenomic data sets from a wide range of
environments, and show that our findings are relevant for empirically-sampled
communities. Hence, we recommend the use of Shannon and Simpson diversity
rather than species richness in efforts to quantify and compare microbial
diversity.
| [
{
"created": "Fri, 15 Feb 2013 13:53:59 GMT",
"version": "v1"
},
{
"created": "Sat, 23 Feb 2013 07:51:23 GMT",
"version": "v2"
}
] | 2013-08-06 | [
[
"Haegeman",
"Bart",
""
],
[
"Hamelin",
"Jérôme",
""
],
[
"Moriarty",
"John",
""
],
[
"Neal",
"Peter",
""
],
[
"Dushoff",
"Jonathan",
""
],
[
"Weitz",
"Joshua S.",
""
]
] | Quantifying diversity is of central importance for the study of structure, function and evolution of microbial communities. The estimation of microbial diversity has received renewed attention with the advent of large-scale metagenomic studies. Here, we consider what the diversity observed in a sample tells us about the diversity of the community being sampled. First, we argue that one cannot reliably estimate the absolute and relative number of microbial species present in a community without making unsupported assumptions about species abundance distributions. The reason for this is that sample data do not contain information about the number of rare species in the tail of species abundance distributions. We illustrate the difficulty in comparing species richness estimates by applying Chao's estimator of species richness to a set of in silico communities: they are ranked incorrectly in the presence of large numbers of rare species. Next, we extend our analysis to a general family of diversity metrics ("Hill diversities"), and construct lower and upper estimates of diversity values consistent with the sample data. The theory generalizes Chao's estimator, which we retrieve as the lower estimate of species richness. We show that Shannon and Simpson diversity can be robustly estimated for the in silico communities. We analyze nine metagenomic data sets from a wide range of environments, and show that our findings are relevant for empirically-sampled communities. Hence, we recommend the use of Shannon and Simpson diversity rather than species richness in efforts to quantify and compare microbial diversity. |
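The quantities compared in the abstract above can be written down in a few lines. A minimal sketch (not the authors' code): the bias-corrected Chao1 lower bound on species richness, and Hill diversities of order q, whose q = 1 limit is the exponential of Shannon entropy and whose order-2 value is the inverse Simpson index.

```python
import math

def chao1(counts):
    """Bias-corrected Chao1 lower bound on species richness."""
    f1 = sum(1 for x in counts if x == 1)    # singletons
    f2 = sum(1 for x in counts if x == 2)    # doubletons
    s_obs = sum(1 for x in counts if x > 0)  # observed richness
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def hill(counts, q):
    """Hill diversity of order q from sample abundance counts."""
    n = sum(counts)
    p = [x / n for x in counts if x > 0]
    if q == 1:  # Shannon limit: exponential of Shannon entropy
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1 / (1 - q))
```

Setting q = 0 recovers observed species richness, which is why Chao's estimator reappears in the paper as the lower estimate of the order-0 Hill diversity; for a perfectly even community all Hill orders agree.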
2304.00256 | Suman Chakraborty | Suman Chakraborty and Sagar Chakraborty | Selection-recombination-mutation dynamics: Gradient, limit cycle, and
closed invariant curve | 10 pages, 1 figure | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, the replicator dynamics of the two-locus two-allele system
under weak mutation and weak selection is investigated in a generation-wise
non-overlapping unstructured population of individuals mating at random. Our
main finding is that the dynamics is gradient-like when the point mutations at
the two loci are independent. This is in stark contrast to the case of
one-locus multi-allele systems, where the existence of gradient behaviour is contingent on
a specific relationship between the mutation rates. When the mutations are not
independent in the two-locus two-allele system, there is the possibility of
non-convergent outcomes, like asymptotically stable oscillations, through
either the Hopf bifurcation or the Neimark--Sacker bifurcation depending on the
strength of the weak selection. The results can be straightforwardly extended
for multi-locus two-allele systems.
| [
{
"created": "Sat, 1 Apr 2023 08:13:15 GMT",
"version": "v1"
}
] | 2023-04-04 | [
[
"Chakraborty",
"Suman",
""
],
[
"Chakraborty",
"Sagar",
""
]
] | In this paper, the replicator dynamics of the two-locus two-allele system under weak mutation and weak selection is investigated in a generation-wise non-overlapping unstructured population of individuals mating at random. Our main finding is that the dynamics is gradient-like when the point mutations at the two loci are independent. This is in stark contrast to the case of one-locus multi-allele systems, where the existence of gradient behaviour is contingent on a specific relationship between the mutation rates. When the mutations are not independent in the two-locus two-allele system, there is the possibility of non-convergent outcomes, like asymptotically stable oscillations, through either the Hopf bifurcation or the Neimark--Sacker bifurcation depending on the strength of the weak selection. The results can be straightforwardly extended for multi-locus two-allele systems. |
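As orientation to the kind of dynamics studied in the abstract above, here is a much simpler one-locus two-allele selection-mutation map (discrete generations, symmetric point mutation). This is not the paper's two-locus system; the fitness values and mutation rate are illustrative.

```python
def step(x, w_a=1.02, w_b=1.0, mu=1e-3):
    """One generation of selection then symmetric point mutation for a
    single locus with alleles A and B; x is the frequency of A."""
    wbar = x * w_a + (1 - x) * w_b               # mean fitness
    x_sel = x * w_a / wbar                       # replicator (selection) step
    return (1 - mu) * x_sel + mu * (1 - x_sel)   # mutation step

x = 0.5
for _ in range(5000):
    x = step(x)
# x settles at a mutation-selection balance near, but below, fixation of A
```

In this one-dimensional case the dynamics is necessarily convergent; the paper's point is that in two dimensions (two loci), non-independent mutations can instead produce limit cycles and closed invariant curves.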
0903.1012 | Nail Khusnutdinov Mr | F. Gafarov, N. Khusnutdinov, and F. Galimyanov | The simulation of the activity dependent neural network growth | 10 pages, 2 figures | null | 10.1142/S0219635209002058 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is currently accepted that cortical maps are dynamic constructions that
are altered in response to external input. Experience-dependent structural
changes in cortical microcircuits lead to changes of activity, i.e. to changes
in information encoded. Specific patterns of external stimulation can lead to
creation of new synaptic connections between neurons. The calcium influxes
controlled by neuronal activity regulate the processes of neurotrophic factor
release by neurons, growth cone movement and synapse differentiation in
developing neural systems. We propose a model for description and investigation
of the activity dependent development of neural networks. The dynamics of the
network parameters (activity, diffusion of axon guidance chemicals, growth cone
position) is described by a closed set of differential equations. The model
presented here describes the development of neural networks under the
assumption of activity dependent axon guidance molecules. Numerical simulation
shows that morpholess neurons compromise the development of cortical
connectivity.
| [
{
"created": "Thu, 5 Mar 2009 14:48:10 GMT",
"version": "v1"
}
] | 2013-02-26 | [
[
"Gafarov",
"F.",
""
],
[
"Khusnutdinov",
"N.",
""
],
[
"Galimyanov",
"F.",
""
]
] | It is currently accepted that cortical maps are dynamic constructions that are altered in response to external input. Experience-dependent structural changes in cortical microcircuits lead to changes of activity, i.e. to changes in information encoded. Specific patterns of external stimulation can lead to creation of new synaptic connections between neurons. The calcium influxes controlled by neuronal activity regulate the processes of neurotrophic factor release by neurons, growth cone movement and synapse differentiation in developing neural systems. We propose a model for description and investigation of the activity dependent development of neural networks. The dynamics of the network parameters (activity, diffusion of axon guidance chemicals, growth cone position) is described by a closed set of differential equations. The model presented here describes the development of neural networks under the assumption of activity dependent axon guidance molecules. Numerical simulation shows that morpholess neurons compromise the development of cortical connectivity. |
1901.06105 | A. K. M. Azad | A. K. M. Azad, Fatemeh Vafaee | Single cell data explosion: Deep learning to the rescue | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The plethora of single-cell multi-omics data is getting treatment with deep
learning, a revolutionary method in artificial intelligence, which has been
increasingly expanding its reign over the bioscience frontiers.
| [
{
"created": "Fri, 18 Jan 2019 07:05:56 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Mar 2019 02:22:51 GMT",
"version": "v2"
}
] | 2019-03-12 | [
[
"Azad",
"A. K. M.",
""
],
[
"Vafaee",
"Fatemeh",
""
]
] | The plethora of single-cell multi-omics data is getting treatment with deep learning, a revolutionary method in artificial intelligence, which has been increasingly expanding its reign over the bioscience frontiers. |
q-bio/0412017 | Jesus Gomez-Gardenes | Luis A. Campos, Santiago Cuesta-Lopez, Jon Lopez-Llano, Fernando Falo,
Javier Sancho (BIFI-Universidad de Zaragoza) | A double-deletion method to quantify incremental binding energies in
proteins from experiment. Example of a destabilizing hydrogen bonding pair | 41 pages, To appear in Biophysical Journal (in press) | null | 10.1529/biophysj.104.050203 | null | q-bio.BM q-bio.QM | null | The contribution of a specific hydrogen bond in apoflavodoxin to protein
stability is investigated by combining theory, experiment and simulation.
Although hydrogen bonds are major determinants of protein structure and
function, their contribution to protein stability is still unclear and widely
debated. The best method so far devised to estimate the contribution of
side-chain interactions to protein stability is double-mutant-cycle analysis,
but the interaction energies so derived are not identical to incremental
binding energies (the energies quantifying net contributions of two interacting
groups to protein stability). Here we introduce double-deletion analysis of
isolated residue pairs as a means to precisely quantify incremental binding.
The method is exemplified by studying a surface-exposed hydrogen bond in a
model protein (Asp96/Asn128 in apoflavodoxin). Combined substitution of these
residues by alanines slightly destabilizes the protein, due to a decrease in
hydrophobic surface burial. Subtraction of this effect, however, clearly
indicates that the hydrogen-bonded groups in fact destabilize the native
conformation. In addition, Molecular Dynamics simulations and classic
double-mutant-cycle analysis explain quantitatively that, due to frustration,
the hydrogen bond must form in the native structure because, when the two
groups are brought close upon folding, their binding becomes favorable. We would
like to highlight two facts: that this is the first time the contribution of a
specific hydrogen bond to protein stability has been measured from experiment,
and that more hydrogen bonds need to be analyzed in order to draw general
conclusions on protein hydrogen bond energetics. To that end, the double
deletion method should be of help.
| [
{
"created": "Thu, 9 Dec 2004 18:48:49 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Campos",
"Luis A.",
"",
"BIFI-Universidad de Zaragoza"
],
[
"Cuesta-Lopez",
"Santiago",
"",
"BIFI-Universidad de Zaragoza"
],
[
"Lopez-Llano",
"Jon",
"",
"BIFI-Universidad de Zaragoza"
],
[
"Falo",
"Fernando",
"",
"BIFI-Universidad de Zaragoza"
],
[
"Sancho",
"Javier",
"",
"BIFI-Universidad de Zaragoza"
]
] | The contribution of a specific hydrogen bond in apoflavodoxin to protein stability is investigated by combining theory, experiment and simulation. Although hydrogen bonds are major determinants of protein structure and function, their contribution to protein stability is still unclear and widely debated. The best method so far devised to estimate the contribution of side-chain interactions to protein stability is double-mutant-cycle analysis, but the interaction energies so derived are not identical to incremental binding energies (the energies quantifying net contributions of two interacting groups to protein stability). Here we introduce double-deletion analysis of isolated residue pairs as a means to precisely quantify incremental binding. The method is exemplified by studying a surface-exposed hydrogen bond in a model protein (Asp96/Asn128 in apoflavodoxin). Combined substitution of these residues by alanines slightly destabilizes the protein, due to a decrease in hydrophobic surface burial. Subtraction of this effect, however, clearly indicates that the hydrogen-bonded groups in fact destabilize the native conformation. In addition, Molecular Dynamics simulations and classic double-mutant-cycle analysis explain quantitatively that, due to frustration, the hydrogen bond must form in the native structure because, when the two groups get approximated upon folding their binding becomes favorable. We would like to remark two facts: that this is the first time the contribution of a specific hydrogen bond to protein stability has been measured from experiment, and that more hydrogen bonds need to be analyzed in order to draw general conclusions on protein hydrogen bonds energetics. To that end, the double deletion method should be of help. |
2002.11532 | Arnaud Gissot | S\'ebastien Benizri (ARNA), Arnaud Gissot (ARNA), Andrew Martin, Brune
Vialet (ARNA), Mark Grinstaff (BU), Philippe Barth\'el\'emy (ARNA) | Bioconjugated oligonucleotides: recent developments and therapeutic
applications | null | Bioconjugate Chemistry, American Chemical Society, 2019, 30,
pp.366-383 | 10.1021/acs.bioconjchem.8b00761 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Oligonucleotide-based agents have the potential to treat or cure almost any
disease, and are one of the key therapeutic drug classes of the future.
Bioconjugated oligonucleotides, a subset of this class, are emerging from basic
research and being successfully translated to the clinic. In this review, we
first briefly describe two approaches for inhibiting specific genes using
oligonucleotides -- antisense DNA (ASO) and RNA interference (RNAi) -- followed
by a discussion on delivery to cells. We then summarize and analyze recent
developments in bioconjugated oligonucleotides including those possessing
GalNAc, cell penetrating peptides, $\alpha$-tocopherol, aptamers, antibodies,
cholesterol, squalene, fatty acids, or nucleolipids. These novel conjugates
provide a means to enhance tissue targeting, cell internalization, endosomal
escape, target binding specificity, resistance to nucleases, and more. We next
describe those bioconjugated oligonucleotides approved for patient use or in
clinical trials. Finally, we summarize the state of the field, describe current
limitations, and discuss future prospects. Bioconjugation chemistry is at the
centerpiece of this therapeutic oligonucleotide revolution, and significant
opportunities exist for development of new modification chemistries, for
mechanistic studies at the chemical-biology interface, and for translating such
agents to the clinic.
| [
{
"created": "Wed, 26 Feb 2020 14:35:06 GMT",
"version": "v1"
}
] | 2020-02-27 | [
[
"Benizri",
"Sébastien",
"",
"ARNA"
],
[
"Gissot",
"Arnaud",
"",
"ARNA"
],
[
"Martin",
"Andrew",
"",
"ARNA"
],
[
"Vialet",
"Brune",
"",
"ARNA"
],
[
"Grinstaff",
"Mark",
"",
"BU"
],
[
"Barthélémy",
"Philippe",
"",
"ARNA"
]
] | Oligonucleotide-based agents have the potential to treat or cure almost any disease, and are one of the key therapeutic drug classes of the future. Bioconjugated oligonucleotides, a subset of this class, are emerging from basic research and being successfully translated to the clinic. In this review, we first briefly describe two approaches for inhibiting specific genes using oligonucleotides -- antisense DNA (ASO) and RNA interference (RNAi) -- followed by a discussion on delivery to cells. We then summarize and analyze recent developments in bioconjugated oligonucleotides including those possessing GalNAc, cell penetrating peptides, $\alpha$-tocopherol, aptamers, antibodies, cholesterol, squalene, fatty acids, or nucleolipids. These novel conjugates provide a means to enhance tissue targeting, cell internalization, endosomal escape, target binding specificity, resistance to nucleases, and more. We next describe those bioconjugated oligonucleotides approved for patient use or in clinical trials. Finally, we summarize the state of the field, describe current limitations, and discuss future prospects. Bioconjugation chemistry is at the centerpiece of this therapeutic oligonucleotide revolution, and significant opportunities exist for development of new modification chemistries, for mechanistic studies at the chemical-biology interface, and for translating such agents to the clinic. |
2011.10372 | Umar Ahmad Dr. | Umar Ahmad, Buhari Ibrahim, Mustapha Mohammed, Ahmed Faris Aldoghachi,
Mahmood Usman, Abdulbasit Haliru Yakubu, Abubakar Sadiq Tanko, Khadijat
Abubakar Bobbo, Usman Adamu Garkuwa, Abdullahi Adamu Faggo, Sagir Mustapha,
Mahmoud Al-Masaeed, Syahril Abdullah, Yong Yoke Keong, Abhi Veerakumarasivam | Transcriptome profiling research in urothelial cell carcinoma | 47 pages, 2 figures, 2 Tables | null | null | null | q-bio.GN q-bio.QM q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Urothelial cell carcinoma (UCC) is the ninth most common cancer that accounts
for 4.7% of all the new cancer cases globally. UCC development and progression
are due to complex and stochastic genetic programmes. To study the cascades of
molecular events underlying the poor prognosis that may lead to limited
treatment options for advanced disease and resistance to conventional therapies
in UCC, transcriptomics technology (RNA-Seq), a method of analysing the RNA
content of a sample using modern high-throughput sequencing platforms, has been
employed. Here we review the principles of RNA-Seq technology and summarize
recent studies on human bladder cancer that employed this technique to unravel
the pathogenesis of the disease, identify biomarkers, discover pathways and
classify the disease state. We list the commonly used computational platforms
and software that are publicly available for RNA-Seq analysis. Moreover, we
discuss the future perspectives for RNA-Seq studies on bladder cancer and
recommend the application of a new technology called single-cell sequencing
(scRNA-Seq) to further understand the disease. Keywords: Transcriptome
profiling, RNA-sequencing, genomics, bioinformatics, bladder cancer
| [
{
"created": "Fri, 20 Nov 2020 12:22:50 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Nov 2020 08:50:58 GMT",
"version": "v2"
},
{
"created": "Sun, 29 Nov 2020 00:16:38 GMT",
"version": "v3"
},
{
"created": "Sun, 31 Jan 2021 21:27:43 GMT",
"version": "v4"
}
] | 2021-02-02 | [
[
"Ahmad",
"Umar",
""
],
[
"Ibrahim",
"Buhari",
""
],
[
"Mohammed",
"Mustapha",
""
],
[
"Aldoghachi",
"Ahmed Faris",
""
],
[
"Usman",
"Mahmood",
""
],
[
"Yakubu",
"Abdulbasit Haliru",
""
],
[
"Tanko",
"Abubakar Sadiq",
""
],
[
"Bobbo",
"Khadijat Abubakar",
""
],
[
"Garkuwa",
"Usman Adamu",
""
],
[
"Faggo",
"Abdullahi Adamu",
""
],
[
"Mustapha",
"Sagir",
""
],
[
"Al-Masaeed",
"Mahmoud",
""
],
[
"Abdullah",
"Syahril",
""
],
[
"Keong",
"Yong Yoke",
""
],
[
"Veerakumarasivam",
"Abhi",
""
]
] | Urothelial cell carcinoma (UCC) is the ninth most common cancer that accounts for 4.7% of all the new cancer cases globally. UCC development and progression are due to complex and stochastic genetic programmes. To study the cascades of molecular events underlying the poor prognosis that may lead to limited treatment options for advanced disease and resistance to conventional therapies in UCC, transcriptomics technology (RNA-Seq), a method of analysing the RNA content of a sample using modern high-throughput sequencing platforms has been employed. Here we review the principles of RNA-Seq technology and summarize recent studies on human bladder cancer that employed this technique to unravel the pathogenesis of the disease, identify biomarkers, discover pathways and classify the disease state. We list the commonly used computational platforms and software that are publicly available for RNA-Seq analysis. Moreover, we discussed the future perspectives for RNA-Seq studies on bladder cancer and recommend the application of new technology called single cell sequencing (scRNA-Seq) to further understand the disease. Keywords: Transcriptome profiling, RNA-sequencing, genomics, bioinformatics, bladder cancer |
q-bio/0309018 | Robert C. Hilborn | Robert C. Hilborn and Rebecca J. Erwin | Coherence resonance in models of an excitable neuron with both fast and
slow dynamics | null | null | 10.1016/j.physleta.2003.12.040 | null | q-bio.NC cond-mat.stat-mech q-bio.QM | null | We demonstrate the existence of noise-induced periodicity (coherence
resonance) in both a discrete-time model and a continuous-time model of an
excitable neuron. In particular, we show that the effects of noise added to the
fast and slow dynamics of the models are dramatically different. A
Fokker-Planck analysis gives a quantitative explanation of the effects.
| [
{
"created": "Sat, 27 Sep 2003 18:44:25 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Hilborn",
"Robert C.",
""
],
[
"Erwin",
"Rebecca J.",
""
]
] | We demonstrate the existence of noise-induced periodicity (coherence resonance) in both a discrete-time model and a continuous-time model of an excitable neuron. In particular, we show that the effects of noise added to the fast and slow dynamics of the models are dramatically different. A Fokker-Planck analysis gives a quantitative explanation of the effects. |
1903.12199 | Petr Sulc | Fan Hong, Petr \v{S}ulc | Strand displacement: a fundamental mechanism in RNA biology? | null | null | null | null | q-bio.BM physics.bio-ph q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DNA and RNA are generally regarded as central molecules in molecular biology.
Recent advancements in the field of DNA/RNA nanotechnology successfully used
DNA/RNA as programmable molecules to construct molecular machines and
nanostructures with predefined shapes and functions. The key mechanism for
dynamic control of the conformations of these DNA/RNA nanodevices is a reaction
called strand displacement, in which one strand in a formed duplex is replaced
by a third invading strand. While DNA/RNA strand displacement has mainly been
used to de-novo design molecular devices, we argue in this review that this
reaction is also likely to play a key role in multiple cellular events such as
gene recombination, CRISPR-based genome editing, and RNA cotranscriptional
folding. We introduce the general mechanism of strand displacement reaction,
give examples of its use in the construction of molecular machines, and finally
review natural processes with characteristics which suggest that strand
displacement is occurring.
| [
{
"created": "Thu, 28 Mar 2019 18:05:29 GMT",
"version": "v1"
}
] | 2019-04-01 | [
[
"Hong",
"Fan",
""
],
[
"Šulc",
"Petr",
""
]
] | DNA and RNA are generally regarded as central molecules in molecular biology. Recent advancements in the field of DNA/RNA nanotechnology successfully used DNA/RNA as programmable molecules to construct molecular machines and nanostructures with predefined shapes and functions. The key mechanism for dynamic control of the conformations of these DNA/RNA nanodevices is a reaction called strand displacement, in which one strand in a formed duplex is replaced by a third invading strand. While DNA/RNA strand displacement has mainly been used to de-novo design molecular devices, we argue in this review that this reaction is also likely to play a key role in multiple cellular events such as gene recombination, CRISPR-based genome editing, and RNA cotranscriptional folding. We introduce the general mechanism of strand displacement reaction, give examples of its use in the construction of molecular machines, and finally review natural processes with characteristics which suggest that strand displacement is occurring. |
1804.01722 | Feng Huang | Feng Huang, Xiaojie Chen, and Long Wang | Role of the effective payoff function in evolutionary game dynamics | This paper has been accepted to publish on EPL | EPL 124 40002 (2018) | 10.1209/0295-5075/124/40002 | null | q-bio.PE physics.bio-ph physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In most studies regarding evolutionary game dynamics, the effective payoff, a
quantity that translates the payoff derived from game interactions into
reproductive success, is usually assumed to be a specific function of the
payoff. Meanwhile, the effect of different functional forms of the effective
payoff on evolutionary dynamics is usually left unexamined. By introducing a
generalized mapping in which the effective payoff of an individual is a
non-negative function of two variables, the selection intensity and the
payoff, we study how
different effective payoff functions affect evolutionary dynamics in a
symmetrical mutation-selection process. For standard two-strategy two-player
games, we find that under weak selection the condition for one strategy to
dominate the other depends not only on the classical {\sigma}-rule, but also on
an extra constant that is determined by the form of the effective payoff
function. By changing the sign of the constant, we can alter the direction of
strategy selection. Taking the Moran process and pairwise comparison process as
specific models in well-mixed populations, we find that different fitness or
imitation mappings are equivalent under weak selection. Moreover, the sign of
the extra constant determines the direction of one-third law and risk-dominance
for sufficiently large populations. This work thus helps to elucidate how the
effective payoff function, as another fundamental ingredient of evolution, affects
evolutionary dynamics.
| [
{
"created": "Thu, 5 Apr 2018 08:15:41 GMT",
"version": "v1"
},
{
"created": "Sun, 8 Jul 2018 02:28:44 GMT",
"version": "v2"
},
{
"created": "Tue, 7 Aug 2018 08:21:20 GMT",
"version": "v3"
},
{
"created": "Sat, 8 Dec 2018 05:21:21 GMT",
"version": "v4"
}
] | 2021-05-18 | [
[
"Huang",
"Feng",
""
],
[
"Chen",
"Xiaojie",
""
],
[
"Wang",
"Long",
""
]
] | In most studies regarding evolutionary game dynamics, the effective payoff, a quantity that translates the payoff derived from game interactions into reproductive success, is usually assumed to be a specific function of the payoff. Meanwhile, the effect of different functional forms of the effective payoff on evolutionary dynamics is usually left unexamined. By introducing a generalized mapping in which the effective payoff of an individual is a non-negative function of two variables, the selection intensity and the payoff, we study how different effective payoff functions affect evolutionary dynamics in a symmetrical mutation-selection process. For standard two-strategy two-player games, we find that under weak selection the condition for one strategy to dominate the other depends not only on the classical {\sigma}-rule, but also on an extra constant that is determined by the form of the effective payoff function. By changing the sign of the constant, we can alter the direction of strategy selection. Taking the Moran process and pairwise comparison process as specific models in well-mixed populations, we find that different fitness or imitation mappings are equivalent under weak selection. Moreover, the sign of the extra constant determines the direction of one-third law and risk-dominance for sufficiently large populations. This work thus helps to elucidate how the effective payoff function, as another fundamental ingredient of evolution, affects evolutionary dynamics. |
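The weak-selection equivalence of mappings noted in the abstract above can be illustrated numerically. A sketch under assumptions (not the paper's notation): a pairwise comparison process in which a player copies its neighbour with probability proportional to the neighbour's effective payoff, comparing an exponential and a linear effective-payoff function.

```python
import math

def imitate_prob(pi_self, pi_other, delta, mapping="exp"):
    """Probability of copying the neighbour in a pairwise comparison
    process, for two illustrative effective-payoff functions F."""
    if mapping == "exp":  # F = exp(delta * payoff)
        f_self, f_other = math.exp(delta * pi_self), math.exp(delta * pi_other)
    else:                 # F = 1 + delta * payoff
        f_self, f_other = 1 + delta * pi_self, 1 + delta * pi_other
    return f_other / (f_self + f_other)

# Weak selection: the two mappings give almost the same update rule.
weak_gap = abs(imitate_prob(1.0, 2.0, 0.01, "exp") - imitate_prob(1.0, 2.0, 0.01, "lin"))
# Strong selection: the choice of mapping matters.
strong_gap = abs(imitate_prob(1.0, 2.0, 5.0, "exp") - imitate_prob(1.0, 2.0, 5.0, "lin"))
```

With the exponential mapping this reduces to the familiar Fermi rule; the abstract's point is that at weak selection such mapping choices collapse onto the same dynamics up to a mapping-dependent constant, while away from weak selection they diverge.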
1207.3211 | Andrey Dovzhenok | A. Dovzhenok, A. S. Kuznetsov | Exploring Neuronal Bistability at the Depolarization Block | 26 pages, 8 figures, accepted to PLoS ONE | (2012) PLoS ONE 7(8): e42811 | 10.1371/journal.pone.0042811 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many neurons display bistability - coexistence of two firing modes such as
bursting and tonic spiking or tonic spiking and silence. Bistability has been
proposed to endow neurons with richer forms of information processing in
general and to be involved in short-term memory in particular by allowing a
brief signal to elicit long-lasting changes in firing. In this paper, we focus
on bistability that allows for a choice between tonic spiking and
depolarization block in a wide range of the depolarization levels. We consider
the spike-producing currents in two neurons whose models differ in their
parameter values. Our dopaminergic neuron model displays bistability in a wide
range of applied currents at the depolarization block. The Hodgkin-Huxley model
of the squid giant axon shows no bistability. We varied parameter values for
the model to analyze transitions between the two parameter sets. We show that
bistability primarily characterizes the inactivation of the Na+ current. Our
study suggests a connection between the amount of the Na+ window current and
the length of the bistability range. For the dopaminergic neuron we hypothesize
that bistability can be linked to a prolonged action of antipsychotic drugs.
| [
{
"created": "Fri, 13 Jul 2012 11:59:57 GMT",
"version": "v1"
}
] | 2012-08-14 | [
[
"Dovzhenok",
"A.",
""
],
[
"Kuznetsov",
"A. S.",
""
]
] | Many neurons display bistability - coexistence of two firing modes such as bursting and tonic spiking or tonic spiking and silence. Bistability has been proposed to endow neurons with richer forms of information processing in general and to be involved in short-term memory in particular by allowing a brief signal to elicit long-lasting changes in firing. In this paper, we focus on bistability that allows for a choice between tonic spiking and depolarization block in a wide range of the depolarization levels. We consider the spike-producing currents in two neurons, models of which differ by the parameter values. Our dopaminergic neuron model displays bistability in a wide range of applied currents at the depolarization block. The Hodgkin-Huxley model of the squid giant axon shows no bistability. We varied parameter values for the model to analyze transitions between the two parameter sets. We show that bistability primarily characterizes the inactivation of the Na+ current. Our study suggests a connection between the amount of the Na+ window current and the length of the bistability range. For the dopaminergic neuron we hypothesize that bistability can be linked to a prolonged action of antipsychotic drugs. |
2209.11730 | Claudia Solis-Lemus | Samuel Ozminkowski, Yuke Wu, Liule Yang, Zhiwen Xu, Luke Selberg,
Chunrong Huang, Claudia Solis-Lemus | BioKlustering: a web app for semi-supervised learning of maximally
imbalanced genomic data | null | null | null | null | q-bio.GN stat.AP | http://creativecommons.org/licenses/by/4.0/ | Summary: Accurate phenotype prediction from genomic sequences is a highly
coveted task in biological and medical research. While machine-learning holds
the key to accurate prediction in a variety of fields, the complexity of
biological data can render many methodologies inapplicable. We introduce
BioKlustering, a user-friendly open-source and publicly available web app for
unsupervised and semi-supervised learning specialized for cases when sequence
alignment and/or experimental phenotyping of all classes are not possible.
Among its main advantages, BioKlustering 1) allows for maximally imbalanced
settings of partially observed labels including cases when only one class is
observed, which is currently prohibited in most semi-supervised methods, 2)
takes unaligned sequences as input and thus, allows learning for widely diverse
sequences (impossible to align) such as viruses and bacteria, 3) is easy to use
for anyone with little or no programming expertise, and 4) works well with
small sample sizes.
Availability and Implementation: BioKlustering
(https://bioklustering.wid.wisc.edu) is a freely available web app implemented
with Django, a Python-based framework, with all major browsers supported. The
web app does not need any installation, and it is publicly available and
open-source (https://github.com/solislemuslab/bioklustering).
| [
{
"created": "Fri, 23 Sep 2022 17:23:59 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Sep 2022 21:18:42 GMT",
"version": "v2"
}
] | 2022-09-28 | [
[
"Ozminkowski",
"Samuel",
""
],
[
"Wu",
"Yuke",
""
],
[
"Yang",
"Liule",
""
],
[
"Xu",
"Zhiwen",
""
],
[
"Selberg",
"Luke",
""
],
[
"Huang",
"Chunrong",
""
],
[
"Solis-Lemus",
"Claudia",
""
]
] | Summary: Accurate phenotype prediction from genomic sequences is a highly coveted task in biological and medical research. While machine learning holds the key to accurate prediction in a variety of fields, the complexity of biological data can render many methodologies inapplicable. We introduce BioKlustering, a user-friendly open-source and publicly available web app for unsupervised and semi-supervised learning specialized for cases when sequence alignment and/or experimental phenotyping of all classes are not possible. Among its main advantages, BioKlustering 1) allows for maximally imbalanced settings of partially observed labels including cases when only one class is observed, which is currently prohibited in most semi-supervised methods, 2) takes unaligned sequences as input and thus allows learning for widely diverse sequences (impossible to align) such as viruses and bacteria, 3) is easy to use for anyone with little or no programming expertise, and 4) works well with small sample sizes. Availability and Implementation: BioKlustering (https://bioklustering.wid.wisc.edu) is a freely available web app implemented with Django, a Python-based framework, with all major browsers supported. The web app does not need any installation, and it is publicly available and open-source (https://github.com/solislemuslab/bioklustering). |
1509.00171 | Yuri A. Dabaghian | A. Babichev, S. Cheng and Yu. Dabaghian | Topological schemas of cognitive maps and spatial learning in the
hippocampus | 17 pages, 11 figures, Frontiers in Computational Neuroscience, 2016 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatial navigation in mammals is based on building a mental representation of
their environment---a cognitive map. However, both the nature of this cognitive
map and its underpinning in neural structures and activity remain vague. A key
difficulty is that these maps are collective, emergent phenomena that cannot be
reduced to a simple combination of inputs provided by individual neurons. In
this paper we suggest computational frameworks for integrating the spiking
signals of individual cells into a spatial map, which we call schemas. We
provide examples of four schemas defined by different types of topological
relations that may be neurophysiologically encoded in the brain and demonstrate
that each schema provides its own large-scale characteristics of the
environment---the schema integrals. Moreover, we find that, in all cases, these
integrals are learned at a rate which is faster than the rate of complete
training of neural networks. Thus, the proposed schema framework differentiates
between the cognitive aspect of spatial learning and the physiological aspect
at the neural network level.
| [
{
"created": "Tue, 1 Sep 2015 08:13:59 GMT",
"version": "v1"
},
{
"created": "Sun, 20 Mar 2016 19:33:00 GMT",
"version": "v2"
}
] | 2016-03-22 | [
[
"Babichev",
"A.",
""
],
[
"Cheng",
"S.",
""
],
[
"Dabaghian",
"Yu.",
""
]
] | Spatial navigation in mammals is based on building a mental representation of their environment---a cognitive map. However, both the nature of this cognitive map and its underpinning in neural structures and activity remain vague. A key difficulty is that these maps are collective, emergent phenomena that cannot be reduced to a simple combination of inputs provided by individual neurons. In this paper we suggest computational frameworks for integrating the spiking signals of individual cells into a spatial map, which we call schemas. We provide examples of four schemas defined by different types of topological relations that may be neurophysiologically encoded in the brain and demonstrate that each schema provides its own large-scale characteristics of the environment---the schema integrals. Moreover, we find that, in all cases, these integrals are learned at a rate which is faster than the rate of complete training of neural networks. Thus, the proposed schema framework differentiates between the cognitive aspect of spatial learning and the physiological aspect at the neural network level. |
2311.08269 | Nodar Gogoberidze | Nodar Gogoberidze, Beth A. Cimini | Defining the boundaries: challenges and advances in identifying cells in
microscopy images | 12 pages, 1 figure, submitted to "Current Opinion in Biotechnology" | null | 10.1016/j.copbio.2023.103055 | null | q-bio.QM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Segmentation, or the outlining of objects within images, is a critical step
in the measurement and analysis of cells within microscopy images. While
improvements continue to be made in tools that rely on classical methods for
segmentation, deep learning-based tools increasingly dominate advances in the
technology. Specialist models such as Cellpose continue to improve in accuracy
and user-friendliness, and segmentation challenges such as the Multi-Modality
Cell Segmentation Challenge continue to push innovation in accuracy across
widely-varying test data as well as efficiency and usability. Increased
attention to documentation, sharing, and evaluation standards is leading to
increased user-friendliness and acceleration towards the goal of a truly
universal method.
| [
{
"created": "Tue, 14 Nov 2023 16:02:18 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Nov 2023 17:18:44 GMT",
"version": "v2"
}
] | 2024-03-15 | [
[
"Gogoberidze",
"Nodar",
""
],
[
"Cimini",
"Beth A.",
""
]
] | Segmentation, or the outlining of objects within images, is a critical step in the measurement and analysis of cells within microscopy images. While improvements continue to be made in tools that rely on classical methods for segmentation, deep learning-based tools increasingly dominate advances in the technology. Specialist models such as Cellpose continue to improve in accuracy and user-friendliness, and segmentation challenges such as the Multi-Modality Cell Segmentation Challenge continue to push innovation in accuracy across widely-varying test data as well as efficiency and usability. Increased attention to documentation, sharing, and evaluation standards is leading to increased user-friendliness and acceleration towards the goal of a truly universal method. |
1309.0926 | Mike Steel Prof. | Olivier Gascuel and Mike Steel | Predicting the ancestral character changes in a tree is typically easier
than predicting the root state | 58 pages, 4 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the ancestral sequences of a group of homologous sequences related
by a phylogenetic tree has been the subject of many studies, and numerous
methods have been proposed for this purpose. Theoretical results are available
that show that when the mutation rate becomes too large, reconstructing the
ancestral state at the tree root is no longer feasible. Here, we also study the
reconstruction of the ancestral changes that occurred along the tree edges. We
show that, depending on the tree and branch length distribution, reconstructing
these changes (i.e. reconstructing the ancestral state of all internal nodes in
the tree) may be easier or harder than reconstructing the ancestral root state.
However, results from information theory indicate that for the standard Yule
tree, the task of reconstructing internal node states remains feasible, even
for very high substitution rates. Moreover, computer simulations demonstrate
that for more complex trees and scenarios, this result still holds. For a large
variety of counting, parsimony-based and likelihood-based methods, the
predictive accuracy of a randomly selected internal node in the tree is indeed
much higher than the accuracy of the same method when applied to the tree root.
Moreover, parsimony- and likelihood-based methods appear to be remarkably
robust to sampling bias and model mis-specification.
| [
{
"created": "Wed, 4 Sep 2013 06:36:01 GMT",
"version": "v1"
}
] | 2013-09-05 | [
[
"Gascuel",
"Olivier",
""
],
[
"Steel",
"Mike",
""
]
] | Predicting the ancestral sequences of a group of homologous sequences related by a phylogenetic tree has been the subject of many studies, and numerous methods have been proposed for this purpose. Theoretical results are available that show that when the mutation rate becomes too large, reconstructing the ancestral state at the tree root is no longer feasible. Here, we also study the reconstruction of the ancestral changes that occurred along the tree edges. We show that, depending on the tree and branch length distribution, reconstructing these changes (i.e. reconstructing the ancestral state of all internal nodes in the tree) may be easier or harder than reconstructing the ancestral root state. However, results from information theory indicate that for the standard Yule tree, the task of reconstructing internal node states remains feasible, even for very high substitution rates. Moreover, computer simulations demonstrate that for more complex trees and scenarios, this result still holds. For a large variety of counting, parsimony-based and likelihood-based methods, the predictive accuracy of a randomly selected internal node in the tree is indeed much higher than the accuracy of the same method when applied to the tree root. Moreover, parsimony- and likelihood-based methods appear to be remarkably robust to sampling bias and model mis-specification. |
1612.03541 | Michael Lydeamore | M. Lydeamore, P. T. Campbell, D. G. Regan, S. Y. C. Tong, R. Andrews,
A. C. Steer, L. Romani, J. M. Kaldor, J. McVernon, J. M. McCaw | A biological model of scabies infection dynamics and treatment explains
why mass drug administration does not lead to elimination | 22 pages, 10 figures | null | 10.1016/j.mbs.2018.08.007 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite a low global prevalence, infections with Sarcoptes scabiei, or
scabies, are still common in remote communities such as in northern Australia
and the Solomon Islands. Mass drug administration (MDA) has been utilised in
these communities, and although prevalence drops substantially initially, these
reductions have not been sustained. We develop a compartmental model of scabies
infection dynamics and incorporate both ovicidal and non-ovicidal treatment
regimes. By including the dynamics of mass drug administration, we are able to
reproduce the phenomenon of an initial reduction in prevalence, followed by the
recrudescence of infection levels in the population. We show that even under a
`perfect' two-round MDA, eradication of scabies under a non-ovicidal treatment
scheme is almost impossible. We then go on to consider how the probability of
elimination varies with the number of treatment rounds delivered in an MDA. We
find that even with infeasibly large numbers of treatment rounds, elimination
remains challenging.
| [
{
"created": "Mon, 12 Dec 2016 04:32:53 GMT",
"version": "v1"
}
] | 2018-11-26 | [
[
"Lydeamore",
"M.",
""
],
[
"Campbell",
"P. T.",
""
],
[
"Regan",
"D. G.",
""
],
[
"Tong",
"S. Y. C.",
""
],
[
"Andrews",
"R.",
""
],
[
"Steer",
"A. C.",
""
],
[
"Romani",
"L.",
""
],
[
"Kaldor",
"J. M.",
""
],
[
"McVernon",
"J.",
""
],
[
"McCaw",
"J. M.",
""
]
] | Despite a low global prevalence, infections with Sarcoptes scabiei, or scabies, are still common in remote communities such as in northern Australia and the Solomon Islands. Mass drug administration (MDA) has been utilised in these communities, and although prevalence drops substantially initially, these reductions have not been sustained. We develop a compartmental model of scabies infection dynamics and incorporate both ovicidal and non-ovicidal treatment regimes. By including the dynamics of mass drug administration, we are able to reproduce the phenomenon of an initial reduction in prevalence, followed by the recrudescence of infection levels in the population. We show that even under a `perfect' two-round MDA, eradication of scabies under a non-ovicidal treatment scheme is almost impossible. We then go on to consider how the probability of elimination varies with the number of treatment rounds delivered in an MDA. We find that even with infeasibly large numbers of treatment rounds, elimination remains challenging. |
1407.3622 | Kunihiko Kaneko | Kunihiko Kaneko, Chikara Furusawa, and Tetsuya Yomo | Universal relationship in gene-expression changes for cells in
steady-growth state | 7 pages (5 figures) + 2 Supplementary pages (figures) | null | null | null | q-bio.CB q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cells adapt to different conditions by altering a vast number of components,
which is measurable using transcriptome analysis. Given that a cell undergoing
steady growth is constrained to sustain each of its internal components, the
abundance of all the components in the cell has to be roughly doubled during
each cell division event. From this steady-growth constraint, expression of all
genes is shown to change along a one-parameter curve in the state space in
response to the environmental stress. This leads to a global relationship that
governs the cellular state: By considering a relatively moderate change around
a steady state, logarithmic changes in expression are shown to be proportional
across all genes, upon alteration of stress strength, with the proportionality
coefficient given by the change in the growth rate of the cell. This theory is
confirmed by transcriptome analysis of Escherichia coli in response to several
stresses.
| [
{
"created": "Mon, 14 Jul 2014 12:31:58 GMT",
"version": "v1"
}
] | 2014-07-15 | [
[
"Kaneko",
"Kunihiko",
""
],
[
"Furusawa",
"Chikara",
""
],
[
"Yomo",
"Tetsuya",
""
]
] | Cells adapt to different conditions by altering a vast number of components, which is measurable using transcriptome analysis. Given that a cell undergoing steady growth is constrained to sustain each of its internal components, the abundance of all the components in the cell has to be roughly doubled during each cell division event. From this steady-growth constraint, expression of all genes is shown to change along a one-parameter curve in the state space in response to the environmental stress. This leads to a global relationship that governs the cellular state: By considering a relatively moderate change around a steady state, logarithmic changes in expression are shown to be proportional across all genes, upon alteration of stress strength, with the proportionality coefficient given by the change in the growth rate of the cell. This theory is confirmed by transcriptome analysis of Escherichia coli in response to several stresses. |
2303.04607 | Jer\'onimo Fotin\'os | Jer\'onimo Fotin\'os, Lucas Barberis, Carlos A. Condat | Effects of a Differentiating Therapy on Cancer-Stem-Cell-Driven Tumors | 21 pages, 10 figures (17 images) | null | null | null | q-bio.TO physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | The growth of many solid tumors has been found to be driven by chemo- and
radiotherapy-resistant cancer stem cells (CSCs). A suitable therapeutic avenue
in these cases may involve the use of a differentiating agent (DA) to force the
differentiation of the CSCs and of conventional therapies to eliminate the
remaining differentiated cancer cells (DCCs). To describe the effects of a DA
that reprograms CSCs into DCCs, we adapt a differential equation model
developed to investigate tumorspheres, which are assumed to consist of jointly
evolving CSC and DCC populations. We analyze the mathematical properties of the
model, finding the equilibria and their stability. We also present numerical
solutions and phase diagrams to describe the system evolution and the therapy
effects, denoting the DA strength by a parameter \(a_{dif}\). To obtain
realistic predictions, we choose the other model parameters to be those
determined previously from fits to various experimental datasets. These
datasets characterize the progression of the tumor under various culture
conditions. Typically, for small values of \(a_{dif}\) the tumor evolves
towards a final state that contains a CSC fraction, but a strong therapy leads
to the suppression of this phenotype. Nonetheless, different external
conditions lead to very diverse behaviors. For some environmental conditions,
the model predicts a threshold not only in the therapy strength, but also in
its starting time, an early beginning being potentially crucial. In summary,
our model shows how the effects of a DA depend critically not only on the
dosage and timing of the drug application, but also on the tumor nature and its
environment.
| [
{
"created": "Wed, 8 Mar 2023 14:19:55 GMT",
"version": "v1"
}
] | 2023-03-09 | [
[
"Fotinós",
"Jerónimo",
""
],
[
"Barberis",
"Lucas",
""
],
[
"Condat",
"Carlos A.",
""
]
] | The growth of many solid tumors has been found to be driven by chemo- and radiotherapy-resistant cancer stem cells (CSCs). A suitable therapeutic avenue in these cases may involve the use of a differentiating agent (DA) to force the differentiation of the CSCs and of conventional therapies to eliminate the remaining differentiated cancer cells (DCCs). To describe the effects of a DA that reprograms CSCs into DCCs, we adapt a differential equation model developed to investigate tumorspheres, which are assumed to consist of jointly evolving CSC and DCC populations. We analyze the mathematical properties of the model, finding the equilibria and their stability. We also present numerical solutions and phase diagrams to describe the system evolution and the therapy effects, denoting the DA strength by a parameter \(a_{dif}\). To obtain realistic predictions, we choose the other model parameters to be those determined previously from fits to various experimental datasets. These datasets characterize the progression of the tumor under various culture conditions. Typically, for small values of \(a_{dif}\) the tumor evolves towards a final state that contains a CSC fraction, but a strong therapy leads to the suppression of this phenotype. Nonetheless, different external conditions lead to very diverse behaviors. For some environmental conditions, the model predicts a threshold not only in the therapy strength, but also in its starting time, an early beginning being potentially crucial. In summary, our model shows how the effects of a DA depend critically not only on the dosage and timing of the drug application, but also on the tumor nature and its environment. |
2207.04869 | Zhichun Guo | Zhichun Guo, Kehan Guo, Bozhao Nan, Yijun Tian, Roshni G. Iyer, Yihong
Ma, Olaf Wiest, Xiangliang Zhang, Wei Wang, Chuxu Zhang, Nitesh V. Chawla | Graph-based Molecular Representation Learning | null | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecular representation learning (MRL) is a key step to build the connection
between machine learning and chemical science. In particular, it encodes
molecules as numerical vectors preserving the molecular structures and
features, on top of which the downstream tasks (e.g., property prediction) can
be performed. Recently, MRL has achieved considerable progress, especially in
methods based on deep molecular graph learning. In this survey, we
systematically review these graph-based molecular representation techniques,
especially the methods incorporating chemical domain knowledge. Specifically,
we first introduce the features of 2D and 3D molecular graphs. Then we
summarize and categorize MRL methods into three groups based on their input.
Furthermore, we discuss some typical chemical applications supported by MRL. To
facilitate studies in this fast-developing area, we also list the benchmarks
and commonly used datasets in the paper. Finally, we share our thoughts on
future research directions.
| [
{
"created": "Fri, 8 Jul 2022 17:43:20 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Apr 2023 13:28:06 GMT",
"version": "v2"
},
{
"created": "Wed, 29 Nov 2023 03:16:59 GMT",
"version": "v3"
}
] | 2023-11-30 | [
[
"Guo",
"Zhichun",
""
],
[
"Guo",
"Kehan",
""
],
[
"Nan",
"Bozhao",
""
],
[
"Tian",
"Yijun",
""
],
[
"Iyer",
"Roshni G.",
""
],
[
"Ma",
"Yihong",
""
],
[
"Wiest",
"Olaf",
""
],
[
"Zhang",
"Xiangliang",
""
],
[
"Wang",
"Wei",
""
],
[
"Zhang",
"Chuxu",
""
],
[
"Chawla",
"Nitesh V.",
""
]
] | Molecular representation learning (MRL) is a key step to build the connection between machine learning and chemical science. In particular, it encodes molecules as numerical vectors preserving the molecular structures and features, on top of which the downstream tasks (e.g., property prediction) can be performed. Recently, MRL has achieved considerable progress, especially in methods based on deep molecular graph learning. In this survey, we systematically review these graph-based molecular representation techniques, especially the methods incorporating chemical domain knowledge. Specifically, we first introduce the features of 2D and 3D molecular graphs. Then we summarize and categorize MRL methods into three groups based on their input. Furthermore, we discuss some typical chemical applications supported by MRL. To facilitate studies in this fast-developing area, we also list the benchmarks and commonly used datasets in the paper. Finally, we share our thoughts on future research directions. |
0901.0089 | Ewa Nurowska | Ewa Nurowska, Mykola Bratiichuk, Beata Dworakowska, Roman J. Nowak | A physical model of nicotinic ACh receptor kinetics | null | null | null | null | q-bio.NC q-bio.CB q-bio.QM q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new approach to nicotinic receptor kinetics and a new model
explaining random variabilities in the duration of open events. The model gives
new interpretation on brief and long receptor openings and predicts (for two
identical binding sites) the presence of three components in the open time
distribution: two brief and a long. We also present the physical model of the
receptor block. This picture naturally and universally explains receptor
desensitization, the phenomenon of central importance in cellular signaling.
The model is based on single-channel experiments concerning the effects of
hydrocortisone (HC) on the kinetics of control wild-type (WT) and mutated
alphaD200Q mouse nicotinic acetylcholine receptors (nAChRs), expressed in HEK
293 cells. The appendix contains an original result from probability renewal
theory: a derivation of the probability distribution function for the duration
of a process performed by two independent servers.
| [
{
"created": "Wed, 31 Dec 2008 12:14:05 GMT",
"version": "v1"
}
] | 2009-01-07 | [
[
"Nurowska",
"Ewa",
""
],
[
"Bratiichuk",
"Mykola",
""
],
[
"Dworakowska",
"Beata",
""
],
[
"Nowak",
"Roman J.",
""
]
] | We present a new approach to nicotinic receptor kinetics and a new model explaining random variabilities in the duration of open events. The model gives new interpretation on brief and long receptor openings and predicts (for two identical binding sites) the presence of three components in the open time distribution: two brief and a long. We also present the physical model of the receptor block. This picture naturally and universally explains receptor desensitization, the phenomenon of central importance in cellular signaling. The model is based on single-channel experiments concerning the effects of hydrocortisone (HC) on the kinetics of control wild-type (WT) and mutated alphaD200Q mouse nicotinic acetylcholine receptors (nAChRs), expressed in HEK 293 cells. The appendix contains an original result from probability renewal theory: a derivation of the probability distribution function for the duration of a process performed by two independent servers. |
2305.01663 | Prayag Tiwari Dr. | Manish Bhatia, Balram Meena, Vipin Kumar Rathi, Prayag Tiwari, Amit
Kumar Jaiswal, Shagaf M Ansari, Ajay Kumar, Pekka Marttinen | A Novel Deep Learning based Model for Erythrocytes Classification and
Quantification in Sickle Cell Disease | null | null | null | null | q-bio.QM cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | The shape of erythrocytes or red blood cells is altered in several
pathological conditions. Therefore, identifying and quantifying different
erythrocyte shapes can help diagnose various diseases and assist in designing a
treatment strategy. Machine Learning (ML) can be efficiently used to identify
and quantify distorted erythrocyte morphologies. In this paper, we propose a
customized deep convolutional neural network (CNN) model to classify and
quantify the distorted and normal morphology of erythrocytes from images
taken from the blood samples of patients suffering from sickle cell disease
(SCD). We chose SCD as a model disease condition due to the presence of diverse
erythrocyte morphologies in the blood samples of SCD patients. For the
analysis, we used 428 raw microscopic images of SCD blood samples and generated
a dataset consisting of 10,377 single-cell images. We focused on three
well-defined erythrocyte shapes: discocytes, oval, and sickle. We
used an 18-layer deep CNN architecture to identify and quantify these shapes
with 81% accuracy, outperforming other models. We also used SHAP and LIME for
further interpretability. The proposed model can be helpful for the quick and
accurate analysis of SCD blood samples by clinicians and help them make the
right decision for better management of SCD.
| [
{
"created": "Tue, 2 May 2023 10:28:07 GMT",
"version": "v1"
}
] | 2023-05-04 | [
[
"Bhatia",
"Manish",
""
],
[
"Meena",
"Balram",
""
],
[
"Rathi",
"Vipin Kumar",
""
],
[
"Tiwari",
"Prayag",
""
],
[
"Jaiswal",
"Amit Kumar",
""
],
[
"Ansari",
"Shagaf M",
""
],
[
"Kumar",
"Ajay",
""
],
[
"Marttinen",
"Pekka",
""
]
] | The shape of erythrocytes or red blood cells is altered in several pathological conditions. Therefore, identifying and quantifying different erythrocyte shapes can help diagnose various diseases and assist in designing a treatment strategy. Machine Learning (ML) can be efficiently used to identify and quantify distorted erythrocyte morphologies. In this paper, we propose a customized deep convolutional neural network (CNN) model to classify and quantify the distorted and normal morphology of erythrocytes from images taken from the blood samples of patients suffering from sickle cell disease (SCD). We chose SCD as a model disease condition due to the presence of diverse erythrocyte morphologies in the blood samples of SCD patients. For the analysis, we used 428 raw microscopic images of SCD blood samples and generated a dataset consisting of 10,377 single-cell images. We focused on three well-defined erythrocyte shapes: discocytes, oval, and sickle. We used an 18-layer deep CNN architecture to identify and quantify these shapes with 81% accuracy, outperforming other models. We also used SHAP and LIME for further interpretability. The proposed model can be helpful for the quick and accurate analysis of SCD blood samples by clinicians and help them make the right decision for better management of SCD. |
q-bio/0406049 | Muhittin Mungan | Muhittin Mungan, Alkan Kabakcioglu, Duygu Balcan, and Ayse Erzan | Analytical Solution of a Stochastic Content Based Network Model | 13 pages, 5 figures. Rewrote conclusions regarding the relevance to
gene regulation networks, fixed minor errors and replaced fig. 4. Main body
of paper (model and calculations) remains unchanged. Submitted for
publication | J. Phys. A: Math. Gen. 38 (2005) 9599--9620 | 10.1088/0305-4470/38/44/001 | null | q-bio.MN cond-mat.stat-mech q-bio.GN | null | We define and completely solve a content-based directed network whose nodes
consist of random words and an adjacency rule involving perfect or approximate
matches, for an alphabet with an arbitrary number of letters. The analytic
expression for the out-degree distribution shows a crossover from a leading
power law behavior to a log-periodic regime bounded by a different power law
decay. The leading exponents in the two regions have a weak dependence on the
mean word length, and an even weaker dependence on the alphabet size. The
in-degree distribution, on the other hand, is much narrower and does not show
scaling behavior. The results might be of interest for understanding the
emergence of genomic interaction networks, which rely, to a large extent, on
mechanisms based on sequence matching, and exhibit similar global features to
those found here.
| [
{
"created": "Fri, 25 Jun 2004 12:20:28 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Jul 2005 08:03:34 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Mungan",
"Muhittin",
""
],
[
"Kabakcioglu",
"Alkan",
""
],
[
"Balcan",
"Duygu",
""
],
[
"Erzan",
"Ayse",
""
]
] | We define and completely solve a content-based directed network whose nodes consist of random words and an adjacency rule involving perfect or approximate matches, for an alphabet with an arbitrary number of letters. The analytic expression for the out-degree distribution shows a crossover from a leading power law behavior to a log-periodic regime bounded by a different power law decay. The leading exponents in the two regions have a weak dependence on the mean word length, and an even weaker dependence on the alphabet size. The in-degree distribution, on the other hand, is much narrower and does not show scaling behavior. The results might be of interest for understanding the emergence of genomic interaction networks, which rely, to a large extent, on mechanisms based on sequence matching, and exhibit similar global features to those found here. |
2304.07178 | Jorge Carrasco Muriel | Jorge Carrasco Muriel, Nicholas Cowie, Marjan Mansouvar, Teddy Groves
and Lars Keld Nielsen | Shu: Visualization of high dimensional biological pathways | 3 pages, 1 figure, supplementary material at
https://github.com/biosustain/shu_case_studies | null | 10.1093/bioinformatics/btae140 | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | Summary: Shu is a visualization tool that integrates diverse data types into
a metabolic map, with a focus on supporting multiple conditions and visualizing
distributions. The goal is to provide a unified platform for handling the
growing volume of multi-omics data, leveraging the metabolic maps developed by
the metabolic modeling community. Additionally, shu offers a streamlined Python
API, based on the Grammar of Graphics, for easy integration with data
pipelines.
Availability and implementation: Freely available at
https://github.com/biosustain/shu under MIT/Apache 2.0 license. Binaries are
available on the release page of the repository, and the web app is deployed at
https://biosustain.github.io/shu.
| [
{
"created": "Fri, 14 Apr 2023 14:53:45 GMT",
"version": "v1"
}
] | 2024-03-08 | [
[
"Muriel",
"Jorge Carrasco",
""
],
[
"Cowie",
"Nicholas",
""
],
[
"Mansouvar",
"Marjan",
""
],
[
"Groves",
"Teddy",
""
],
[
"Nielsen",
"Lars Keld",
""
]
] | Summary: Shu is a visualization tool that integrates diverse data types into a metabolic map, with a focus on supporting multiple conditions and visualizing distributions. The goal is to provide a unified platform for handling the growing volume of multi-omics data, leveraging the metabolic maps developed by the metabolic modeling community. Additionally, shu offers a streamlined Python API, based on the Grammar of Graphics, for easy integration with data pipelines. Availability and implementation: Freely available at https://github.com/biosustain/shu under MIT/Apache 2.0 license. Binaries are available on the release page of the repository, and the web app is deployed at https://biosustain.github.io/shu. |
2111.05902 | Nicholas Barendregt | Nicholas W. Barendregt and Peter J. Thomas | Heteroclinic cycling and extinction in May-Leonard models with
demographic stochasticity | 21 pages, 6 figures | null | null | null | q-bio.PE math.PR q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | May and Leonard (SIAM J. Appl. Math 1975) introduced a three-species
Lotka-Volterra type population model that exhibits heteroclinic cycling. Rather
than producing a periodic limit cycle, the trajectory takes longer and longer
to complete each "cycle", passing closer and closer to unstable fixed points in
which one population dominates and the others approach zero. Aperiodic
heteroclinic dynamics have subsequently been studied in ecological systems
(side-blotched lizards; colicinogenic E. coli), in the immune system, in neural
information processing models ("winnerless competition"), and in models of
neural central pattern generators. Yet as May and Leonard observed
"Biologically, the behavior (produced by the model) is nonsense. Once it is
conceded that the variables represent animals, and therefore cannot fall below
unity, it is clear that the system will, after a few cycles, converge on some
single population, extinguishing the other two." Here, we explore different
ways of introducing discrete stochastic dynamics based on May and Leonard's ODE
model, with application to ecological population dynamics, and to a neuromotor
central pattern generator system. We study examples of several quantitatively
distinct asymptotic behaviors, including total extinction of all species,
extinction to a single species, and persistent cyclic dominance with finite
mean cycle length.
| [
{
"created": "Wed, 10 Nov 2021 19:44:45 GMT",
"version": "v1"
}
] | 2021-11-12 | [
[
"Barendregt",
"Nicholas W.",
""
],
[
"Thomas",
"Peter J.",
""
]
] | May and Leonard (SIAM J. Appl. Math 1975) introduced a three-species Lotka-Volterra type population model that exhibits heteroclinic cycling. Rather than producing a periodic limit cycle, the trajectory takes longer and longer to complete each "cycle", passing closer and closer to unstable fixed points in which one population dominates and the others approach zero. Aperiodic heteroclinic dynamics have subsequently been studied in ecological systems (side-blotched lizards; colicinogenic E. coli), in the immune system, in neural information processing models ("winnerless competition"), and in models of neural central pattern generators. Yet as May and Leonard observed "Biologically, the behavior (produced by the model) is nonsense. Once it is conceded that the variables represent animals, and therefore cannot fall below unity, it is clear that the system will, after a few cycles, converge on some single population, extinguishing the other two." Here, we explore different ways of introducing discrete stochastic dynamics based on May and Leonard's ODE model, with application to ecological population dynamics, and to a neuromotor central pattern generator system. We study examples of several quantitatively distinct asymptotic behaviors, including total extinction of all species, extinction to a single species, and persistent cyclic dominance with finite mean cycle length. |
1407.8210 | Javad Noorbakhsh | Javad Noorbakhsh, David Schwab, Allyson Sgro, Thomas Gregor, Pankaj
Mehta | Multiscale modeling of oscillations and spiral waves in Dictyostelium
populations | null | Phys. Rev. E 91, 062711 (2015) | 10.1103/PhysRevE.91.062711 | null | q-bio.CB math-ph math.MP nlin.PS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unicellular organisms exhibit elaborate collective behaviors in response to
environmental cues. These behaviors are controlled by complex biochemical
networks within individual cells and coordinated through cell-to-cell
communication. Describing these behaviors requires new mathematical models that
can bridge scales -- from biochemical networks within individual cells to
spatially structured cellular populations. Here, we present a family of
multiscale models for the emergence of spiral waves in the social amoeba
Dictyostelium discoideum. Our models exploit new experimental advances that
allow for the direct measurement and manipulation of the small signaling
molecule cAMP used by Dictyostelium cells to coordinate behavior in cellular
populations. Inspired by recent experiments, we model the Dictyostelium
signaling network as an excitable system coupled to various pre-processing
modules. We use this family of models to study spatially unstructured
populations by constructing phase diagrams that relate the properties of
population-level oscillations to parameters in the underlying biochemical
network. We then extend our models to include spatial structure and show how
they naturally give rise to spiral waves. Our models exhibit a wide range of
novel phenomena including a density dependent frequency change, bistability,
and dynamic death due to slow cAMP dynamics. Our modeling approach provides a
powerful tool for bridging scales in modeling of Dictyostelium populations.
| [
{
"created": "Wed, 30 Jul 2014 20:42:56 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Sep 2014 20:32:56 GMT",
"version": "v2"
}
] | 2015-06-24 | [
[
"Noorbakhsh",
"Javad",
""
],
[
"Schwab",
"David",
""
],
[
"Sgro",
"Allyson",
""
],
[
"Gregor",
"Thomas",
""
],
[
"Mehta",
"Pankaj",
""
]
] | Unicellular organisms exhibit elaborate collective behaviors in response to environmental cues. These behaviors are controlled by complex biochemical networks within individual cells and coordinated through cell-to-cell communication. Describing these behaviors requires new mathematical models that can bridge scales -- from biochemical networks within individual cells to spatially structured cellular populations. Here, we present a family of multiscale models for the emergence of spiral waves in the social amoeba Dictyostelium discoideum. Our models exploit new experimental advances that allow for the direct measurement and manipulation of the small signaling molecule cAMP used by Dictyostelium cells to coordinate behavior in cellular populations. Inspired by recent experiments, we model the Dictyostelium signaling network as an excitable system coupled to various pre-processing modules. We use this family of models to study spatially unstructured populations by constructing phase diagrams that relate the properties of population-level oscillations to parameters in the underlying biochemical network. We then extend our models to include spatial structure and show how they naturally give rise to spiral waves. Our models exhibit a wide range of novel phenomena including a density dependent frequency change, bistability, and dynamic death due to slow cAMP dynamics. Our modeling approach provides a powerful tool for bridging scales in modeling of Dictyostelium populations. |
2407.07265 | Pranav Kantroo | Pranav Kantroo, G\"unter P. Wagner, Benjamin B. Machta | Pseudo-perplexity in One Fell Swoop for Protein Fitness Estimation | null | null | null | null | q-bio.BM q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Protein language models trained on the masked language modeling objective
learn to predict the identity of hidden amino acid residues within a sequence
using the remaining observable sequence as context. They do so by embedding the
residues into a high dimensional space that encapsulates the relevant
contextual cues. These embedding vectors serve as an informative
context-sensitive representation that not only aids with the defined training
objective, but can also be used for other tasks by downstream models. We
propose a scheme to use the embeddings of an unmasked sequence to estimate the
corresponding masked probability vectors for all the positions in a single
forward pass through the language model. This One Fell Swoop (OFS) approach
allows us to efficiently estimate the pseudo-perplexity of the sequence, a
measure of the model's uncertainty in its predictions, that can also serve as a
fitness estimate. We find that ESM2 OFS pseudo-perplexity performs nearly as
well as the true pseudo-perplexity at fitness estimation, and more notably it
defines a new state of the art on the ProteinGym Indels benchmark. The strong
performance of the fitness measure prompted us to investigate if it could be
used to detect the elevated stability reported in reconstructed ancestral
sequences. We find that this measure ranks ancestral reconstructions as more
fit than extant sequences. Finally, we show that the computational efficiency
of the technique allows for the use of Monte Carlo methods that can rapidly
explore functional sequence space.
| [
{
"created": "Tue, 9 Jul 2024 22:46:08 GMT",
"version": "v1"
}
] | 2024-07-11 | [
[
"Kantroo",
"Pranav",
""
],
[
"Wagner",
"Günter P.",
""
],
[
"Machta",
"Benjamin B.",
""
]
] | Protein language models trained on the masked language modeling objective learn to predict the identity of hidden amino acid residues within a sequence using the remaining observable sequence as context. They do so by embedding the residues into a high dimensional space that encapsulates the relevant contextual cues. These embedding vectors serve as an informative context-sensitive representation that not only aids with the defined training objective, but can also be used for other tasks by downstream models. We propose a scheme to use the embeddings of an unmasked sequence to estimate the corresponding masked probability vectors for all the positions in a single forward pass through the language model. This One Fell Swoop (OFS) approach allows us to efficiently estimate the pseudo-perplexity of the sequence, a measure of the model's uncertainty in its predictions, that can also serve as a fitness estimate. We find that ESM2 OFS pseudo-perplexity performs nearly as well as the true pseudo-perplexity at fitness estimation, and more notably it defines a new state of the art on the ProteinGym Indels benchmark. The strong performance of the fitness measure prompted us to investigate if it could be used to detect the elevated stability reported in reconstructed ancestral sequences. We find that this measure ranks ancestral reconstructions as more fit than extant sequences. Finally, we show that the computational efficiency of the technique allows for the use of Monte Carlo methods that can rapidly explore functional sequence space. |
1710.03399 | Sael Lee | Vasundhara Dehiya, Jaya Thomas, Lee Sael | Prior Knowledge based mutation prioritization towards causal variant
finding in rare disease | 21 pages, 5 figures, submitted for journal publication in 2017 | null | null | null | q-bio.GN cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How do we determine the mutational effects in exome sequencing data with
little or no statistical evidence? Can protein structural information fill in
the gap of not having enough statistical evidence? In this work, we answer the
two questions with the goal of determining pathogenic effects of rare
variants in rare disease. We take the approach of determining the importance of
point mutation loci, focusing on protein structure features. The proposed
structure-based features contain geometric, physicochemical, and functional
information about the mutation loci and their structural neighbors. The
performance of the structure-based features trained on 80\% of
HumDiv and tested on 20\% of HumDiv and on ClinVar datasets showed high levels
of discernibility in the mutation's pathogenic or benign effects: F scores of
0.71 and 0.68, respectively, using a multi-layer perceptron. Combining structure-
and sequence-based features further improves the accuracy: F scores of 0.86
(HumDiv) and 0.75 (ClinVar). Also, careful examination of the rare variants in
rare disease cases showed that structure-based features are important in
discerning the importance of variant loci.
| [
{
"created": "Tue, 10 Oct 2017 04:51:17 GMT",
"version": "v1"
}
] | 2017-10-11 | [
[
"Dehiya",
"Vasundhara",
""
],
[
"Thomas",
"Jaya",
""
],
[
"Sael",
"Lee",
""
]
] | How do we determine the mutational effects in exome sequencing data with little or no statistical evidence? Can protein structural information fill in the gap of not having enough statistical evidence? In this work, we answer the two questions with the goal of determining pathogenic effects of rare variants in rare disease. We take the approach of determining the importance of point mutation loci, focusing on protein structure features. The proposed structure-based features contain geometric, physicochemical, and functional information about the mutation loci and their structural neighbors. The performance of the structure-based features trained on 80\% of HumDiv and tested on 20\% of HumDiv and on ClinVar datasets showed high levels of discernibility in the mutation's pathogenic or benign effects: F scores of 0.71 and 0.68, respectively, using a multi-layer perceptron. Combining structure- and sequence-based features further improves the accuracy: F scores of 0.86 (HumDiv) and 0.75 (ClinVar). Also, careful examination of the rare variants in rare disease cases showed that structure-based features are important in discerning the importance of variant loci.
1609.01570 | Mette Olufsen | Renee Brady, Dennis O. Frank-Ito, Hien T. Tran, Susanne Janum, Kirsten
M{\o}ller, Susanne Brix, Johnny T. Ottesen, Jesper Mehlsen, Mette S. Olufsen | Personalized Mathematical Model Predicting Endotoxin-Induced
Inflammatory Responses in Young Men | Keywords: Cytokines, LPS, TNF-α, IL-6, IL-10, CXCL8 | null | null | null | q-bio.QM q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The initial reaction of the body to pathogenic microbial infection or severe
tissue trauma is an acute inflammatory response. The magnitude of such a
response is of critical importance, since an uncontrolled response can cause
further tissue damage, sepsis, and ultimately death, while an insufficient
response can result in inadequate clearance of pathogens. A normal inflammatory
response helps to annihilate threats posed by microbial pathogenic ligands,
such as endotoxins, and thus, restore the body to a healthy state. Using a
personalized mathematical model, comprehension and a detailed description of
the interactions between pro- and anti-inflammatory cytokines can provide
important insight into the evaluation of a patient with sepsis or a susceptible
patient in surgery. Our model is calibrated to experimental data obtained from
experiments measuring pro-inflammatory cytokines (interleukin-6 (IL-6), tumor
necrosis factor (TNF-α), and chemokine ligand-8 (CXCL8)) and the
anti-inflammatory cytokine interleukin-10 (IL-10) over 8 hours in 20 healthy
young male subjects, given a low dose intravenous injection of
lipopolysaccharide (LPS), resulting in endotoxin-stimulated inflammation.
Through the calibration process, we created a personalized mathematical model
that can accurately determine individual differences between subjects, as well
as identify those who showed an abnormal response.
| [
{
"created": "Tue, 6 Sep 2016 14:24:35 GMT",
"version": "v1"
}
] | 2016-09-07 | [
[
"Brady",
"Renee",
""
],
[
"Frank-Ito",
"Dennis O.",
""
],
[
"Tran",
"Hien T.",
""
],
[
"Janum",
"Susanne",
""
],
[
"Møller",
"Kirsten",
""
],
[
"Brix",
"Susanne",
""
],
[
"Ottesen",
"Johnny T.",
""
],
[
"Mehlsen",
"Jesper",
""
],
[
"Olufsen",
"Mette S.",
""
]
] | The initial reaction of the body to pathogenic microbial infection or severe tissue trauma is an acute inflammatory response. The magnitude of such a response is of critical importance, since an uncontrolled response can cause further tissue damage, sepsis, and ultimately death, while an insufficient response can result in inadequate clearance of pathogens. A normal inflammatory response helps to annihilate threats posed by microbial pathogenic ligands, such as endotoxins, and thus, restore the body to a healthy state. Using a personalized mathematical model, comprehension and a detailed description of the interactions between pro- and anti-inflammatory cytokines can provide important insight into the evaluation of a patient with sepsis or a susceptible patient in surgery. Our model is calibrated to experimental data obtained from experiments measuring pro-inflammatory cytokines (interleukin-6 (IL-6), tumor necrosis factor (TNF-α), and chemokine ligand-8 (CXCL8)) and the anti-inflammatory cytokine interleukin-10 (IL-10) over 8 hours in 20 healthy young male subjects, given a low dose intravenous injection of lipopolysaccharide (LPS), resulting in endotoxin-stimulated inflammation. Through the calibration process, we created a personalized mathematical model that can accurately determine individual differences between subjects, as well as identify those who showed an abnormal response.
1405.2419 | Mohsen Bakouri | Mohsen A. Bakouri | Sensorless Physiological Control of Implantable Rotary Blood Pumps for
Heart Failure Patients Using Modern Control Techniques | null | null | null | null | q-bio.QM cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sufferers of heart failure disease have a life expectancy of one year and
heart transplantation is usually the only guarantee of survival beyond this
period. The number of donor hearts available currently is less than 3,000 per
annum worldwide. Apart from the relatively fortunate people who receive donor
hearts for transplant, the only alternative for people with HF is the
implantation of a rotary blood pump (IRBP). In fact, an IRBP with its continuous
operation requires a more complex controller to achieve basic physiological
requirements. The essential control requirement of an IRBP needs to mimic the
way that the heart pumps as much blood to the arterial circulation as it
receives from the venous circulation. This research aims to design, develop and
implement novel control strategies combining sensorless and non-invasive data
measurements to provide an adaptive and fairly robust preload sensitive
controller for IRBPs subjected to varying patient conditions, model
uncertainties and external disturbances. A sensorless pulsatile flow estimator
was developed using collected data from animal experiments. Based on this
estimator, advanced physiological control algorithms for regulation of an IRBP
were developed to automatically adjust the pump speed to cater for changes in
the metabolic demand. The performance of the developed control algorithms is
assessed using a lumped parameter model of the CVS that was previously
developed using actual data from healthy pigs over a wide range of operating
conditions. Immediate responses of the controllers to short-term circulatory
changes as well as adaptive characteristics of the controllers in response to
long-term changes are examined in a parameter-optimised model of CVS - IRBP
interactions. Simulation results prove that the proposed controllers are fairly
robust against model uncertainties, parameter variations and external
disturbances.
| [
{
"created": "Sat, 10 May 2014 11:07:53 GMT",
"version": "v1"
}
] | 2014-05-13 | [
[
"Bakouri",
"Mohsen A.",
""
]
] | Sufferers of heart failure disease have a life expectancy of one year and heart transplantation is usually the only guarantee of survival beyond this period. The number of donor hearts available currently is less than 3,000 per annum worldwide. Apart from the relatively fortunate people who receive donor hearts for transplant, the only alternative for people with HF is the implantation of a rotary blood pump (IRBP). In fact, an IRBP with its continuous operation requires a more complex controller to achieve basic physiological requirements. The essential control requirement of an IRBP needs to mimic the way that the heart pumps as much blood to the arterial circulation as it receives from the venous circulation. This research aims to design, develop and implement novel control strategies combining sensorless and non-invasive data measurements to provide an adaptive and fairly robust preload sensitive controller for IRBPs subjected to varying patient conditions, model uncertainties and external disturbances. A sensorless pulsatile flow estimator was developed using collected data from animal experiments. Based on this estimator, advanced physiological control algorithms for regulation of an IRBP were developed to automatically adjust the pump speed to cater for changes in the metabolic demand. The performance of the developed control algorithms is assessed using a lumped parameter model of the CVS that was previously developed using actual data from healthy pigs over a wide range of operating conditions. Immediate responses of the controllers to short-term circulatory changes as well as adaptive characteristics of the controllers in response to long-term changes are examined in a parameter-optimised model of CVS - IRBP interactions. Simulation results prove that the proposed controllers are fairly robust against model uncertainties, parameter variations and external disturbances.
1605.06309 | Fabiano L. Ribeiro | Fabiano L. Ribeiro | First Principles Attempt to Unify some Population Growth Models | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, some phenomenological models, those that are based only on the
population information (macroscopic level), are deduced in an intuitive way.
These models, as for instance Verhulst, Gompertz and Bertalanffy models, are
posed in such a manner that all the parameters involved have a physical
interpretation. A model based on the interaction (distance dependent) between
the individuals (microscopic level) is also presented. This microscopic model
reaches the phenomenological models presented as particular cases. In this
approach, the Verhulst model represents the situation in which all the
individuals interact in the same way, regardless of the distance between them.
That means the Verhulst model is a kind of mean field model. The other
phenomenological models are reached from the microscopic model according to
two quantities: i) the relation between the way that the interaction decays
with the distance; and ii) the dimension of the spatial structure formed by the
population. This microscopic model allows understanding population growth from
first principles, because it predicts that some phenomenological models can be
seen as a consequence of the same individual level kind of interaction. In this
sense, the microscopic model that will be discussed here paves the way to
finding universal patterns that are common to all types of growth, even in
systems of very different nature.
| [
{
"created": "Fri, 20 May 2016 12:05:00 GMT",
"version": "v1"
}
] | 2016-05-23 | [
[
"Ribeiro",
"Fabiano L.",
""
]
] | In this work, some phenomenological models, those that are based only on the population information (macroscopic level), are deduced in an intuitive way. These models, as for instance Verhulst, Gompertz and Bertalanffy models, are posed in such a manner that all the parameters involved have a physical interpretation. A model based on the interaction (distance dependent) between the individuals (microscopic level) is also presented. This microscopic model reaches the phenomenological models presented as particular cases. In this approach, the Verhulst model represents the situation in which all the individuals interact in the same way, regardless of the distance between them. That means the Verhulst model is a kind of mean field model. The other phenomenological models are reached from the microscopic model according to two quantities: i) the relation between the way that the interaction decays with the distance; and ii) the dimension of the spatial structure formed by the population. This microscopic model allows understanding population growth from first principles, because it predicts that some phenomenological models can be seen as a consequence of the same individual level kind of interaction. In this sense, the microscopic model that will be discussed here paves the way to finding universal patterns that are common to all types of growth, even in systems of very different nature.
2304.03131 | Stephane Jamain | Ana Lokmer (UPEC), Charanraj Goud Alladi (OPTeN), R\'ejane Troudet
(UPEC), Delphine Bacq-Daian (CNRGH), Anne Boland-Auge (CNRGH), Violaine
Latapie (UPEC), Jean-Fran\c{c}ois Deleuze (CNRGH), Ravi Philip Rajkumar,
Deepak Gopal Shewade, Frank B\'elivier (OPTeN), Cynthia Marie-Claire (OPTeN),
St\'ephane Jamain (UPEC) | Risperidone response in patients with schizophrenia drives DNA
methylation changes in immune and neuronal systems | Epigenomics, 2023 | null | 10.2217/epi-2023-0017 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: The choice of efficient antipsychotic therapy for schizophrenia
relies on a time-consuming trial-and-error approach, whereas the social and
economic burdens of the disease call for faster alternatives. Material \&
methods: In a search for predictive biomarkers of antipsychotic response, blood
methylomes of 28 patients were analyzed before and 4 weeks into risperidone
therapy. Results: Several CpGs exhibiting response-specific temporal dynamics
were identified in otherwise temporally stable methylomes and noticeable global
response-related differences were observed between good and bad responders.
These were associated with genes involved in immunity, neurotransmission and
neuronal development. Polymorphisms in many of these genes were previously
linked with schizophrenia etiology and antipsychotic response. Conclusion:
Antipsychotic response seems to be shaped by both stable and medication-induced
methylation differences.
| [
{
"created": "Thu, 6 Apr 2023 15:05:45 GMT",
"version": "v1"
}
] | 2023-04-07 | [
[
"Lokmer",
"Ana",
"",
"UPEC"
],
[
"Alladi",
"Charanraj Goud",
"",
"OPTeN"
],
[
"Troudet",
"Réjane",
"",
"UPEC"
],
[
"Bacq-Daian",
"Delphine",
"",
"CNRGH"
],
[
"Boland-Auge",
"Anne",
"",
"CNRGH"
],
[
"Latapie",
"Violaine",
"",
"UPEC"
],
[
"Deleuze",
"Jean-François",
"",
"CNRGH"
],
[
"Rajkumar",
"Ravi Philip",
"",
"OPTeN"
],
[
"Shewade",
"Deepak Gopal",
"",
"OPTeN"
],
[
"Bélivier",
"Frank",
"",
"OPTeN"
],
[
"Marie-Claire",
"Cynthia",
"",
"OPTeN"
],
[
"Jamain",
"Stéphane",
"",
"UPEC"
]
] | Background: The choice of efficient antipsychotic therapy for schizophrenia relies on a time-consuming trial-and-error approach, whereas the social and economic burdens of the disease call for faster alternatives. Material \& methods: In a search for predictive biomarkers of antipsychotic response, blood methylomes of 28 patients were analyzed before and 4 weeks into risperidone therapy. Results: Several CpGs exhibiting response-specific temporal dynamics were identified in otherwise temporally stable methylomes and noticeable global response-related differences were observed between good and bad responders. These were associated with genes involved in immunity, neurotransmission and neuronal development. Polymorphisms in many of these genes were previously linked with schizophrenia etiology and antipsychotic response. Conclusion: Antipsychotic response seems to be shaped by both stable and medication-induced methylation differences. |
1308.3277 | Aaron Golden | Aaron Golden, S. George Djorgovski, John M. Greally | Astrogenomics: big data, old problems, old solutions? | 11 pages, 1 figure, accepted for publication in Genome Biology | null | null | null | q-bio.GN astro-ph.IM | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The ominous warnings of a `data deluge' in the life sciences from
high-throughput DNA sequencing data are being supplanted by a second deluge, of
cliches bemoaning our collective scientific fate unless we address the genomic
data `tsunami'. It is imperative that we explore the many facets of the genome,
not just sequence but also transcriptional and epigenetic variability,
integrating these observations in order to attain a genuine understanding of
how genes function, towards a goal of genomics-based personalized medicine.
Determining any individual's genomic properties requires comparison to many
others, sifting out the specific from the trends, requiring access to the many
in order to yield information relevant to the few. This is the central big data
challenge in genomics that still requires some sort of resolution. Is there a
practical, feasible way of directly connecting the scientific community to this
data universe? The best answer could be in the stars overhead.
| [
{
"created": "Thu, 15 Aug 2013 00:01:05 GMT",
"version": "v1"
}
] | 2013-08-16 | [
[
"Golden",
"Aaron",
""
],
[
"Djorgovski",
"S. George",
""
],
[
"Greally",
"John M.",
""
]
] | The ominous warnings of a `data deluge' in the life sciences from high-throughput DNA sequencing data are being supplanted by a second deluge, of cliches bemoaning our collective scientific fate unless we address the genomic data `tsunami'. It is imperative that we explore the many facets of the genome, not just sequence but also transcriptional and epigenetic variability, integrating these observations in order to attain a genuine understanding of how genes function, towards a goal of genomics-based personalized medicine. Determining any individual's genomic properties requires comparison to many others, sifting out the specific from the trends, requiring access to the many in order to yield information relevant to the few. This is the central big data challenge in genomics that still requires some sort of resolution. Is there a practical, feasible way of directly connecting the scientific community to this data universe? The best answer could be in the stars overhead. |
1912.01769 | Ling-Fei Huang | Lingfei Huang, Yixi Liu, Zheng Jiao, Junyan Wang, Luo Fang, Jianhua
Mao | Population Pharmacokinetic Study of Tacrolimus in Pediatric Patients
with Primary Nephrotic Syndrome: A Comparison of Linear and Nonlinear
Michaelis Menten Pharmacokinetic Model | 22 pages, 4 tables and 4 figures | Eur J Pharm Sci. 2020 Feb 15;143:105199 | 10.1016/j.ejps.2019.105199 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Little is known about the population pharmacokinetics (PPK) of
tacrolimus (TAC) in pediatric primary nephrotic syndrome (PNS). This study
aimed to compare the predictive performance between nonlinear and linear PK
models and investigate the significant factors of TAC PK characteristics in
pediatric PNS. Methods: Data were obtained from 71 pediatric patients with PNS,
along with 525 TAC trough concentrations at steady state. The demographic,
medical, and treatment details were collected. Genetic polymorphisms were
analyzed. The PPK models were developed using nonlinear mixed effects model
software. Two modeling strategies, linear compartmental and nonlinear Michaelis
Menten (MM) models, were evaluated and compared. Results: Body weight, age,
daily dose of TAC, co-therapy drugs (including azole antifungal agents and
diltiazem), and CYP3A5*3 genotype were important factors in the final linear
model (one-compartment model), whereas only body weight, co-drugs, and CYP3A5*3
genotype were the important factors in the nonlinear MM model. Apparent
clearance and volume of distribution in the final linear model were 7.13 L/h
and 142 L, respectively. The maximal dose rate (Vmax) of the nonlinear MM model
was 1.92 mg/day and the average concentration at steady state at half-Vmax (Km)
was 1.98 ng/mL. The nonlinear model described the data better than the linear
model. Dosing regimens were proposed based on the nonlinear PK model. Conclusion:
Our findings demonstrate that the nonlinear MM model showed better predictive
performance than the linear compartmental model, providing reliable support for
optimizing TAC dosing and adjustment in children with PNS.
| [
{
"created": "Wed, 4 Dec 2019 02:12:07 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Feb 2020 10:04:13 GMT",
"version": "v2"
}
] | 2020-02-27 | [
[
"Huang",
"Lingfei",
""
],
[
"Liu",
"Yixi",
""
],
[
"Jiao",
"Zheng",
""
],
[
"Wang",
"Junyan",
""
],
[
"Fang",
"Luo",
""
],
[
"Mao",
"Jianhua",
""
]
] | Background Little is known about the population pharmacokinetics (PPK) of tacrolimus (TAC) in pediatric primary nephrotic syndrome (PNS). This study aimed to compare the predictive performance between nonlinear and linear PK models and investigate the significant factors of TAC PK characteristics in pediatric PNS. Methods Data were obtained from 71 pediatric patients with PNS, along with 525 TAC trough concentrations at steady state. The demographic, medical, and treatment details were collected. Genetic polymorphisms were analyzed. The PPK models were developed using nonlinear mixed effects model software. Two modeling strategies, linear compartmental and nonlinear Michaelis-Menten (MM) models, were evaluated and compared. Results Body weight, age, daily dose of TAC, co-therapy drugs (including azole antifungal agents and diltiazem), and CYP3A5*3 genotype were important factors in the final linear model (one-compartment model), whereas only body weight, co-therapy drugs, and CYP3A5*3 genotype were the important factors in the nonlinear MM model. Apparent clearance and volume of distribution in the final linear model were 7.13 L/h and 142 L, respectively. The maximal dose rate (Vmax) of the nonlinear MM model was 1.92 mg/day and the average concentration at steady state at half-Vmax (Km) was 1.98 ng/mL. The nonlinear model described the data better than the linear model. Dosing regimens were proposed based on the nonlinear PK model. Conclusion Our findings demonstrate that the nonlinear MM model showed better predictive performance than the linear compartmental model, providing reliable support for optimizing TAC dosing and adjustment in children with PNS. |
1311.3997 | Ron Nielsen | Ron W Nielsen aka Jan Nurzynski | No stagnation in the growth of population | 9 pages, 1 figure, 2 tables | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Growth of human population shows no signs of stagnation. The only small
disturbance is identified as being probably associated with the coinciding
impacts of five demographic catastrophes. The concept of the Epoch of
Malthusian Stagnation is convincingly contradicted by empirical evidence.
| [
{
"created": "Fri, 15 Nov 2013 23:11:54 GMT",
"version": "v1"
}
] | 2013-11-19 | [
[
"Nurzynski",
"Ron W Nielsen aka Jan",
""
]
] | Growth of human population shows no signs of stagnation. The only small disturbance is identified as being probably associated with the coinciding impacts of five demographic catastrophes. The concept of the Epoch of Malthusian Stagnation is convincingly contradicted by empirical evidence. |
q-bio/0512033 | Pablo Echenique | Pablo Echenique and Ivan Calvo | Explicit factorization of external coordinates in constrained
Statistical Mechanics models | 22 pages, 2 figures, LaTeX, AMSTeX. v2: Introduction slightly
extended. Version in arXiv is slightly larger than the published one | J. Comp. Chem. 27 (2006) 1748-1755 | 10.1002/jcc.20499 | null | q-bio.QM cond-mat.soft | null | If a macromolecule is described by curvilinear coordinates or rigid
constraints are imposed, the equilibrium probability density that must be
sampled in Monte Carlo simulations includes the determinants of different
mass-metric tensors. In this work, we explicitly write the determinant of the
mass-metric tensor G and of the reduced mass-metric tensor g, for any molecule,
general internal coordinates and arbitrary constraints, as a product of two
functions; one depending only on the external coordinates that describe the
overall translation and rotation of the system, and the other only on the
internal coordinates. This work extends previous results in the literature,
proving with full generality that one may integrate out the external
coordinates and perform Monte Carlo simulations in the internal conformational
space of macromolecules. In addition, we give a general mathematical argument
showing that the factorization is a consequence of the symmetries of the metric
tensors involved. Finally, the determinant of the mass-metric tensor G is
computed explicitly in a set of curvilinear coordinates specially well-suited
for general branched molecules.
| [
{
"created": "Thu, 15 Dec 2005 18:49:43 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Feb 2006 16:28:28 GMT",
"version": "v2"
},
{
"created": "Wed, 26 Apr 2006 11:09:30 GMT",
"version": "v3"
},
{
"created": "Mon, 4 Dec 2006 17:46:41 GMT",
"version": "v4"
}
] | 2007-12-19 | [
[
"Echenique",
"Pablo",
""
],
[
"Calvo",
"Ivan",
""
]
] | If a macromolecule is described by curvilinear coordinates or rigid constraints are imposed, the equilibrium probability density that must be sampled in Monte Carlo simulations includes the determinants of different mass-metric tensors. In this work, we explicitly write the determinant of the mass-metric tensor G and of the reduced mass-metric tensor g, for any molecule, general internal coordinates and arbitrary constraints, as a product of two functions; one depending only on the external coordinates that describe the overall translation and rotation of the system, and the other only on the internal coordinates. This work extends previous results in the literature, proving with full generality that one may integrate out the external coordinates and perform Monte Carlo simulations in the internal conformational space of macromolecules. In addition, we give a general mathematical argument showing that the factorization is a consequence of the symmetries of the metric tensors involved. Finally, the determinant of the mass-metric tensor G is computed explicitly in a set of curvilinear coordinates specially well-suited for general branched molecules. |
2010.02897 | Emerson Sadurn\'i | E. Sadurn\'i and G. Luna-Acosta | Exactly solvable SIR models, their extensions and their application to
sensitive pandemic forecasting | 27 pages, single column, 10 figures | null | null | null | q-bio.PE nlin.CD physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The classic SIR model of epidemic dynamics is solved completely by
quadratures, including a time integral transform expanded in a series of
incomplete gamma functions. The model is also generalized to arbitrary
time-dependent infection rates and solved explicitly when the control parameter
depends on the accumulated infections at time $t$. Numerical results are
presented by way of comparison. Autonomous and non-autonomous generalizations
of SIR for interacting regions are also considered, including non-separability
for two or more interacting regions. A reduction of simple SIR models to one
variable leads us to a generalized logistic model, the Richards model, which we use
to fit Mexico's COVID-19 data up to day number 134. Forecasting scenarios
resulting from various fittings are discussed. A critique of the applicability
of these models to current pandemic outbreaks in terms of robustness is
provided. Finally, we obtain the bifurcation diagram for a discretized version
of the Richards model, displaying period-doubling bifurcation to chaos.
| [
{
"created": "Tue, 6 Oct 2020 17:28:23 GMT",
"version": "v1"
}
] | 2020-10-07 | [
[
"Sadurní",
"E.",
""
],
[
"Luna-Acosta",
"G.",
""
]
] | The classic SIR model of epidemic dynamics is solved completely by quadratures, including a time integral transform expanded in a series of incomplete gamma functions. The model is also generalized to arbitrary time-dependent infection rates and solved explicitly when the control parameter depends on the accumulated infections at time $t$. Numerical results are presented by way of comparison. Autonomous and non-autonomous generalizations of SIR for interacting regions are also considered, including non-separability for two or more interacting regions. A reduction of simple SIR models to one variable leads us to a generalized logistic model, the Richards model, which we use to fit Mexico's COVID-19 data up to day number 134. Forecasting scenarios resulting from various fittings are discussed. A critique of the applicability of these models to current pandemic outbreaks in terms of robustness is provided. Finally, we obtain the bifurcation diagram for a discretized version of the Richards model, displaying period-doubling bifurcation to chaos. |
0912.2955 | Pan-Jun Kim | Caroline B. Milne, Pan-Jun Kim, James A. Eddy, Nathan D. Price | Accomplishments in Genome-Scale In Silico Modeling for Industrial and
Medical Biotechnology | Highlighted in "In this issue" section of Biotechnology Journal
12/2009 | Biotechnol. J. 4, 1653 (2009) | 10.1002/biot.200900234 | null | q-bio.MN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Driven by advancements in high-throughput biological technologies and the
growing number of sequenced genomes, the construction of in silico models at
the genome scale has provided powerful tools to investigate a vast array of
biological systems and applications. Here, we review comprehensively the uses
of such models in industrial and medical biotechnology, including biofuel
generation, food production, and drug development. While the use of in silico
models is still in its early stages for delivering to industry, significant
initial successes have been achieved. For the cases presented here,
genome-scale models predict engineering strategies to enhance properties of
interest in an organism or to inhibit harmful mechanisms of pathogens. Going
forward, genome-scale in silico models promise to extend their application and
analysis scope to become a transformative tool in biotechnology.
| [
{
"created": "Tue, 15 Dec 2009 17:13:55 GMT",
"version": "v1"
}
] | 2009-12-16 | [
[
"Milne",
"Caroline B.",
""
],
[
"Kim",
"Pan-Jun",
""
],
[
"Eddy",
"James A.",
""
],
[
"Price",
"Nathan D.",
""
]
] | Driven by advancements in high-throughput biological technologies and the growing number of sequenced genomes, the construction of in silico models at the genome scale has provided powerful tools to investigate a vast array of biological systems and applications. Here, we review comprehensively the uses of such models in industrial and medical biotechnology, including biofuel generation, food production, and drug development. While the use of in silico models is still in its early stages for delivering to industry, significant initial successes have been achieved. For the cases presented here, genome-scale models predict engineering strategies to enhance properties of interest in an organism or to inhibit harmful mechanisms of pathogens. Going forward, genome-scale in silico models promise to extend their application and analysis scope to become a transformative tool in biotechnology. |
0711.0874 | Damian H. Zanette | Damian H. Zanette, Sebastian Risau Gusman | Infection spreading in a population with evolving contacts | null | null | null | null | q-bio.PE cond-mat.stat-mech physics.soc-ph | null | We study the spreading of an infection within an SIS epidemiological model on
a network. Susceptible agents are given the opportunity of breaking their links
with infected agents. Broken links are either permanently removed or
reconnected with the rest of the population. Thus, the network coevolves with
the population as the infection progresses. We show that a moderate
reconnection frequency is enough to completely suppress the infection. A
partial, rather weak isolation of infected agents suffices to eliminate the
endemic state.
| [
{
"created": "Tue, 6 Nov 2007 13:47:51 GMT",
"version": "v1"
}
] | 2007-11-07 | [
[
"Zanette",
"Damian H.",
""
],
[
"Gusman",
"Sebastian Risau",
""
]
] | We study the spreading of an infection within an SIS epidemiological model on a network. Susceptible agents are given the opportunity of breaking their links with infected agents. Broken links are either permanently removed or reconnected with the rest of the population. Thus, the network coevolves with the population as the infection progresses. We show that a moderate reconnection frequency is enough to completely suppress the infection. A partial, rather weak isolation of infected agents suffices to eliminate the endemic state. |
2005.10644 | Alun Stokes | Alun Stokes, William Hum, Jonathan Zaslavsky | A Minimal-Input Multilayer Perceptron for Predicting Drug-Drug
Interactions Without Knowledge of Drug Structure | null | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The necessity of predictive models in the drug discovery industry cannot be
overstated. With the sheer volume of potentially useful compounds that are
considered for use, it is becoming increasingly computationally difficult to
investigate the overlapping interactions between drugs. Understanding this is
also important to the layperson who needs to know what they can and cannot mix,
especially for those who use recreational drugs - which do not have the same
rigorous warnings as prescription drugs. Without access to deterministic,
experimental results for every drug combination, other methods are necessary to
bridge this knowledge gap. Ideally, such a method would require minimal inputs,
have high accuracy, and be computationally feasible. We have not come across a
model that meets all these criteria. To this end, we propose a minimal-input
multi-layer perceptron that predicts the interactions between two drugs. This
model has a great advantage of requiring no structural knowledge of the
molecules in question, and instead only uses experimentally accessible chemical
and physical properties - 20 per compound in total. Using a set of known
drug-drug interactions, and associated properties of the drugs involved, we
trained our model on a dataset of about 650,000 entries. We report an accuracy
of 0.968 on unseen samples of interactions between drugs on which the model was
trained, and an accuracy of 0.942 on unseen samples of interactions between
unseen drugs. We believe this to be a promising and highly extensible model
that has potential for high generalized predictive accuracy with further
tuning.
| [
{
"created": "Wed, 20 May 2020 17:15:19 GMT",
"version": "v1"
}
] | 2020-05-22 | [
[
"Stokes",
"Alun",
""
],
[
"Hum",
"William",
""
],
[
"Zaslavsky",
"Jonathan",
""
]
] | The necessity of predictive models in the drug discovery industry cannot be overstated. With the sheer volume of potentially useful compounds that are considered for use, it is becoming increasingly computationally difficult to investigate the overlapping interactions between drugs. Understanding this is also important to the layperson who needs to know what they can and cannot mix, especially for those who use recreational drugs - which do not have the same rigorous warnings as prescription drugs. Without access to deterministic, experimental results for every drug combination, other methods are necessary to bridge this knowledge gap. Ideally, such a method would require minimal inputs, have high accuracy, and be computationally feasible. We have not come across a model that meets all these criteria. To this end, we propose a minimal-input multi-layer perceptron that predicts the interactions between two drugs. This model has a great advantage of requiring no structural knowledge of the molecules in question, and instead only uses experimentally accessible chemical and physical properties - 20 per compound in total. Using a set of known drug-drug interactions, and associated properties of the drugs involved, we trained our model on a dataset of about 650,000 entries. We report an accuracy of 0.968 on unseen samples of interactions between drugs on which the model was trained, and an accuracy of 0.942 on unseen samples of interactions between unseen drugs. We believe this to be a promising and highly extensible model that has potential for high generalized predictive accuracy with further tuning. |
q-bio/0401020 | Ryan Taft | R.J. Taft and J.S. Mattick | Increasing biological complexity is positively correlated with the
relative genome-wide expansion of non-protein-coding DNA sequences | 25 pages, 2 figures, 1 table | Genome Biology Preprint Depository -
http://genomebiology.com/2003/5/1/P1 | null | null | q-bio.GN q-bio.PE | null | Background: Prior to the current genomic era it was suggested that the number
of protein-coding genes that an organism made use of was a valid measure of its
complexity. It is now clear, however, that major incongruities exist and that
there is only a weak relationship between biological complexity and the number
of protein coding genes. For example, using the protein-coding gene number as a
basis for evaluating biological complexity would make urochordates and insects
less complex than nematodes, and humans less complex than rice. Results: We
analyzed the ratio of noncoding to total genomic DNA (ncDNA/tgDNA) for 85
sequenced species and found that this ratio correlates well with increasing
biological complexity. The ncDNA/tgDNA ratio is generally contained within the
bandwidth of 0.05-0.24 for prokaryotes, but rises to 0.26-0.52 in unicellular
eukaryotes, and to 0.62-0.985 for developmentally complex multicellular
organisms. Significantly, prokaryotic species display a non-uniform species
distribution approaching the mean of 0.1177 ncDNA/tgDNA (p=1.58 x 10^-13), and
a nonlinear ncDNA/tgDNA relationship to genome size (r=0.15). Importantly, the
ncDNA/tgDNA ratio corrects for ploidy, and is not substantially affected by
variable loads of repetitive sequences. Conclusions: We suggest that the
observed noncoding DNA increases and compositional patterns are primarily a
function of increased information content. It is therefore possible that
introns, intergenic sequences, repeat elements, and genomic DNA previously
regarded as genetically inert may be far more important to the evolution and
functional repertoire of complex organisms than has been previously
appreciated.
| [
{
"created": "Thu, 15 Jan 2004 00:39:29 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Taft",
"R. J.",
""
],
[
"Mattick",
"J. S.",
""
]
] | Background: Prior to the current genomic era it was suggested that the number of protein-coding genes that an organism made use of was a valid measure of its complexity. It is now clear, however, that major incongruities exist and that there is only a weak relationship between biological complexity and the number of protein coding genes. For example, using the protein-coding gene number as a basis for evaluating biological complexity would make urochordates and insects less complex than nematodes, and humans less complex than rice. Results: We analyzed the ratio of noncoding to total genomic DNA (ncDNA/tgDNA) for 85 sequenced species and found that this ratio correlates well with increasing biological complexity. The ncDNA/tgDNA ratio is generally contained within the bandwidth of 0.05-0.24 for prokaryotes, but rises to 0.26-0.52 in unicellular eukaryotes, and to 0.62-0.985 for developmentally complex multicellular organisms. Significantly, prokaryotic species display a non-uniform species distribution approaching the mean of 0.1177 ncDNA/tgDNA (p=1.58 x 10^-13), and a nonlinear ncDNA/tgDNA relationship to genome size (r=0.15). Importantly, the ncDNA/tgDNA ratio corrects for ploidy, and is not substantially affected by variable loads of repetitive sequences. Conclusions: We suggest that the observed noncoding DNA increases and compositional patterns are primarily a function of increased information content. It is therefore possible that introns, intergenic sequences, repeat elements, and genomic DNA previously regarded as genetically inert may be far more important to the evolution and functional repertoire of complex organisms than has been previously appreciated. |
1108.1788 | Marco Zoli | Marco Zoli | Thermodynamics of Twisted DNA with Solvent Interaction | The Journal of Chemical Physics (2011) in press | J. Chem. Phys. vol. 135, 115101 (2011).
http://link.aip.org/link/?JCP/135/115101 | 10.1063/1.3631564 | null | q-bio.BM cond-mat.soft cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The imaginary time path integral formalism is applied to a nonlinear
Hamiltonian for a short fragment of heterogeneous DNA with a stabilizing
solvent interaction term. Torsional effects are modeled by a twist angle
between neighboring base pairs stacked along the molecule backbone. The base
pair displacements are described by an ensemble of temperature dependent paths
thus incorporating those fluctuational effects which shape the multistep
thermal denaturation. By summing over $\sim 10^7 - 10^8$ base pair paths, a
large number of double helix configurations is taken into account consistently
with the physical requirements of the model potential. The partition function
is computed as a function of the twist. It is found that the equilibrium twist
angle, peculiar to B-DNA at room temperature, yields the most stable helicoidal
geometry against thermal disruption of the base pair hydrogen bonds. This
result is corroborated by the computation of thermodynamical properties such as
fractions of open base pairs and specific heat.
| [
{
"created": "Mon, 8 Aug 2011 19:43:40 GMT",
"version": "v1"
}
] | 2011-09-19 | [
[
"Zoli",
"Marco",
""
]
] | The imaginary time path integral formalism is applied to a nonlinear Hamiltonian for a short fragment of heterogeneous DNA with a stabilizing solvent interaction term. Torsional effects are modeled by a twist angle between neighboring base pairs stacked along the molecule backbone. The base pair displacements are described by an ensemble of temperature dependent paths thus incorporating those fluctuational effects which shape the multistep thermal denaturation. By summing over $\sim 10^7 - 10^8$ base pair paths, a large number of double helix configurations is taken into account consistently with the physical requirements of the model potential. The partition function is computed as a function of the twist. It is found that the equilibrium twist angle, peculiar to B-DNA at room temperature, yields the most stable helicoidal geometry against thermal disruption of the base pair hydrogen bonds. This result is corroborated by the computation of thermodynamical properties such as fractions of open base pairs and specific heat. |
1503.02901 | Eugenio Urdapilleta | Eugenio Urdapilleta and In\'es Samengo | Effects of spike-triggered negative feedback on receptive-field
properties | 46 pages, 10 figures. Preprint accepted for publication in J. Comput.
Neurosci | J. Comput. Neurosci. 38(2), 405-425, 2015 | 10.1007/s10827-014-0546-0 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sensory neurons are often described in terms of a receptive field, that is, a
linear kernel through which stimuli are filtered before they are further
processed. If information transmission is assumed to proceed in a feedforward
cascade, the receptive field may be interpreted as the external stimulus'
profile maximizing neuronal output. The nervous system, however, contains many
feedback loops, and sensory neurons filter more currents than the ones
representing the transduced external stimulus. Some of the additional currents
are generated by the output activity of the neuron itself, and therefore
constitute feedback signals. By means of a time-frequency analysis of the
input/output transformation, here we show how feedback modifies the receptive
field. The model is applicable to various types of feedback processes, from
spike triggered intrinsic conductances to inhibitory synaptic inputs from
nearby neurons. We distinguish between the intrinsic receptive field (filtering
all input currents) and the effective receptive field (filtering only external
stimuli). Whereas the intrinsic receptive field summarizes the biophysical
properties of the neuron associated to subthreshold integration and spike
generation, only the effective receptive field can be interpreted as the
external stimulus' profile maximizing neuronal output. We demonstrate that
spike-triggered feedback shifts low-pass filtering towards band-pass
processing, transforming integrator neurons into resonators. For strong
feedback, a sharp resonance in the spectral neuronal selectivity may appear.
Our results provide a unified framework to interpret a collection of previous
experimental studies where specific feedback mechanisms were shown to modify
the filtering properties of neurons.
| [
{
"created": "Tue, 10 Mar 2015 13:45:59 GMT",
"version": "v1"
}
] | 2015-03-11 | [
[
"Urdapilleta",
"Eugenio",
""
],
[
"Samengo",
"Inés",
""
]
] | Sensory neurons are often described in terms of a receptive field, that is, a linear kernel through which stimuli are filtered before they are further processed. If information transmission is assumed to proceed in a feedforward cascade, the receptive field may be interpreted as the external stimulus' profile maximizing neuronal output. The nervous system, however, contains many feedback loops, and sensory neurons filter more currents than the ones representing the transduced external stimulus. Some of the additional currents are generated by the output activity of the neuron itself, and therefore constitute feedback signals. By means of a time-frequency analysis of the input/output transformation, here we show how feedback modifies the receptive field. The model is applicable to various types of feedback processes, from spike triggered intrinsic conductances to inhibitory synaptic inputs from nearby neurons. We distinguish between the intrinsic receptive field (filtering all input currents) and the effective receptive field (filtering only external stimuli). Whereas the intrinsic receptive field summarizes the biophysical properties of the neuron associated to subthreshold integration and spike generation, only the effective receptive field can be interpreted as the external stimulus' profile maximizing neuronal output. We demonstrate that spike-triggered feedback shifts low-pass filtering towards band-pass processing, transforming integrator neurons into resonators. For strong feedback, a sharp resonance in the spectral neuronal selectivity may appear. Our results provide a unified framework to interpret a collection of previous experimental studies where specific feedback mechanisms were shown to modify the filtering properties of neurons. |
q-bio/0411045 | Eli Eisenberg | Erez Y. Levanon, Eli Eisenberg, Rodrigo Yelin, Sergey Nemzer, Martina
Hallegger, Ronen Shemesh, Zipora Y. Fligelman, Avi Shoshan, Sarah R. Pollock,
Dan Sztybel, Moshe Olshansky, Gideon Rechavi and Michael F. Jantsch | Systematic identification of abundant A-to-I editing sites in the human
transcriptome | Pre-print version. See http://dx.doi.org/10.1038/nbt996 for a reprint | Nature Biotechnology 22, 1001-1005 (2004) | 10.1038/nbt996 | null | q-bio.GN | null | RNA editing by members of the double-stranded RNA-specific ADAR family leads
to site-specific conversion of adenosine to inosine (A-to-I) in precursor
messenger RNAs. Editing by ADARs is believed to occur in all metazoa, and is
essential for mammalian development. Currently, only a limited number of human
ADAR substrates are known, while indirect evidence suggests a substantial
fraction of all pre-mRNAs being affected. Here we describe a computational
search for ADAR editing sites in the human transcriptome, using millions of
available expressed sequences. 12,723 A-to-I editing sites were mapped in 1,637
different genes, with an estimated accuracy of 95%, raising the number of known
editing sites by two orders of magnitude. We experimentally validated our
method by verifying the occurrence of editing in 26 novel substrates. A-to-I
editing in humans primarily occurs in non-coding regions of the RNA, typically
in Alu repeats. Analysis of the large set of editing sites indicates the role
of editing in controlling dsRNA stability.
| [
{
"created": "Thu, 25 Nov 2004 20:47:34 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Levanon",
"Erez Y.",
""
],
[
"Eisenberg",
"Eli",
""
],
[
"Yelin",
"Rodrigo",
""
],
[
"Nemzer",
"Sergey",
""
],
[
"Hallegger",
"Martina",
""
],
[
"Shemesh",
"Ronen",
""
],
[
"Fligelman",
"Zipora Y.",
""
],
[
"Shoshan",
"Avi",
""
],
[
"Pollock",
"Sarah R.",
""
],
[
"Sztybel",
"Dan",
""
],
[
"Olshansky",
"Moshe",
""
],
[
"Rechavi",
"Gideon",
""
],
[
"Jantsch",
"Michael F.",
""
]
] | RNA editing by members of the double-stranded RNA-specific ADAR family leads to site-specific conversion of adenosine to inosine (A-to-I) in precursor messenger RNAs. Editing by ADARs is believed to occur in all metazoa, and is essential for mammalian development. Currently, only a limited number of human ADAR substrates are known, while indirect evidence suggests a substantial fraction of all pre-mRNAs being affected. Here we describe a computational search for ADAR editing sites in the human transcriptome, using millions of available expressed sequences. 12,723 A-to-I editing sites were mapped in 1,637 different genes, with an estimated accuracy of 95%, raising the number of known editing sites by two orders of magnitude. We experimentally validated our method by verifying the occurrence of editing in 26 novel substrates. A-to-I editing in humans primarily occurs in non-coding regions of the RNA, typically in Alu repeats. Analysis of the large set of editing sites indicates the role of editing in controlling dsRNA stability. |
1002.4903 | Alexei Koulakov | Alexei A. Koulakov and Dmitry Rinberg | Sparse incomplete representations: A novel role for olfactory granule
cells | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mitral cells of the olfactory bulb form sparse representations of the
odorants and transmit this information to the cortex. The olfactory code
carried by the mitral cells is sparser than the inputs that they receive. In
this study we analyze the mechanisms and functional significance of sparse
olfactory codes. We consider a model of olfactory bulb containing populations
of excitatory mitral and inhibitory granule cells. We argue that sparse codes
may emerge as a result of self organization in the network leading to the
precise balance between mitral cells' excitatory inputs and inhibition provided
by the granule cells. We propose a novel role for the olfactory granule cells.
We show that these cells can build representations of odorant stimuli that are
not fully accurate. Due to the incompleteness in the granule cell
representation, the exact excitation-inhibition balance is established only for
some mitral cells leading to sparse responses of the mitral cell. Our model
suggests a functional significance to the dendrodendritic synapses that mediate
interactions between mitral and granule cells. The model accounts for the
sparse olfactory code in the steady state and predicts that transient dynamics
may be less sparse.
| [
{
"created": "Fri, 26 Feb 2010 00:33:16 GMT",
"version": "v1"
}
] | 2010-03-01 | [
[
"Koulakov",
"Alexei A.",
""
],
[
"Rinberg",
"Dmitry",
""
]
] | Mitral cells of the olfactory bulb form sparse representations of the odorants and transmit this information to the cortex. The olfactory code carried by the mitral cells is sparser than the inputs that they receive. In this study we analyze the mechanisms and functional significance of sparse olfactory codes. We consider a model of olfactory bulb containing populations of excitatory mitral and inhibitory granule cells. We argue that sparse codes may emerge as a result of self organization in the network leading to the precise balance between mitral cells' excitatory inputs and inhibition provided by the granule cells. We propose a novel role for the olfactory granule cells. We show that these cells can build representations of odorant stimuli that are not fully accurate. Due to the incompleteness in the granule cell representation, the exact excitation-inhibition balance is established only for some mitral cells leading to sparse responses of the mitral cell. Our model suggests a functional significance to the dendrodendritic synapses that mediate interactions between mitral and granule cells. The model accounts for the sparse olfactory code in the steady state and predicts that transient dynamics may be less sparse. |
0810.2872 | Matthias Bethge | Jan Eichhorn, Fabian Sinz and Matthias Bethge | Natural Image Coding in V1: How Much Use is Orientation Selectivity? | null | null | 10.1371/journal.pcbi.1000336 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Orientation selectivity is the most striking feature of simple cell coding in
V1 which has been shown to emerge from the reduction of higher-order
correlations in natural images in a large variety of statistical image models.
The most parsimonious one among these models is linear Independent Component
Analysis (ICA), whereas second-order decorrelation transformations such as
Principal Component Analysis (PCA) do not yield oriented filters. Because of
this finding it has been suggested that the emergence of orientation
selectivity may be explained by higher-order redundancy reduction. In order to
assess the tenability of this hypothesis, it is an important empirical question
how much more redundancies can be removed with ICA in comparison to PCA, or
other second-order decorrelation methods. This question has not yet been
settled, as over the last ten years contradicting results have been reported
ranging from less than five to more than one hundred percent extra gain for ICA.
Here, we aim at resolving this conflict by presenting a very careful and
comprehensive analysis using three evaluation criteria related to redundancy
reduction: In addition to the multi-information and the average log-loss we
compute, for the first time, complete rate-distortion curves for ICA in
comparison with PCA. Without exception, we find that the advantage of the ICA
filters is surprisingly small. Furthermore, we show that a simple spherically
symmetric distribution with only two parameters can fit the data even better
than the probabilistic model underlying ICA. Since spherically symmetric models
are agnostic with respect to the specific filter shapes, we conclude that
orientation selectivity is unlikely to play a critical role for redundancy
reduction.
| [
{
"created": "Thu, 16 Oct 2008 19:11:39 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Oct 2008 20:26:01 GMT",
"version": "v2"
}
] | 2015-05-13 | [
[
"Eichhorn",
"Jan",
""
],
[
"Sinz",
"Fabian",
""
],
[
"Bethge",
"Matthias",
""
]
] | Orientation selectivity is the most striking feature of simple cell coding in V1 which has been shown to emerge from the reduction of higher-order correlations in natural images in a large variety of statistical image models. The most parsimonious one among these models is linear Independent Component Analysis (ICA), whereas second-order decorrelation transformations such as Principal Component Analysis (PCA) do not yield oriented filters. Because of this finding it has been suggested that the emergence of orientation selectivity may be explained by higher-order redundancy reduction. In order to assess the tenability of this hypothesis, it is an important empirical question how much more redundancies can be removed with ICA in comparison to PCA, or other second-order decorrelation methods. This question has not yet been settled, as over the last ten years contradicting results have been reported ranging from less than five to more than one hundred percent extra gain for ICA. Here, we aim at resolving this conflict by presenting a very careful and comprehensive analysis using three evaluation criteria related to redundancy reduction: In addition to the multi-information and the average log-loss we compute, for the first time, complete rate-distortion curves for ICA in comparison with PCA. Without exception, we find that the advantage of the ICA filters is surprisingly small. Furthermore, we show that a simple spherically symmetric distribution with only two parameters can fit the data even better than the probabilistic model underlying ICA. Since spherically symmetric models are agnostic with respect to the specific filter shapes, we conclude that orientation selectivity is unlikely to play a critical role for redundancy reduction. |
2005.14533 | Ling Xue Dr | Ling Xue, Shuanglin Jing, Joel C. Miller, Wei Sun, Huafeng Li, Jose
Guillermo Estrada-Franco, James M Hyman, Huaiping Zhu | A Data-Driven Network Model for the Emerging COVID-19 Epidemics in
Wuhan, Toronto and Italy | null | Mathematical Biosciences 2020 | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ongoing Coronavirus Disease 2019 (COVID-19) pandemic threatens the health
of humans and causes great economic losses. Predictive modelling and
forecasting the epidemic trends are essential for developing countermeasures to
mitigate this pandemic. We develop a network model, where each node represents
an individual and the edges represent contacts between individuals where the
infection can spread. The individuals are classified based on the number of
contacts they have each day (their node degrees) and their infection status.
The transmission network model was fitted separately to the reported data for
the COVID-19 epidemic in Wuhan (China), Toronto (Canada), and the Italian
Republic using a Markov Chain Monte Carlo (MCMC) optimization algorithm. Our
model fits all three regions well with narrow confidence intervals and could be
adapted to simulate other megacities or regions. The model projections on the
role of containment strategies can help inform public health authorities to
plan control measures.
| [
{
"created": "Thu, 28 May 2020 14:35:32 GMT",
"version": "v1"
}
] | 2020-06-01 | [
[
"Xue",
"Ling",
""
],
[
"Jing",
"Shuanglin",
""
],
[
"Miller",
"Joel C.",
""
],
[
"Sun",
"Wei",
""
],
[
"Li",
"Huafeng",
""
],
[
"Estrada-Franco",
"Jose Guillermo",
""
],
[
"Hyman",
"James M",
""
],
[
"Zhu",
"Huaiping",
""
]
] | The ongoing Coronavirus Disease 2019 (COVID-19) pandemic threatens the health of humans and causes great economic losses. Predictive modelling and forecasting the epidemic trends are essential for developing countermeasures to mitigate this pandemic. We develop a network model, where each node represents an individual and the edges represent contacts between individuals where the infection can spread. The individuals are classified based on the number of contacts they have each day (their node degrees) and their infection status. The transmission network model was fitted separately to the reported data for the COVID-19 epidemic in Wuhan (China), Toronto (Canada), and the Italian Republic using a Markov Chain Monte Carlo (MCMC) optimization algorithm. Our model fits all three regions well with narrow confidence intervals and could be adapted to simulate other megacities or regions. The model projections on the role of containment strategies can help inform public health authorities to plan control measures. |
1806.07341 | Dong Xu | Chao Fang, Yi Shang, and Dong Xu | Improving Protein Gamma-Turn Prediction Using Inception Capsule Networks | null | null | null | null | q-bio.QM q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein gamma-turn prediction is useful in protein function studies and
experimental design. Several methods for gamma-turn prediction have been
developed, but the results were unsatisfactory, with Matthews correlation
coefficients (MCC) around 0.2-0.4. One reason for the low prediction accuracy
is the limited capacity of the methods; in particular, the traditional
machine-learning methods like SVM may not extract high-level features well to
distinguish between turns and non-turns. Hence, it is worthwhile exploring new
machine-learning methods for the prediction. A cutting-edge deep neural
network, named Capsule Network (CapsuleNet), provides a new opportunity for
gamma-turn prediction. Even when the number of input samples is relatively
small, the capsules from CapsuleNet are very effective to extract high-level
features for classification tasks. Here, we propose a deep inception capsule
network for gamma-turn prediction. Its performance on the gamma-turn benchmark
GT320 achieved an MCC of 0.45, which significantly outperformed the previous
best method with an MCC of 0.38. This is the first gamma-turn prediction method
utilizing deep neural networks. Also, to our knowledge, it is the first
published bioinformatics application utilizing a capsule network, which will
provide a useful example for the community.
| [
{
"created": "Mon, 11 Jun 2018 09:07:35 GMT",
"version": "v1"
}
] | 2018-06-20 | [
[
"Fang",
"Chao",
""
],
[
"Shang",
"Yi",
""
],
[
"Xu",
"Dong",
""
]
] | Protein gamma-turn prediction is useful in protein function studies and experimental design. Several methods for gamma-turn prediction have been developed, but the results were unsatisfactory, with Matthews correlation coefficients (MCC) around 0.2-0.4. One reason for the low prediction accuracy is the limited capacity of the methods; in particular, the traditional machine-learning methods like SVM may not extract high-level features well to distinguish between turns and non-turns. Hence, it is worthwhile exploring new machine-learning methods for the prediction. A cutting-edge deep neural network, named Capsule Network (CapsuleNet), provides a new opportunity for gamma-turn prediction. Even when the number of input samples is relatively small, the capsules from CapsuleNet are very effective to extract high-level features for classification tasks. Here, we propose a deep inception capsule network for gamma-turn prediction. Its performance on the gamma-turn benchmark GT320 achieved an MCC of 0.45, which significantly outperformed the previous best method with an MCC of 0.38. This is the first gamma-turn prediction method utilizing deep neural networks. Also, to our knowledge, it is the first published bioinformatics application utilizing a capsule network, which will provide a useful example for the community. |
1308.3673 | Michael Thomas | Wendy K. Caldwell, Benjamin Freedman, Luke Settles, Michael M. Thomas,
Anarina Murillo, Erika Camacho, Stephen Wirkus | Substance Abuse via Legally Prescribed Drugs: The Case of Vicodin in the
United States | null | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vicodin is the most commonly prescribed pain reliever in the United States.
Research indicates that there are two million people who are currently abusing
Vicodin, and the majority of those who abuse Vicodin were initially exposed to
it via prescription. Our goal is to determine the most effective strategies for
reducing the overall population of Vicodin abusers. More specifically, we focus
on whether prevention methods aimed at educating doctors and patients on the
potential for drug abuse or treatment methods implemented after a person abuses
Vicodin will have a greater overall impact. We consider one linear and two
non-linear compartmental models in which medical users of Vicodin can
transition into the abuser compartment or leave the population by no longer
taking the drug. Once Vicodin abusers, people can transition into a treatment
compartment, with the possibility of leaving the population through successful
completion of treatment or of relapsing and re-entering the abusive
compartment. The linear model assumes no social interaction, while both
non-linear models consider interaction. One considers interaction with abusers
affecting the relapse rate, while the other assumes both this and an additional
interaction between the number of abusers and the number of new prescriptions.
Sensitivity analyses are conducted varying the rates of success of these
intervention methods measured by the parameters to determine which strategy has
the greatest impact on controlling the population of Vicodin abusers. From
these models and analyses, we determine that manipulating parameters tied to
prevention measures has a greater impact on reducing the population of abusers
than manipulating parameters associated with treatment. We also note that
increasing the rate at which abusers seek treatment affects the population of
abusers more than the success rate of treatment itself.
| [
{
"created": "Fri, 26 Jul 2013 19:26:55 GMT",
"version": "v1"
}
] | 2013-08-19 | [
[
"Caldwell",
"Wendy K.",
""
],
[
"Freedman",
"Benjamin",
""
],
[
"Settles",
"Luke",
""
],
[
"Thomas",
"Michael M.",
""
],
[
"Murillo",
"Anarina",
""
],
[
"Camacho",
"Erika",
""
],
[
"Wirkus",
"Stephen",
""
]
] | Vicodin is the most commonly prescribed pain reliever in the United States. Research indicates that there are two million people who are currently abusing Vicodin, and the majority of those who abuse Vicodin were initially exposed to it via prescription. Our goal is to determine the most effective strategies for reducing the overall population of Vicodin abusers. More specifically, we focus on whether prevention methods aimed at educating doctors and patients on the potential for drug abuse or treatment methods implemented after a person abuses Vicodin will have a greater overall impact. We consider one linear and two non-linear compartmental models in which medical users of Vicodin can transition into the abuser compartment or leave the population by no longer taking the drug. Once Vicodin abusers, people can transition into a treatment compartment, with the possibility of leaving the population through successful completion of treatment or of relapsing and re-entering the abusive compartment. The linear model assumes no social interaction, while both non-linear models consider interaction. One considers interaction with abusers affecting the relapse rate, while the other assumes both this and an additional interaction between the number of abusers and the number of new prescriptions. Sensitivity analyses are conducted varying the rates of success of these intervention methods measured by the parameters to determine which strategy has the greatest impact on controlling the population of Vicodin abusers. From these models and analyses, we determine that manipulating parameters tied to prevention measures has a greater impact on reducing the population of abusers than manipulating parameters associated with treatment. We also note that increasing the rate at which abusers seek treatment affects the population of abusers more than the success rate of treatment itself. |
2309.11732 | Matthew Macaulay | Matthew Macaulay, Mathieu Fourment | Differentiable Phylogenetics via Hyperbolic Embeddings with Dodonaphy | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Motivation: Navigating the high dimensional space of discrete trees for
phylogenetics presents a challenging problem for tree optimisation. To address
this, hyperbolic embeddings of trees offer a promising approach to encoding
trees efficiently in continuous spaces. However, they require a differentiable
tree decoder to optimise the phylogenetic likelihood. We present soft-NJ, a
differentiable version of neighbour-joining that enables gradient-based
optimisation over the space of trees.
Results: We illustrate the potential for differentiable optimisation over
tree space for maximum likelihood inference. We then perform variational
Bayesian phylogenetics by optimising embedding distributions in hyperbolic
space. We compare the performance of this approximation technique on eight
benchmark datasets to state-of-the-art methods. However, geometric frustrations of
the embedding locations produce local optima that pose a challenge for
optimisation.
Availability: Dodonaphy is freely available on the web at
https://github.com/mattapow/dodonaphy. It includes an implementation of
soft-NJ.
| [
{
"created": "Thu, 21 Sep 2023 02:06:53 GMT",
"version": "v1"
}
] | 2023-09-22 | [
[
"Macaulay",
"Matthew",
""
],
[
"Fourment",
"Mathieu",
""
]
] | Motivation: Navigating the high dimensional space of discrete trees for phylogenetics presents a challenging problem for tree optimisation. To address this, hyperbolic embeddings of trees offer a promising approach to encoding trees efficiently in continuous spaces. However, they require a differentiable tree decoder to optimise the phylogenetic likelihood. We present soft-NJ, a differentiable version of neighbour-joining that enables gradient-based optimisation over the space of trees. Results: We illustrate the potential for differentiable optimisation over tree space for maximum likelihood inference. We then perform variational Bayesian phylogenetics by optimising embedding distributions in hyperbolic space. We compare the performance of this approximation technique on eight benchmark datasets to state-of-the-art methods. However, geometric frustrations of the embedding locations produce local optima that pose a challenge for optimisation. Availability: Dodonaphy is freely available on the web at https://github.com/mattapow/dodonaphy. It includes an implementation of soft-NJ. |
1401.5049 | Michael Deem | Pu Han, Liang Ren Niestemski, Jeffrey E. Barrick, and Michael W. Deem | Physical Model of the Immune Response of Bacteria Against Bacteriophage
Through the Adaptive CRISPR-Cas Immune System | 37 pages, 13 figures | Phys. Biol. 10 (2013) 025004 | 10.1088/1478-3975/10/2/025004 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bacteria and archaea have evolved an adaptive, heritable immune system that
recognizes and protects against viruses or plasmids. This system, known as the
CRISPR-Cas system, allows the host to recognize and incorporate short foreign
DNA or RNA sequences, called `spacers', into its CRISPR system. Spacers in the
CRISPR system provide a record of the history of bacteria and phage
coevolution. We use a physical model to study the dynamics of this coevolution
as it evolves stochastically over time. We focus on the impact of mutation and
recombination on bacteria and phage evolution and evasion. We discuss the
effect of different spacer deletion mechanisms on the coevolutionary dynamics.
We make predictions about bacteria and phage population growth, spacer
diversity within the CRISPR locus, and spacer protection against the phage
population.
| [
{
"created": "Mon, 20 Jan 2014 20:41:57 GMT",
"version": "v1"
}
] | 2014-01-21 | [
[
"Han",
"Pu",
""
],
[
"Niestemski",
"Liang Ren",
""
],
[
"Barrick",
"Jeffrey E.",
""
],
[
"Deem",
"Michael W.",
""
]
] | Bacteria and archaea have evolved an adaptive, heritable immune system that recognizes and protects against viruses or plasmids. This system, known as the CRISPR-Cas system, allows the host to recognize and incorporate short foreign DNA or RNA sequences, called `spacers', into its CRISPR system. Spacers in the CRISPR system provide a record of the history of bacteria and phage coevolution. We use a physical model to study the dynamics of this coevolution as it evolves stochastically over time. We focus on the impact of mutation and recombination on bacteria and phage evolution and evasion. We discuss the effect of different spacer deletion mechanisms on the coevolutionary dynamics. We make predictions about bacteria and phage population growth, spacer diversity within the CRISPR locus, and spacer protection against the phage population. |
1505.03176 | Cheng Ly | Cheng Ly | Firing Rate Dynamics in Recurrent Spiking Neural Networks with Intrinsic
and Network Heterogeneity | 21 pages, 5 figures | null | 10.1007/s10827-015-0578-0 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Heterogeneity of neural attributes has recently gained a lot of attention and
is increasingly recognized as a crucial feature in neural processing. Despite its
importance, this physiological feature has traditionally been neglected in
theoretical studies of cortical neural networks. Thus, there is still a lot
unknown about the consequences of cellular and circuit heterogeneity in spiking
neural networks. In particular, combining network or synaptic heterogeneity and
intrinsic heterogeneity has yet to be considered systematically despite the
fact that both are known to exist and likely have significant roles in neural
network dynamics. In a canonical recurrent spiking neural network model, we
study how these two forms of heterogeneity lead to different distributions of
excitatory firing rates. To analytically characterize how these types of
heterogeneities affect the network, we employ a dimension reduction method that
relies on a combination of Monte Carlo simulations and probability density
function equations. We find that the relationship between intrinsic and network
heterogeneity has a strong effect on the overall level of heterogeneity of the
firing rates. Specifically, this relationship can lead to amplification or
attenuation of firing rate heterogeneity, and these effects depend on whether
the recurrent network is firing asynchronously or rhythmically. These
observations are captured with the aforementioned reduction method, and
furthermore simpler analytic descriptions based on this dimension reduction
method are developed. The final analytic descriptions provide compact and
descriptive formulas for how the relationship between intrinsic and network
heterogeneity determines the firing rate heterogeneity dynamics in various
settings.
| [
{
"created": "Tue, 12 May 2015 21:49:36 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Nov 2016 16:22:35 GMT",
"version": "v2"
}
] | 2016-11-22 | [
[
"Ly",
"Cheng",
""
]
] | Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature in neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, there is still a lot unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity and intrinsic heterogeneity has yet to be considered systematically despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneities affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and furthermore simpler analytic descriptions based on this dimension reduction method are developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings. |
1509.05767 | Mallenahalli Naresh Kumar Prof. Dr. | M.Naresh Kumar, M.V.R Seshasai, K.S Vara Prasad, V. Kamala, K.V
Ramana, R.S. Dwivedi, P.S. Roy | A new hybrid spectral similarity measure for discrimination of Vigna
species | null | International Journal of Remote Sensing, 32(14),4041-4053, 2011 | 10.1080/01431161.2010.484431 | null | q-bio.QM physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The reflectance spectrum of the species in hyperspectral data can be
modelled as an n-dimensional vector. The spectral angle mapper computes the
angle between the vectors which is used to discriminate the species. The
spectral information divergence models the data as a probability distribution
so that the spectral variability between the bands can be extracted using the
stochastic measures. The hybrid approach of spectral angle mapper and spectral
information divergence is found to be a better discriminator than spectral angle
mapper or spectral information divergence alone. The spectral correlation angle
is computed as a cosine of the angle of the Pearsonian correlation coefficient
between the vectors. The spectral correlation angle is a better measure than
the spectral angle mapper as it considers only standardized values of the
vectors rather than the absolute values of the vector. In the present paper a
new hybrid measure is proposed which is based on the spectral correlation angle
and the spectral information divergence. The proposed method has been compared
with the hybrid approach of spectral information divergence and spectral angle
mapper for discrimination of crops belonging to Vigna species using measures
like relative spectral discriminatory power, relative discriminatory
probability and relative discriminatory entropy in different spectral regions.
Experimental results using the laboratory spectra show that the proposed method
gives higher relative discriminatory power in the 400nm-700nm spectral region.
| [
{
"created": "Fri, 18 Sep 2015 04:17:11 GMT",
"version": "v1"
}
] | 2015-09-30 | [
[
"Kumar",
"M. Naresh",
""
],
[
"Seshasai",
"M. V. R",
""
],
[
"Prasad",
"K. S Vara",
""
],
[
"Kamala",
"V.",
""
],
[
"Ramana",
"K. V",
""
],
[
"Dwivedi",
"R. S.",
""
],
[
"Roy",
"P. S.",
""
]
] | The reflectance spectrum of the species in hyperspectral data can be modelled as an n-dimensional vector. The spectral angle mapper computes the angle between the vectors which is used to discriminate the species. The spectral information divergence models the data as a probability distribution so that the spectral variability between the bands can be extracted using the stochastic measures. The hybrid approach of spectral angle mapper and spectral information divergence is found to be a better discriminator than spectral angle mapper or spectral information divergence alone. The spectral correlation angle is computed as a cosine of the angle of the Pearsonian correlation coefficient between the vectors. The spectral correlation angle is a better measure than the spectral angle mapper as it considers only standardized values of the vectors rather than the absolute values of the vector. In the present paper a new hybrid measure is proposed which is based on the spectral correlation angle and the spectral information divergence. The proposed method has been compared with the hybrid approach of spectral information divergence and spectral angle mapper for discrimination of crops belonging to Vigna species using measures like relative spectral discriminatory power, relative discriminatory probability and relative discriminatory entropy in different spectral regions. Experimental results using the laboratory spectra show that the proposed method gives higher relative discriminatory power in the 400nm-700nm spectral region. |
q-bio/0508033 | Igor Volkov | Tommaso Zillio, Igor Volkov, Jayanth R. Banavar, Stephen P. Hubbell,
Amos Maritan | Spatial Scaling in Model Plant Communities | 10 pages, 3 figures | Phys. Rev. Lett. 95, 098101 (2005) | 10.1103/PhysRevLett.95.098101 | null | q-bio.PE q-bio.QM | null | We present an analytically tractable variant of the voter model that provides
a quantitatively accurate description of beta-diversity (two-point correlation
function) in two tropical forests. The model exhibits novel scaling behavior
that leads to links between ecological measures such as relative species
abundance and the species area relationship.
| [
{
"created": "Wed, 24 Aug 2005 02:16:48 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Zillio",
"Tommaso",
""
],
[
"Volkov",
"Igor",
""
],
[
"Banavar",
"Jayanth R.",
""
],
[
"Hubbell",
"Stephen P.",
""
],
[
"Maritan",
"Amos",
""
]
] | We present an analytically tractable variant of the voter model that provides a quantitatively accurate description of beta-diversity (two-point correlation function) in two tropical forests. The model exhibits novel scaling behavior that leads to links between ecological measures such as relative species abundance and the species area relationship. |
q-bio/0501010 | Philipp Messer W. | Philipp W. Messer, Peter F. Arndt, Michael L\"assig | A Solvable Sequence Evolution Model and Genomic Correlations | 4 pages, 4 figures | Physical Review Letters 94, 138103 (2005) | 10.1103/PhysRevLett.94.138103 | null | q-bio.GN cond-mat.stat-mech | null | We study a minimal model for genome evolution whose elementary processes are
single site mutation, duplication and deletion of sequence regions and
insertion of random segments. These processes are found to generate long-range
correlations in the composition of letters as long as the sequence length is
growing, i.e., the combined rates of duplications and insertions are higher
than the deletion rate. For constant sequence length, on the other hand, all
initial correlations decay exponentially. These results are obtained
analytically and by simulations. They are compared with the long-range
correlations observed in genomic DNA, and the implications for genome evolution
are discussed.
| [
{
"created": "Sun, 9 Jan 2005 20:37:57 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Messer",
"Philipp W.",
""
],
[
"Arndt",
"Peter F.",
""
],
[
"Lässig",
"Michael",
""
]
] | We study a minimal model for genome evolution whose elementary processes are single site mutation, duplication and deletion of sequence regions and insertion of random segments. These processes are found to generate long-range correlations in the composition of letters as long as the sequence length is growing, i.e., the combined rates of duplications and insertions are higher than the deletion rate. For constant sequence length, on the other hand, all initial correlations decay exponentially. These results are obtained analytically and by simulations. They are compared with the long-range correlations observed in genomic DNA, and the implications for genome evolution are discussed. |
1003.1601 | Cencini Massimo Dr. | Simone Pigolotti and Massimo Cencini | Coexistence and invasibility in a two-species competition model with
habitat-preference | 20 Pages, 6 Figures revised version accepted on J. Theor. Bio | J. Theor. Bio. 265 (2010) 609-617 | 10.1016/j.jtbi.2010.05.041 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The outcome of competition among species is influenced by the spatial
distribution of species and effects such as demographic stochasticity,
immigration fluxes, and the existence of preferred habitats. We introduce an
individual-based model describing the competition of two species and
incorporating all the above ingredients. We find that the presence of habitat
preference --- generating spatial niches --- strongly stabilizes the
coexistence of the two species. Eliminating habitat preference --- neutral
dynamics --- the model generates patterns, such as distribution of population
sizes, practically identical to those obtained in the presence of habitat
preference, provided a higher immigration rate is considered. Notwithstanding
the similarity in the population distribution, we show that invasibility
properties depend on habitat preference in a non-trivial way. In particular,
the neutral model results more invasible or less invasible depending on
whether the comparison is made at equal immigration rate or at equal
distribution of population size, respectively. We discuss the relevance of
these results for the interpretation of invasibility experiments and the
species occupancy of preferred habitats.
| [
{
"created": "Mon, 8 Mar 2010 11:28:49 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Jun 2010 14:50:51 GMT",
"version": "v2"
}
] | 2010-07-19 | [
[
"Pigolotti",
"Simone",
""
],
[
"Cencini",
"Massimo",
""
]
] | The outcome of competition among species is influenced by the spatial distribution of species and effects such as demographic stochasticity, immigration fluxes, and the existence of preferred habitats. We introduce an individual-based model describing the competition of two species and incorporating all the above ingredients. We find that the presence of habitat preference --- generating spatial niches --- strongly stabilizes the coexistence of the two species. Eliminating habitat preference --- neutral dynamics --- the model generates patterns, such as distribution of population sizes, practically identical to those obtained in the presence of habitat preference, provided a higher immigration rate is considered. Notwithstanding the similarity in the population distribution, we show that invasibility properties depend on habitat preference in a non-trivial way. In particular, the neutral model results more invasible or less invasible depending on whether the comparison is made at equal immigration rate or at equal distribution of population size, respectively. We discuss the relevance of these results for the interpretation of invasibility experiments and the species occupancy of preferred habitats. |
2312.08809 | Koujin Takeda | Yusuke Endo, Koujin Takeda | Performance evaluation of matrix factorization for fMRI data | 22 pages, 8 figures | Neural Computation (2024) 36 (1) 128-150 | 10.1162/neco_a_01628 | null | q-bio.NC cond-mat.dis-nn cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the study of the brain, there is a hypothesis that sparse coding is
realized in the representation of external stimuli, which has recently been
confirmed experimentally for visual stimuli. However, unlike for specific
functional regions of the brain, sparse coding in information processing
across the whole brain has not been sufficiently clarified. In this
study, we investigate the validity of sparse coding in the whole human brain by
applying various matrix factorization methods to functional magnetic resonance
imaging data of neural activities in the whole human brain. The results support
the sparse coding hypothesis for information representation in the whole human
brain: features extracted by sparse MF methods (SparsePCA or MOD under a
high-sparsity setting) or by an approximately sparse MF method (FastICA)
classify external visual stimuli more accurately than non-sparse MF methods or
sparse MF methods under a low-sparsity setting.
| [
{
"created": "Thu, 14 Dec 2023 10:48:50 GMT",
"version": "v1"
}
] | 2023-12-15 | [
[
"Endo",
"Yusuke",
""
],
[
"Takeda",
"Koujin",
""
]
] | In the study of the brain, there is a hypothesis that sparse coding is realized in information representation of external stimuli, which is experimentally confirmed for visual stimulus recently. However, unlike the specific functional region in the brain, sparse coding in information processing in the whole brain has not been clarified sufficiently. In this study, we investigate the validity of sparse coding in the whole human brain by applying various matrix factorization methods to functional magnetic resonance imaging data of neural activities in the whole human brain. The result suggests sparse coding hypothesis in information representation in the whole human brain, because extracted features from sparse MF method, SparsePCA or MOD under high sparsity setting, or approximate sparse MF method, FastICA, can classify external visual stimuli more accurately than non-sparse MF method or sparse MF method under low sparsity setting. |
1111.5297 | David Spivak | Tristan Giesa, David Spivak, Markus Buehler | Reoccurring patterns in hierarchical protein materials and music: The
power of analogies | 13 pages, 3 figures | T. Giesa, D.I. Spivak, M.J. Buehler. BioNanoScience: Volume 1,
Issue 4 (2011), Page 153-161 | 10.1007/s12668-011-0022-5 | null | q-bio.BM math.CT physics.soc-ph | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Complex hierarchical structures composed of simple nanoscale building blocks
form the basis of most biological materials. Here we demonstrate how analogies
between seemingly different fields enable the understanding of general
principles by which functional properties in hierarchical systems emerge,
similar to an analogy learning process. Specifically, natural hierarchical
materials like spider silk exhibit properties comparable to classical music in
terms of their hierarchical structure and function. As a comparative tool, here
we apply hierarchical ontology logs (ologs) that follow a rigorous mathematical
formulation based on category theory to provide an insightful system
representation by expressing knowledge in a conceptual map. We explain the
process of analogy creation, draw connections at several levels of hierarchy
and identify similar patterns that govern the structure of the hierarchical
systems silk and music and discuss the impact of the derived analogy for
nanotechnology.
| [
{
"created": "Tue, 22 Nov 2011 19:30:00 GMT",
"version": "v1"
}
] | 2011-11-23 | [
[
"Giesa",
"Tristan",
""
],
[
"Spivak",
"David",
""
],
[
"Buehler",
"Markus",
""
]
] | Complex hierarchical structures composed of simple nanoscale building blocks form the basis of most biological materials. Here we demonstrate how analogies between seemingly different fields enable the understanding of general principles by which functional properties in hierarchical systems emerge, similar to an analogy learning process. Specifically, natural hierarchical materials like spider silk exhibit properties comparable to classical music in terms of their hierarchical structure and function. As a comparative tool here we apply hierarchical ontology logs (olog) that follow a rigorous mathematical formulation based on category theory to provide an insightful system representation by expressing knowledge in a conceptual map. We explain the process of analogy creation, draw connections at several levels of hierarchy and identify similar patterns that govern the structure of the hierarchical systems silk and music and discuss the impact of the derived analogy for nanotechnology. |
1203.6061 | Alexander Peyser | Alexander Peyser and Wolfgang Nonner | Voltage sensing in ion channels: Mesoscale simulations of biological
devices | arXiv admin note: substantial text overlap with arXiv:1112.2994 | Phys. Rev. E 86, 011910 (2012) | 10.1103/PhysRevE.86.011910 | null | q-bio.BM physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electrical signaling via voltage-gated ion channels depends upon the function
of a voltage sensor (VS), identified with the S1-S4 domain in voltage-gated K+
channels. Here we investigate some energetic aspects of the sliding-helix model
of the VS using simulations based on VS charges, linear dielectrics and
whole-body motion. Model electrostatics in voltage-clamped boundary conditions
are solved using a boundary element method. The statistical mechanical
consequences of the electrostatic configurational energy are computed to gain
insight into the sliding-helix mechanism and to predict experimentally measured
ensemble properties such as gating charge displaced by an applied voltage.
Those consequences and ensemble properties are investigated for two alternate
S4 configurations, \alpha- and 3(10)-helical. Both forms of VS are found to
have an inherent electrostatic stability. Maximal charge displacement is
limited by geometry, specifically the range of movement where S4 charges and
counter-charges overlap in the region of weak dielectric. Charge displacement
responds more steeply to voltage in the \alpha-helical than the 3(10)-helical
sensor. This difference is due to differences on the order of 0.1 eV in the
landscapes of electrostatic energy. As a step toward integrating these VS
models into a full-channel model, we include a hypothetical external load in
the Hamiltonian of the system and analyze the energetic input/output relation of
the VS.
| [
{
"created": "Tue, 27 Mar 2012 13:40:34 GMT",
"version": "v1"
},
{
"created": "Tue, 1 May 2012 23:08:48 GMT",
"version": "v2"
},
{
"created": "Mon, 4 Jun 2012 21:13:55 GMT",
"version": "v3"
},
{
"created": "Fri, 13 Jul 2012 08:46:21 GMT",
"version": "v4"
}
] | 2019-01-18 | [
[
"Peyser",
"Alexander",
""
],
[
"Nonner",
"Wolfgang",
""
]
] | Electrical signaling via voltage-gated ion channels depends upon the function of a voltage sensor (VS), identified with the S1-S4 domain in voltage-gated K+ channels. Here we investigate some energetic aspects of the sliding-helix model of the VS using simulations based on VS charges, linear dielectrics and whole-body motion. Model electrostatics in voltage-clamped boundary conditions are solved using a boundary element method. The statistical mechanical consequences of the electrostatic configurational energy are computed to gain insight into the sliding-helix mechanism and to predict experimentally measured ensemble properties such as gating charge displaced by an applied voltage. Those consequences and ensemble properties are investigated for two alternate S4 configurations, \alpha- and 3(10)-helical. Both forms of VS are found to have an inherent electrostatic stability. Maximal charge displacement is limited by geometry, specifically the range of movement where S4 charges and counter-charges overlap in the region of weak dielectric. Charge displacement responds more steeply to voltage in the \alpha-helical than the 3(10)-helical sensor. This difference is due to differences on the order of 0.1 eV in the landscapes of electrostatic energy. As a step toward integrating these VS models into a full-channel model, we include a hypothetical external load in the Hamiltonian of the system and analyze the energetic in/output relation of the VS. |
2112.07422 | Jordan Douglas | Jordan Douglas and David Welch | PEACH Tree: A Multiple Sequence Alignment and Tree Display Tool for
Epidemiologists | 4 pages, 1 figure, under review as an Applications Node | null | null | null | q-bio.QM q-bio.GN q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | PEACH Tree is an easy-to-use, online tool for displaying multiple sequence
alignments and phylogenetic trees side-by-side. PEACH Tree is powerful for
rapidly tracing evolutionary and transmission histories by filtering invariant
sites out of the display and allowing samples to be readily excluded. These
features, coupled with the ability to display
epidemiological metadata, make the tool suitable for infectious disease
epidemiology. PEACH Tree further enables much-needed communication between the
fields of genomics and infectious disease epidemiology, as exemplified by the
COVID-19 pandemic.
| [
{
"created": "Sun, 12 Dec 2021 22:51:22 GMT",
"version": "v1"
}
] | 2021-12-15 | [
[
"Douglas",
"Jordan",
""
],
[
"Welch",
"David",
""
]
] | PEACH Tree is an easy-to-use, online tool for displaying multiple sequence alignments and phylogenetic trees side-by-side. PEACH Tree is powerful for rapidly tracing evolutionary and transmission histories by filtering invariant sites out of the display, and allowing samples to readily be filtered out of the display. These features, coupled with the ability to display epidemiological metadata, make the tool suitable for infectious disease epidemiology. PEACH Tree further enables much needed communication between the fields of genomics and infectious disease epidemiology, as exemplified by the COVID-19 pandemic. |
0905.3353 | Vitaly Ganusov | Vitaly V. Ganusov, Jose Borghans, Rob De Boer | Explicit kinetic heterogeneity: mechanistic models for interpretation of
labeling data of heterogeneous cell populations | null | null | 10.1371/journal.pcbi.1000666 | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Estimation of division and death rates of lymphocytes in different conditions
is vital for quantitative understanding of the immune system. Deuterium, in the
form of deuterated glucose or heavy water, can be used to measure rates of
proliferation and death of lymphocytes in vivo. Inferring these rates from
labeling and delabeling curves has been subject to considerable debate, with
different groups suggesting different mathematical models for that purpose. We
show that the three models that are most commonly used are in fact
mathematically identical and differ only in their interpretation of the
estimated parameters. By extending these previous models, we here propose a
more mechanistic approach for the analysis of data from deuterium labeling
experiments. We construct a model of "kinetic heterogeneity" in which the total
cell population consists of many sub-populations with different rates of cell
turnover. In this model, for a given distribution of the rates of turnover, the
predicted fraction of labeled DNA accumulated and lost can be calculated. Our
model reproduces several previously made experimental observations, such as a
negative correlation between the length of the labeling period and the rate at
which labeled DNA is lost after label cessation. We demonstrate the reliability
of the new explicit kinetic heterogeneity model by applying it to artificially
generated datasets, and illustrate its usefulness by fitting experimental data.
In contrast to previous models, the explicit kinetic heterogeneity model 1)
provides a mechanistic way of interpreting labeling data; 2) allows for a
non-exponential loss of labeled cells during delabeling, and 3) can be used to
describe data with variable labeling length.
| [
{
"created": "Wed, 20 May 2009 16:55:01 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Ganusov",
"Vitaly V.",
""
],
[
"Borghans",
"Jose",
""
],
[
"De Boer",
"Rob",
""
]
] | Estimation of division and death rates of lymphocytes in different conditions is vital for quantitative understanding of the immune system. Deuterium, in the form of deuterated glucose or heavy water, can be used to measure rates of proliferation and death of lymphocytes in vivo. Inferring these rates from labeling and delabeling curves has been subject to considerable debate with different groups suggesting different mathematical models for that purpose. We show that the three models that are most commonly used are in fact mathematically identical and differ only in their interpretation of the estimated parameters. By extending these previous models, we here propose a more mechanistic approach for the analysis of data from deuterium labeling experiments. We construct a model of "kinetic heterogeneity" in which the total cell population consists of many sub-populations with different rates of cell turnover. In this model, for a given distribution of the rates of turnover, the predicted fraction of labeled DNA accumulated and lost can be calculated. Our model reproduces several previously made experimental observations, such as a negative correlation between the length of the labeling period and the rate at which labeled DNA is lost after label cessation. We demonstrate the reliability of the new explicit kinetic heterogeneity model by applying it to artificially generated datasets, and illustrate its usefulness by fitting experimental data. In contrast to previous models, the explicit kinetic heterogeneity model 1) provides a mechanistic way of interpreting labeling data; 2) allows for a non-exponential loss of labeled cells during delabeling, and 3) can be used to describe data with variable labeling length. |
2010.10862 | Mark Leake | Jack W Shepherd, Sarah Lecinski, Jasmine Wragg, Sviatlana Shashkova,
Chris MacDonald, Mark C Leake | Molecular crowding in single eukaryotic cells: using cell environment
biosensing and single-molecule optical microscopy to probe dependence on
extracellular ionic strength, local glucose conditions, and sensor copy
number | null | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The physical and chemical environment inside cells is of fundamental
importance to all life but has traditionally been difficult to determine on a
subcellular basis. Here we apply cutting-edge, genomically integrated FRET
biosensing to read out localized molecular crowding in single live yeast cells.
Confocal microscopy allows us to build subcellular crowding heatmaps using
ratiometric FRET, while whole-cell analysis demonstrates crowding is reduced
when yeast is grown in elevated glucose concentrations. Simulations indicate
that the cell membrane is largely inaccessible to these sensors and that
cytosolic crowding is broadly uniform across each cell over a timescale of
seconds. Millisecond single-molecule optical microscopy was used to track
molecules and obtain brightness estimates that enabled calculation of crowding
sensor copy numbers. The quantification of diffusing molecule trajectories
paves the way for correlating subcellular processes and the physicochemical
environment of cells under stress.
| [
{
"created": "Wed, 21 Oct 2020 09:45:31 GMT",
"version": "v1"
}
] | 2020-10-22 | [
[
"Shepherd",
"Jack W",
""
],
[
"Lecinski",
"Sarah",
""
],
[
"Wragg",
"Jasmine",
""
],
[
"Shashkova",
"Sviatlana",
""
],
[
"MacDonald",
"Chris",
""
],
[
"Leake",
"Mark C",
""
]
] | The physical and chemical environment inside cells is of fundamental importance to all life but has traditionally been difficult to determine on a subcellular basis. Here we combine cutting-edge genomically integrated FRET biosensing to readout localized molecular crowding in single live yeast cells. Confocal microscopy allows us to build subcellular crowding heatmaps using ratiometric FRET, while whole-cell analysis demonstrates crowding is reduced when yeast is grown in elevated glucose concentrations. Simulations indicate that the cell membrane is largely inaccessible to these sensors and that cytosolic crowding is broadly uniform across each cell over a timescale of seconds. Millisecond single-molecule optical microscopy was used to track molecules and obtain brightness estimates that enabled calculation of crowding sensor copy numbers. The quantification of diffusing molecule trajectories paves the way for correlating subcellular processes and the physicochemical environment of cells under stress. |
1912.06786 | Swathi Tej | Swathi Tej and Sutapa Mukherji | Small RNA driven feed-forward loop: Fine-tuning of protein synthesis
through sRNA mediated cross-talk | 14 pages,9 figures | null | null | null | q-bio.MN cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Often in bacterial regulatory networks, small noncoding RNAs (sRNA) interact
with several mRNA species. The competition among mRNAs for binding to the same
pool of sRNA might lead to crosstalk between the mRNAs. This is similar to the
competing endogenous RNA (ceRNA) effect wherein the competition to bind to the
same pool of miRNA in Eukaryotes leads to miRNA mediated crosstalk resulting in
subtle and complex gene regulation with stabilised gene expression. We study an
sRNA-driven feedforward loop (sFFL) where the top-tier regulator, an sRNA
(RprA), translationally activates the target protein (RicI) directly and also,
indirectly, via up-regulation of its transcriptional activator (RpoS/sigma^s).
We show that the sRNA-mediated crosstalk between the two mRNA species leads to
maximum target protein synthesis for low synthesis rates of RpoS-mRNA. This
indicates the possibility of an optimal target protein synthesis with efficient
utilisation of RpoS-mRNA which is typically associated with various other
stress response activities inside the cell. Since gene expression is inherently
stochastic due to the probabilistic nature of various molecular interactions
associated with it, we next quantify the fluctuations in the target protein
level using a generating-function-based approach and stochastic simulations.
The coefficient of variation, which provides a measure of fluctuations in the
concentration, shows a minimum under conditions that also correspond to optimal
target protein synthesis. This prompts us to conclude that, in sFFL, the
crosstalk leads to optimal target protein synthesis with minimal noise and
efficient utilisation of RpoS-mRNA.
| [
{
"created": "Sat, 14 Dec 2019 05:23:45 GMT",
"version": "v1"
}
] | 2019-12-17 | [
[
"Tej",
"Swathi",
""
],
[
"Mukherji",
"Sutapa",
""
]
] | Often in bacterial regulatory networks, small noncoding RNAs (sRNA) interact with several mRNA species. The competition among mRNAs for binding to the same pool of sRNA might lead to crosstalk between the mRNAs. This is similar to the competing endogenous RNA (ceRNA) effect wherein the competition to bind to the same pool of miRNA in Eukaryotes leads to miRNA mediated crosstalk resulting in subtle and complex gene regulation with stabilised gene expression. We study an sRNA-driven feedforward loop (sFFL) where the top-tier regulator, an sRNA (RprA), translationally activates the target protein (RicI) directly and also, indirectly, via up-regulation of its transcriptional activator (RpoS/sigma^s). We show that the sRNA-mediated crosstalk between the two mRNA species leads to maximum target protein synthesis for low synthesis rates of RpoS-mRNA. This indicates the possibility of an optimal target protein synthesis with efficient utilisation of RpoS-mRNA which is typically associated with various other stress response activities inside the cell. Since gene expression is inherently stochastic due to the probabilistic nature of various molecular interactions associated with it, we next quantify the fluctuations in the target protein level using generating function-based approach and stochastic simulations. The coefficient of variation that provides a measure of fluctuations in the concentration shows a minimum under conditions that also correspond to optimal target protein synthesis. This prompts us to conclude that, in sFFL, the crosstalk leads to optimal target protein synthesis with minimal noise and efficient utilisation of RpoS-mRNA. |
1211.1143 | Mariusz Pietruszka PhD | Mariusz Pietruszka | Frustration-induced inherent instability and growth oscillations in
pollen tubes | 35 pages, 7 figures | null | 10.1371/journal.pone.0075803 | null | q-bio.CB cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a seed plant a pollen tube is a vessel that transports male gamete cells
to an ovule to achieve fertilization. It consists of one elongated cell, which
exhibits growth oscillations, until it bursts completing its function. Up till
now, the mechanism behind the periodic character of the growth has not been
fully understood. An attempt to understand these oscillations led us to an
attractive scenario: We show that the mechanism of pressure-induced symmetry
frustration occurring in the wall at the perimeter of cylindrical and
approximately hemispherical parts of a growing pollen cell, together with the
addition of cell wall material, suffices to release and sustain mechanical
self-oscillations and cell extension in pollen tubes. At the transition zone
where symmetry frustration occurs and one cannot distinguish either of the
involved symmetries, a kind of 'entangled state' appears where either single or
both symmetry(ies) can be realized by the system. We anticipate that
testable predictions made by the model may deliver, after calibration, a new
tool to estimate turgor pressure from the oscillation period of the growing
cell.
Since the mechanical principles apply to all turgor regulated walled cells
including those of plant, fungal and bacterial origin, the relevance of this
work is not limited to the case of the pollen tube.
| [
{
"created": "Tue, 6 Nov 2012 08:44:02 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Dec 2012 10:42:30 GMT",
"version": "v2"
}
] | 2014-03-05 | [
[
"Pietruszka",
"Mariusz",
""
]
] | In a seed plant a pollen tube is a vessel that transports male gamete cells to an ovule to achieve fertilization. It consists of one elongated cell, which exhibits growth oscillations until it bursts, completing its function. Until now, the mechanism behind the periodic character of the growth has not been fully understood. An attempt to understand these oscillations led us to an attractive scenario: We show that the mechanism of pressure-induced symmetry frustration occurring in the wall at the perimeter of cylindrical and approximately hemispherical parts of a growing pollen cell, together with the addition of cell wall material, suffices to release and sustain mechanical self-oscillations and cell extension in pollen tubes. At the transition zone where symmetry frustration occurs and one cannot distinguish either of the involved symmetries, a kind of 'entangled state' appears where either single or both symmetry(ies) can be realized by the system. We anticipate that testable predictions made by the model may deliver, after calibration, a new tool to estimate turgor pressure from the oscillation period of the growing cell. Since the mechanical principles apply to all turgor regulated walled cells including those of plant, fungal and bacterial origin, the relevance of this work is not limited to the case of the pollen tube. |
2404.05762 | Fatma Zahra Abdeldjouad | Fatma Zahra Abdeldjouad, Menaouer Brahami, Mohammed Sabri | Evaluating the Effectiveness of Artificial Intelligence in Predicting
Adverse Drug Reactions among Cancer Patients: A Systematic Review and
Meta-Analysis | Paper has been accepted at the IEEE Challenges and Innovations on TIC
(IEEE I2CIT) International Conference | null | null | null | q-bio.QM cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adverse drug reactions considerably impact patient outcomes and healthcare
costs in cancer therapy. Using artificial intelligence to predict adverse drug
reactions in real time could revolutionize oncology treatment. This study aims
to assess the performance of artificial intelligence models in predicting
adverse drug reactions in patients with cancer. This is the first systematic
review and meta-analysis. Scopus, PubMed, IEEE Xplore, and ACM Digital Library
databases were searched for studies in English, French, and Arabic from January
1, 2018, to August 20, 2023. The inclusion criteria were: (1) peer-reviewed
research articles; (2) use of artificial intelligence algorithms (machine
learning, deep learning, knowledge graphs); (3) study aimed to predict adverse
drug reactions (cardiotoxicity, neutropenia, nephrotoxicity, hepatotoxicity);
(4) study was on cancer patients. The data were extracted and evaluated by
three reviewers for study quality. Of the 332 screened articles, 17 studies
(5%) involving 93,248 oncology patients from 17 countries were included in the
systematic review; ten of these studies were included in the meta-analysis. A
random-effects model was created to pool the sensitivity, specificity, and AUC
of the included studies. The pooled results were 0.82 (95% CI:0.69, 0.9), 0.84
(95% CI:0.75, 0.9), and 0.83 (95% CI:0.77, 0.87) for sensitivity, specificity,
and AUC, respectively, of ADR predictive models. Biomarkers proved their
effectiveness in predicting ADRs, yet they were adopted by only half of the
reviewed studies. The use of AI in cancer treatment shows great potential, with
models demonstrating high specificity and sensitivity in predicting ADRs.
However, standardized research and multicenter studies are needed to improve
the quality of evidence. AI can enhance cancer patient care by bridging the gap
between data-driven insights and clinical expertise.
| [
{
"created": "Sat, 6 Apr 2024 11:20:28 GMT",
"version": "v1"
}
] | 2024-04-10 | [
[
"Abdeldjouad",
"Fatma Zahra",
""
],
[
"Brahami",
"Menaouer",
""
],
[
"Sabri",
"Mohammed",
""
]
] | Adverse drug reactions considerably impact patient outcomes and healthcare costs in cancer therapy. Using artificial intelligence to predict adverse drug reactions in real time could revolutionize oncology treatment. This study aims to assess the performance of artificial intelligence models in predicting adverse drug reactions in patients with cancer. This is the first systematic review and meta-analysis. Scopus, PubMed, IEEE Xplore, and ACM Digital Library databases were searched for studies in English, French, and Arabic from January 1, 2018, to August 20, 2023. The inclusion criteria were: (1) peer-reviewed research articles; (2) use of artificial intelligence algorithms (machine learning, deep learning, knowledge graphs); (3) study aimed to predict adverse drug reactions (cardiotoxicity, neutropenia, nephrotoxicity, hepatotoxicity); (4) study was on cancer patients. The data were extracted and evaluated by three reviewers for study quality. Of the 332 screened articles, 17 studies (5%) involving 93,248 oncology patients from 17 countries were included in the systematic review, of which ten studies synthesized the meta-analysis. A random-effects model was created to pool the sensitivity, specificity, and AUC of the included studies. The pooled results were 0.82 (95% CI:0.69, 0.9), 0.84 (95% CI:0.75, 0.9), and 0.83 (95% CI:0.77, 0.87) for sensitivity, specificity, and AUC, respectively, of ADR predictive models. Biomarkers proved their effectiveness in predicting ADRs, yet they were adopted by only half of the reviewed studies. The use of AI in cancer treatment shows great potential, with models demonstrating high specificity and sensitivity in predicting ADRs. However, standardized research and multicenter studies are needed to improve the quality of evidence. AI can enhance cancer patient care by bridging the gap between data-driven insights and clinical expertise. |
2408.05789 | Yujiang Wang | Jonathan Horsley, Yujiang Wang, Callum Simpson, Vyte Janiukstyte,
Karoline Leiberg, Beth Little, Jane de Tisi, John Duncan, Peter N. Taylor | Status epilepticus and thinning of the entorhinal cortex | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Status epilepticus (SE) carries risks of morbidity and mortality.
Experimental studies have implicated the entorhinal cortex in prolonged
seizures; however, studies in large human cohorts are limited. We hypothesised
that individuals with temporal lobe epilepsy (TLE) and a history of SE would
have more severe entorhinal atrophy compared to others with TLE and no history
of SE.
357 individuals with drug resistant temporal lobe epilepsy (TLE) and 100
healthy controls were scanned on a 3T MRI. For all subjects the cortex was
segmented, parcellated, and the thickness calculated from the T1-weighted
anatomical scan. Subcortical volumes were derived similarly. Cohen's d and
Wilcoxon rank-sum tests were used to capture effect sizes and significance,
respectively.
Individuals with TLE and SE had reduced entorhinal thickness compared to
those with TLE and no history of SE. The entorhinal cortex was more atrophic
ipsilaterally (d=0.51, p<0.001) than contralaterally (d=0.37, p=0.01).
Reductions in ipsilateral entorhinal thickness were present in both left TLE
(n=22:176, d=0.78, p<0.001), and right TLE (n=19:140, d=0.31, p=0.04), albeit
with a smaller effect size in right TLE. Several other regions exhibited
atrophy in individuals with TLE, but these did not relate to a history of SE.
These findings suggest potential involvement or susceptibility of the
entorhinal cortex in prolonged seizures.
| [
{
"created": "Sun, 11 Aug 2024 14:41:56 GMT",
"version": "v1"
}
] | 2024-08-13 | [
[
"Horsley",
"Jonathan",
""
],
[
"Wang",
"Yujiang",
""
],
[
"Simpson",
"Callum",
""
],
[
"Janiukstyte",
"Vyte",
""
],
[
"Leiberg",
"Karoline",
""
],
[
"Little",
"Beth",
""
],
[
"de Tisi",
"Jane",
""
],
[
"Duncan",
"John",
""
],
[
"Taylor",
"Peter N.",
""
]
] | Status epilepticus (SE) carries risks of morbidity and mortality. Experimental studies have implicated the entorhinal cortex in prolonged seizures; however, studies in large human cohorts are limited. We hypothesised that individuals with temporal lobe epilepsy (TLE) and a history of SE would have more severe entorhinal atrophy compared to others with TLE and no history of SE. 357 individuals with drug resistant temporal lobe epilepsy (TLE) and 100 healthy controls were scanned on a 3T MRI. For all subjects the cortex was segmented, parcellated, and the thickness calculated from the T1-weighted anatomical scan. Subcortical volumes were derived similarly. Cohen's d and Wilcoxon rank-sum tests respectively were used to capture effect sizes and significance. Individuals with TLE and SE had reduced entorhinal thickness compared to those with TLE and no history of SE. The entorhinal cortex was more atrophic ipsilaterally (d=0.51, p<0.001) than contralaterally (d=0.37, p=0.01). Reductions in ipsilateral entorhinal thickness were present in both left TLE (n=22:176, d=0.78, p<0.001), and right TLE (n=19:140, d=0.31, p=0.04), albeit with a smaller effect size in right TLE. Several other regions exhibited atrophy in individuals with TLE, but these did not relate to a history of SE. These findings suggest potential involvement or susceptibility of the entorhinal cortex in prolonged seizures. |
2006.02127 | Ajitesh Srivastava | Ajitesh Srivastava and Viktor K. Prasanna | Data-driven Identification of Number of Unreported Cases for COVID-19:
Bounds and Limitations | Fixed a typo | null | null | null | q-bio.PE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate forecasts for COVID-19 are necessary for better preparedness and
resource management. Specifically, deciding the response over months or several
months requires accurate long-term forecasts which is particularly challenging
as the model errors accumulate with time. A critical factor that can hinder
accurate long-term forecasts, is the number of unreported/asymptomatic cases.
While there have been early serology tests to estimate this number, more tests
need to be conducted for more reliable results. To identify the number of
unreported/asymptomatic cases, we take an epidemiology data-driven approach. We
show that we can identify lower bounds on this ratio or upper bound on actual
cases as a factor of reported cases. To do so, we propose an extension of our
prior heterogeneous infection rate model, incorporating unreported/asymptomatic
cases. We prove that the number of unreported cases can be reliably estimated
only from a certain time period of the epidemic data. In doing so, we construct
an algorithm called Fixed Infection Rate method, which identifies a reliable
bound on the learned ratio. We also propose two heuristics to learn this ratio
and show their effectiveness on simulated data. We use our approaches to
identify the upper bounds on the ratio of actual to reported cases for New York
City and several US states. Our results demonstrate with high confidence that
the actual number of cases cannot be more than 35 times in New York, 40 times
in Illinois, 38 times in Massachusetts and 29 times in New Jersey, than the
reported cases.
| [
{
"created": "Wed, 3 Jun 2020 09:39:50 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jun 2020 04:23:57 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Jun 2020 21:38:38 GMT",
"version": "v3"
},
{
"created": "Mon, 15 Jun 2020 00:04:07 GMT",
"version": "v4"
},
{
"created": "Thu, 9 Jul 2020 04:17:27 GMT",
"version": "v5"
}
] | 2020-07-10 | [
[
"Srivastava",
"Ajitesh",
""
],
[
"Prasanna",
"Viktor K.",
""
]
] | Accurate forecasts for COVID-19 are necessary for better preparedness and resource management. Specifically, deciding the response over months or several months requires accurate long-term forecasts which is particularly challenging as the model errors accumulate with time. A critical factor that can hinder accurate long-term forecasts, is the number of unreported/asymptomatic cases. While there have been early serology tests to estimate this number, more tests need to be conducted for more reliable results. To identify the number of unreported/asymptomatic cases, we take an epidemiology data-driven approach. We show that we can identify lower bounds on this ratio or upper bound on actual cases as a factor of reported cases. To do so, we propose an extension of our prior heterogeneous infection rate model, incorporating unreported/asymptomatic cases. We prove that the number of unreported cases can be reliably estimated only from a certain time period of the epidemic data. In doing so, we construct an algorithm called Fixed Infection Rate method, which identifies a reliable bound on the learned ratio. We also propose two heuristics to learn this ratio and show their effectiveness on simulated data. We use our approaches to identify the upper bounds on the ratio of actual to reported cases for New York City and several US states. Our results demonstrate with high confidence that the actual number of cases cannot be more than 35 times in New York, 40 times in Illinois, 38 times in Massachusetts and 29 times in New Jersey, than the reported cases. |
1606.07553 | Sakuntala Chatterjee | Raj Kumar Sadhu and Sakuntala Chatterjee | Actin filaments growing against a barrier with fluctuating shape | null | Physical Review E, vol. 93, 062414 (2016) | 10.1103/PhysRevE.93.062414 | null | q-bio.SC cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study force generation by a set of parallel actin filaments growing
against a non-rigid obstacle, in presence of an external load. The filaments
polymerize by either moving the whole obstacle, with a large energy cost, or by
causing local distortion in its shape which costs much less energy. The
non-rigid obstacle also has local thermal fluctuations due to which its shape
can change with time and we describe this using fluctuations in the height
profile of a one dimensional interface with Kardar-Parisi-Zhang dynamics. We
find the shape fluctuations of the barrier strongly affect the force
generation mechanism. The qualitative nature of the force-velocity curve is
crucially determined by the relative time-scale of filament and barrier
dynamics. The height profile of the barrier also shows interesting variation
with the external load. Our analytical calculation within mean-field theory
shows reasonable agreement with our simulation results.
| [
{
"created": "Fri, 24 Jun 2016 03:33:09 GMT",
"version": "v1"
}
] | 2016-06-27 | [
[
"Sadhu",
"Raj Kumar",
""
],
[
"Chatterjee",
"Sakuntala",
""
]
] | We study force generation by a set of parallel actin filaments growing against a non-rigid obstacle, in presence of an external load. The filaments polymerize by either moving the whole obstacle, with a large energy cost, or by causing local distortion in its shape which costs much less energy. The non-rigid obstacle also has local thermal fluctuations due to which its shape can change with time and we describe this using fluctuations in the height profile of a one dimensional interface with Kardar-Parisi-Zhang dynamics. We find the shape fluctuations of the barrier strongly affect the force generation mechanism. The qualitative nature of the force-velocity curve is crucially determined by the relative time-scale of filament and barrier dynamics. The height profile of the barrier also shows interesting variation with the external load. Our analytical calculation within mean-field theory shows reasonable agreement with our simulation results. |
1412.7384 | Peng Yang | Peng Yang, Xiaoquan Su, Le Ou-Yang, Hon-Nian Chua, Xiao-Li Li, Kang
Ning | Microbial community pattern detection in human body habitats via
ensemble clustering framework | BMC Systems Biology 2014 | BMC Systems Biology 2014, 8(Suppl 4):S7 | 10.1186/1752-0509-8-S4-S7 | null | q-bio.QM cs.CE cs.LG q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human habitat is a host where microbial species evolve, function, and
continue to evolve. Elucidating how microbial communities respond to human
habitats is a fundamental and critical task, as establishing baselines of human
microbiome is essential in understanding its role in human disease and health.
However, current studies usually overlook a complex and interconnected
landscape of human microbiome and limit the ability in particular body habitats
with learning models of specific criterion. Therefore, these methods could not
capture the real-world underlying microbial patterns effectively. To obtain a
comprehensive view, we propose a novel ensemble clustering framework to mine
the structure of microbial community pattern on large-scale metagenomic data.
Particularly, we first build a microbial similarity network via integrating
1920 metagenomic samples from three body habitats of healthy adults. Then a
novel symmetric Nonnegative Matrix Factorization (NMF) based ensemble model is
proposed and applied onto the network to detect clustering pattern. Extensive
experiments are conducted to evaluate the effectiveness of our model on
deriving microbial community with respect to body habitat and host gender. From
clustering results, we observed that body habitat exhibits a strong bound but
non-unique microbial structural patterns. Meanwhile, human microbiome reveals
different degree of structural variations over body habitat and host gender. In
summary, our ensemble clustering framework could efficiently explore integrated
clustering results to accurately identify microbial communities, and provide a
comprehensive view for a set of microbial communities. Such trends depict an
integrated biography of microbial communities, which offer a new insight
towards uncovering pathogenic model of human microbiome.
| [
{
"created": "Sun, 21 Dec 2014 12:52:45 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Dec 2014 02:03:08 GMT",
"version": "v2"
},
{
"created": "Sun, 4 Jan 2015 05:37:42 GMT",
"version": "v3"
}
] | 2015-01-06 | [
[
"Yang",
"Peng",
""
],
[
"Su",
"Xiaoquan",
""
],
[
"Ou-Yang",
"Le",
""
],
[
"Chua",
"Hon-Nian",
""
],
[
"Li",
"Xiao-Li",
""
],
[
"Ning",
"Kang",
""
]
] | The human habitat is a host where microbial species evolve, function, and continue to evolve. Elucidating how microbial communities respond to human habitats is a fundamental and critical task, as establishing baselines of human microbiome is essential in understanding its role in human disease and health. However, current studies usually overlook a complex and interconnected landscape of human microbiome and limit the ability in particular body habitats with learning models of specific criterion. Therefore, these methods could not capture the real-world underlying microbial patterns effectively. To obtain a comprehensive view, we propose a novel ensemble clustering framework to mine the structure of microbial community pattern on large-scale metagenomic data. Particularly, we first build a microbial similarity network via integrating 1920 metagenomic samples from three body habitats of healthy adults. Then a novel symmetric Nonnegative Matrix Factorization (NMF) based ensemble model is proposed and applied onto the network to detect clustering pattern. Extensive experiments are conducted to evaluate the effectiveness of our model on deriving microbial community with respect to body habitat and host gender. From clustering results, we observed that body habitat exhibits a strong bound but non-unique microbial structural patterns. Meanwhile, human microbiome reveals different degree of structural variations over body habitat and host gender. In summary, our ensemble clustering framework could efficiently explore integrated clustering results to accurately identify microbial communities, and provide a comprehensive view for a set of microbial communities. Such trends depict an integrated biography of microbial communities, which offer a new insight towards uncovering pathogenic model of human microbiome. |
2005.08740 | Alkesh Yadav | Alkesh Yadav, Quentin Vagne, Pierre Sens, Garud Iyengar and Madan Rao | Glycan processing in the Golgi -- optimal information coding and
constraints on cisternal number and enzyme specificity | 30 pages | eLife 2022 | 10.7554/eLife.76757 | 11:e76757 | q-bio.SC cond-mat.stat-mech physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Many proteins that undergo sequential enzymatic modification in the Golgi
cisternae are displayed at the plasma membrane as cell identity markers. The
modified proteins, called glycans, represent a molecular code. The fidelity of
this glycan code is measured by how accurately the glycan synthesis machinery
realises the desired target glycan distribution for a particular cell type and
niche. In this paper, we quantitatively analyse the tradeoffs between the
number of cisternae and the number and specificity of enzymes, in order to
synthesize a prescribed target glycan distribution of a certain complexity. We
find that to synthesize complex distributions, such as those observed in real
cells, one needs to have multiple cisternae and precise enzyme partitioning in
the Golgi. Additionally, for fixed number of enzymes and cisternae, there is an
optimal level of specificity of enzymes that achieves the target distribution
with high fidelity. Our results show how the complexity of the target glycan
distribution places functional constraints on the Golgi cisternal number and
enzyme specificity.
| [
{
"created": "Mon, 18 May 2020 14:07:41 GMT",
"version": "v1"
}
] | 2022-03-30 | [
[
"Yadav",
"Alkesh",
""
],
[
"Vagne",
"Quentin",
""
],
[
"Sens",
"Pierre",
""
],
[
"Iyengar",
"Garud",
""
],
[
"Rao",
"Madan",
""
]
] | Many proteins that undergo sequential enzymatic modification in the Golgi cisternae are displayed at the plasma membrane as cell identity markers. The modified proteins, called glycans, represent a molecular code. The fidelity of this glycan code is measured by how accurately the glycan synthesis machinery realises the desired target glycan distribution for a particular cell type and niche. In this paper, we quantitatively analyse the tradeoffs between the number of cisternae and the number and specificity of enzymes, in order to synthesize a prescribed target glycan distribution of a certain complexity. We find that to synthesize complex distributions, such as those observed in real cells, one needs to have multiple cisternae and precise enzyme partitioning in the Golgi. Additionally, for fixed number of enzymes and cisternae, there is an optimal level of specificity of enzymes that achieves the target distribution with high fidelity. Our results show how the complexity of the target glycan distribution places functional constraints on the Golgi cisternal number and enzyme specificity. |
2103.11637 | Coralie Picoche | Coralie Picoche, Fr\'ed\'eric Barraquand | Seed banks can help to maintain the diversity of interacting
phytoplankton species | 46 pages ; 13 figures | Journal of Theoretical Biology (2022) | 10.1016/j.jtbi.2022.111020 | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Seed formation is part of the reproductive cycle, leading to the accumulation
of resistance stages that can withstand harsh environmental conditions for long
periods of time. At the community level, multiple species with such
long-lasting life stages can be more likely to coexist. While the implications
of this process for biodiversity have been studied in terrestrial plants, seed
banks are usually neglected in phytoplankton multispecies dynamic models, in
spite of widespread empirical evidence for such seed banks. In this study, we
build a metacommunity model of interacting phytoplankton species, including a
resting stage supplying the seed bank. The model is parameterized with
empirically-driven growth rate functions and field-based interaction estimates,
which include both facilitative and competitive interactions. Exchanges between
compartments (coastal pelagic cells, coastal resting cells on the seabed, and
open ocean pelagic cells) are controlled by hydrodynamical parameters to which
the sensitivity of the model is assessed. We consider two models, i.e., with
and without a saturating effect of the interactions on the growth rates. Our
results are consistent between models, and show that a seed bank allows to
maintain all species in the community over 30 years. Indeed, a fraction of the
species are vulnerable to extinction at specific times within the year, but
this process is buffered by their survival in their resting stage. We thus
highlight the potential role of the seed bank in the recurrent re-invasion of
the coastal community, and of coastal environments in re-seeding oceanic
regions. Moreover, the seed bank enables populations to tolerate stronger
interactions within the community as well as more severe changes to the
environment, such as those predicted in a climate change context. Our study
therefore shows how resting stages may help phytoplanktonic diversity
maintenance.
| [
{
"created": "Mon, 22 Mar 2021 07:56:18 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Feb 2022 14:50:47 GMT",
"version": "v2"
}
] | 2022-02-03 | [
[
"Picoche",
"Coralie",
""
],
[
"Barraquand",
"Frédéric",
""
]
] | Seed formation is part of the reproductive cycle, leading to the accumulation of resistance stages that can withstand harsh environmental conditions for long periods of time. At the community level, multiple species with such long-lasting life stages can be more likely to coexist. While the implications of this process for biodiversity have been studied in terrestrial plants, seed banks are usually neglected in phytoplankton multispecies dynamic models, in spite of widespread empirical evidence for such seed banks. In this study, we build a metacommunity model of interacting phytoplankton species, including a resting stage supplying the seed bank. The model is parameterized with empirically-driven growth rate functions and field-based interaction estimates, which include both facilitative and competitive interactions. Exchanges between compartments (coastal pelagic cells, coastal resting cells on the seabed, and open ocean pelagic cells) are controlled by hydrodynamical parameters to which the sensitivity of the model is assessed. We consider two models, i.e., with and without a saturating effect of the interactions on the growth rates. Our results are consistent between models, and show that a seed bank allows to maintain all species in the community over 30 years. Indeed, a fraction of the species are vulnerable to extinction at specific times within the year, but this process is buffered by their survival in their resting stage. We thus highlight the potential role of the seed bank in the recurrent re-invasion of the coastal community, and of coastal environments in re-seeding oceanic regions. Moreover, the seed bank enables populations to tolerate stronger interactions within the community as well as more severe changes to the environment, such as those predicted in a climate change context. Our study therefore shows how resting stages may help phytoplanktonic diversity maintenance. |
2007.10518 | Mike Steel Prof. | Stuart Kauffman and Mike Steel | The expected number of viable autocatalytic sets in chemical reaction
systems | 16 pages, 1 figure. This version provides more detailed mathematical
proofs, a revised figure, and the correction of some minor typos | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The emergence of self-sustaining autocatalytic networks in chemical reaction
systems has been studied as a possible mechanism for modelling how living
systems first arose. It has been known for several decades that such networks
will form within systems of polymers (under cleavage and ligation reactions)
under a simple process of random catalysis, and this process has since been
mathematically analysed. In this paper, we provide an exact expression for the
expected number of self-sustaining autocatalytic networks that will form in a
general chemical reaction system, and the expected number of these networks
that will also be uninhibited (by some molecule produced by the system). Using
these equations, we are able to describe the patterns of catalysis and
inhibition that maximise or minimise the expected number of such networks. We
apply our results to derive a general theorem concerning the trade-off between
catalysis and inhibition, and to provide some insight into the extent to which
the expected number of self-sustaining autocatalytic networks coincides with
the probability that at least one such system is present.
| [
{
"created": "Mon, 20 Jul 2020 22:35:32 GMT",
"version": "v1"
},
{
"created": "Sun, 22 Nov 2020 03:49:11 GMT",
"version": "v2"
}
] | 2020-11-24 | [
[
"Kauffman",
"Stuart",
""
],
[
"Steel",
"Mike",
""
]
] | The emergence of self-sustaining autocatalytic networks in chemical reaction systems has been studied as a possible mechanism for modelling how living systems first arose. It has been known for several decades that such networks will form within systems of polymers (under cleavage and ligation reactions) under a simple process of random catalysis, and this process has since been mathematically analysed. In this paper, we provide an exact expression for the expected number of self-sustaining autocatalytic networks that will form in a general chemical reaction system, and the expected number of these networks that will also be uninhibited (by some molecule produced by the system). Using these equations, we are able to describe the patterns of catalysis and inhibition that maximise or minimise the expected number of such networks. We apply our results to derive a general theorem concerning the trade-off between catalysis and inhibition, and to provide some insight into the extent to which the expected number of self-sustaining autocatalytic networks coincides with the probability that at least one such system is present. |
2004.00058 | Larissa Albantakis | Larissa Albantakis, Francesco Massari, Maggie Beheler-Amass and Giulio
Tononi | A macro agent and its actions | 18 pages, 5 figures; to appear as a chapter in "Top-Down Causation
and Emergence" published by Springer as part of the Synthese Library Book
Series; F.M. and M.B. contributed equally to this work | null | null | null | q-bio.NC cs.AI cs.NE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In science, macro level descriptions of the causal interactions within
complex, dynamical systems are typically deemed convenient, but ultimately
reducible to a complete causal account of the underlying micro constituents.
Yet, such a reductionist perspective is hard to square with several issues
related to autonomy and agency: (1) agents require (causal) borders that
separate them from the environment, (2) at least in a biological context,
agents are associated with macroscopic systems, and (3) agents are supposed to
act upon their environment. Integrated information theory (IIT) (Oizumi et al.,
2014) offers a quantitative account of causation based on a set of causal
principles, including notions such as causal specificity, composition, and
irreducibility, that challenges the reductionist perspective in multiple ways.
First, the IIT formalism provides a complete account of a system's causal
structure, including irreducible higher-order mechanisms constituted of
multiple system elements. Second, a system's amount of integrated information
($\Phi$) measures the causal constraints a system exerts onto itself and can
peak at a macro level of description (Hoel et al., 2016; Marshall et al.,
2018). Finally, the causal principles of IIT can also be employed to identify
and quantify the actual causes of events ("what caused what"), such as an
agent's actions (Albantakis et al., 2019). Here, we demonstrate this framework
by example of a simulated agent, equipped with a small neural network, that
forms a maximum of $\Phi$ at a macro scale.
| [
{
"created": "Tue, 31 Mar 2020 18:51:18 GMT",
"version": "v1"
}
] | 2020-04-02 | [
[
"Albantakis",
"Larissa",
""
],
[
"Massari",
"Francesco",
""
],
[
"Beheler-Amass",
"Maggie",
""
],
[
"Tononi",
"Giulio",
""
]
] | In science, macro level descriptions of the causal interactions within complex, dynamical systems are typically deemed convenient, but ultimately reducible to a complete causal account of the underlying micro constituents. Yet, such a reductionist perspective is hard to square with several issues related to autonomy and agency: (1) agents require (causal) borders that separate them from the environment, (2) at least in a biological context, agents are associated with macroscopic systems, and (3) agents are supposed to act upon their environment. Integrated information theory (IIT) (Oizumi et al., 2014) offers a quantitative account of causation based on a set of causal principles, including notions such as causal specificity, composition, and irreducibility, that challenges the reductionist perspective in multiple ways. First, the IIT formalism provides a complete account of a system's causal structure, including irreducible higher-order mechanisms constituted of multiple system elements. Second, a system's amount of integrated information ($\Phi$) measures the causal constraints a system exerts onto itself and can peak at a macro level of description (Hoel et al., 2016; Marshall et al., 2018). Finally, the causal principles of IIT can also be employed to identify and quantify the actual causes of events ("what caused what"), such as an agent's actions (Albantakis et al., 2019). Here, we demonstrate this framework by example of a simulated agent, equipped with a small neural network, that forms a maximum of $\Phi$ at a macro scale. |
1402.5996 | Po T. Wang | Po T. Wang, Christine E. King, Andrew Schombs, Jack J. Lin, Mona
Sazgar, Frank P. K. Hsu, Susan J. Shaw, David E. Millett, Charles Y. Liu,
Luis A. Chui, Zoran Nenadic, An H. Do | Electrocorticogram encoding of upper extremity movement trajectories | Preliminary report. We have not completed full analyses | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electrocorticogram (ECoG)-based brain computer interfaces (BCI) can
potentially control upper extremity prostheses to restore independent function
to paralyzed individuals. However, current research is mostly restricted to the
offline decoding of finger or 2D arm movement trajectories, and these results
are modest. This study seeks to improve the fundamental understanding of the
ECoG signal features underlying upper extremity movements to guide better BCI
design. Subjects undergoing ECoG electrode implantation performed a series of
elementary upper extremity movements in an intermittent flexion and extension
manner. It was found that movement velocity, $\dot\theta$, had a high positive
(negative) correlation with the instantaneous power of the ECoG high-$\gamma$
band (80-160 Hz) during flexion (extension). Also, the correlation was low
during idling epochs. Visual inspection of the ECoG high-$\gamma$ band revealed
power bursts during flexion/extension events that have a waveform that strongly
resembles the corresponding flexion/extension event as seen on $\dot\theta$.
These high-$\gamma$ bursts were present in all elementary movements, and were
spatially distributed in a somatotopic fashion. Thus, it can be concluded that
the high-$\gamma$ power of ECoG strongly encodes for movement trajectories, and
can be used as an input feature in future BCIs.
| [
{
"created": "Wed, 19 Feb 2014 20:43:35 GMT",
"version": "v1"
}
] | 2014-02-26 | [
[
"Wang",
"Po T.",
""
],
[
"King",
"Christine E.",
""
],
[
"Schombs",
"Andrew",
""
],
[
"Lin",
"Jack J.",
""
],
[
"Sazgar",
"Mona",
""
],
[
"Hsu",
"Frank P. K.",
""
],
[
"Shaw",
"Susan J.",
""
],
[
"Millett",
"David E.",
""
],
[
"Liu",
"Charles Y.",
""
],
[
"Chui",
"Luis A.",
""
],
[
"Nenadic",
"Zoran",
""
],
[
"Do",
"An H.",
""
]
] | Electrocorticogram (ECoG)-based brain computer interfaces (BCI) can potentially control upper extremity prostheses to restore independent function to paralyzed individuals. However, current research is mostly restricted to the offline decoding of finger or 2D arm movement trajectories, and these results are modest. This study seeks to improve the fundamental understanding of the ECoG signal features underlying upper extremity movements to guide better BCI design. Subjects undergoing ECoG electrode implantation performed a series of elementary upper extremity movements in an intermittent flexion and extension manner. It was found that movement velocity, $\dot\theta$, had a high positive (negative) correlation with the instantaneous power of the ECoG high-$\gamma$ band (80-160 Hz) during flexion (extension). Also, the correlation was low during idling epochs. Visual inspection of the ECoG high-$\gamma$ band revealed power bursts during flexion/extension events that have a waveform that strongly resembles the corresponding flexion/extension event as seen on $\dot\theta$. These high-$\gamma$ bursts were present in all elementary movements, and were spatially distributed in a somatotopic fashion. Thus, it can be concluded that the high-$\gamma$ power of ECoG strongly encodes for movement trajectories, and can be used as an input feature in future BCIs. |
2106.15241 | Pawan Kumar | Pawan Kumar, Christina Surulescu, Anna Zhigun | Multiphase modelling of glioma pseudopalisading under acidosis | null | null | null | null | q-bio.TO math.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a multiphase modeling approach to describe glioma pseudopalisade
patterning under the influence of acidosis. The phases considered at the model
onset are glioma, normal tissue, necrotic matter, and interstitial fluid in a
void-free volume with acidity represented by proton concentration. We start
from mass and momentum balance to characterize the respective volume fractions
and deduce reaction-cross diffusion equations for the space-time evolution of
glioma, normal tissue, and necrosis. These are supplemented with a
reaction-diffusion equation for the acidity dynamics and lead to formation of
patterns which are typical for high grade gliomas. Unlike previous works, our
deduction also works in higher dimensions and involves less restrictions. We
also investigate the existence of weak solutions to the obtained system of
equations and perform numerical simulations to illustrate the solution behavior
and the pattern occurrence.
| [
{
"created": "Tue, 29 Jun 2021 10:52:28 GMT",
"version": "v1"
}
] | 2021-06-30 | [
[
"Kumar",
"Pawan",
""
],
[
"Surulescu",
"Christina",
""
],
[
"Zhigun",
"Anna",
""
]
] | We propose a multiphase modeling approach to describe glioma pseudopalisade patterning under the influence of acidosis. The phases considered at the model onset are glioma, normal tissue, necrotic matter, and interstitial fluid in a void-free volume with acidity represented by proton concentration. We start from mass and momentum balance to characterize the respective volume fractions and deduce reaction-cross diffusion equations for the space-time evolution of glioma, normal tissue, and necrosis. These are supplemented with a reaction-diffusion equation for the acidity dynamics and lead to formation of patterns which are typical for high grade gliomas. Unlike previous works, our deduction also works in higher dimensions and involves less restrictions. We also investigate the existence of weak solutions to the obtained system of equations and perform numerical simulations to illustrate the solution behavior and the pattern occurrence. |
1903.08511 | Fares Al-Shargie | Fares Al-Shargie | Early Detection of Mental Stress Using Advanced Neuroimaging and
Artificial Intelligence | 190 Pages, 67 Figure, 9 Tables | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While different neuroimaging modalities have been proposed to detect mental
stress, each modality experiences certain limitations. This study proposed
novel approaches to detect stress based on fusion of EEG and fNIRS signals in
the feature-level using joint independent component analysis (jICA) and
canonical correlation analysis method (CCA) and predicted the level of stress
using machine-learning approach. The jICA and CCA were then developed to
combine the features to detect mental stress. The jICA fusion scheme discovers
relationships between modalities by utilizing ICA to identify sources from each
modality that modulate in the same way across subjects. The CCA fuses
information from two sets of features to discover the associations across
modalities and to ultimately estimate the sources responsible for these
associations. The study further explored the functional connectivity (FC) and
evaluated the performance of the fusion methods based on their classification
performance and compared it with the result obtained by each individual
modality. The jICA fusion technique significantly improved the classification
accuracy, sensitivity and specificity on average by +3.46% compared to the EEG
and +11.13% compared to the fNIRS. Similarly, CCA method improved the
classification accuracy, sensitivity and specificity on average by +8.56%
compared to the EEG and +13.03% compared to the fNIRS, respectively. The
overall performance of the proposed fusion methods significantly improved the
detection rate of mental stress, p<0.05. The FC was significantly reduced under
stress, suggesting EEG and fNIRS as potential biomarkers of stress.
| [
{
"created": "Wed, 20 Mar 2019 14:13:41 GMT",
"version": "v1"
}
] | 2019-03-21 | [
[
"Al-Shargie",
"Fares",
""
]
] | While different neuroimaging modalities have been proposed to detect mental stress, each modality experiences certain limitations. This study proposed novel approaches to detect stress based on fusion of EEG and fNIRS signals at the feature level using joint independent component analysis (jICA) and canonical correlation analysis (CCA) and predicted the level of stress using a machine-learning approach. The jICA and CCA were then developed to combine the features to detect mental stress. The jICA fusion scheme discovers relationships between modalities by utilizing ICA to identify sources from each modality that modulate in the same way across subjects. The CCA fuses information from two sets of features to discover the associations across modalities and to ultimately estimate the sources responsible for these associations. The study further explored the functional connectivity (FC) and evaluated the performance of the fusion methods based on their classification performance and compared it with the result obtained by each individual modality. The jICA fusion technique significantly improved the classification accuracy, sensitivity and specificity on average by +3.46% compared to the EEG and +11.13% compared to the fNIRS. Similarly, the CCA method improved the classification accuracy, sensitivity and specificity on average by +8.56% compared to the EEG and +13.03% compared to the fNIRS, respectively. The overall performance of the proposed fusion methods significantly improved the detection rate of mental stress, p<0.05. The FC was significantly reduced under stress, suggesting EEG and fNIRS as potential biomarkers of stress. |
1708.09097 | Kazunori Yamada | Kazunori D Yamada | Optimizing scoring function of dynamic programming of pairwise profile
alignment using derivative free neural network | null | null | 10.1186/s13015-018-0123-6 | null | q-bio.QM cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A profile comparison method with position-specific scoring matrix (PSSM) is
one of the most accurate alignment methods. Currently, cosine similarity and
correlation coefficient are used as scoring functions of dynamic programming to
calculate similarity between PSSMs. However, it is unclear whether these functions
are optimal for profile alignment methods. At least, by definition, these
functions cannot capture non-linear relationships between profiles. Therefore,
in this study, we attempted to discover a novel scoring function, which was
more suitable for the profile comparison method than the existing ones. Firstly,
we implemented a new derivative free neural network by combining the
conventional neural network with evolutionary strategy optimization method.
Next, using the framework, the scoring function was optimized for aligning
remote sequence pairs. Nepal, the pairwise profile aligner with the novel
scoring function, significantly improved both alignment sensitivity and
precision, compared to aligners with the existing functions. Nepal improved
alignment quality because of adaptation to remote sequence alignment and
increasing the expressive power of similarity score. The novel scoring function
can be realized using a simple matrix operation and easily incorporated into
other aligners. With our scoring function, the performance of homology
detection and/or multiple sequence alignment for remote homologous sequences
would be further improved.
| [
{
"created": "Wed, 30 Aug 2017 03:28:13 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Sep 2017 05:10:32 GMT",
"version": "v2"
}
] | 2018-02-20 | [
[
"Yamada",
"Kazunori D",
""
]
] | A profile comparison method with position-specific scoring matrix (PSSM) is one of the most accurate alignment methods. Currently, cosine similarity and correlation coefficient are used as scoring functions of dynamic programming to calculate similarity between PSSMs. However, it is unclear whether these functions are optimal for profile alignment methods. At least, by definition, these functions cannot capture non-linear relationships between profiles. Therefore, in this study, we attempted to discover a novel scoring function, which was more suitable for the profile comparison method than the existing ones. Firstly, we implemented a new derivative free neural network by combining the conventional neural network with evolutionary strategy optimization method. Next, using the framework, the scoring function was optimized for aligning remote sequence pairs. Nepal, the pairwise profile aligner with the novel scoring function, significantly improved both alignment sensitivity and precision, compared to aligners with the existing functions. Nepal improved alignment quality because of adaptation to remote sequence alignment and increasing the expressive power of similarity score. The novel scoring function can be realized using a simple matrix operation and easily incorporated into other aligners. With our scoring function, the performance of homology detection and/or multiple sequence alignment for remote homologous sequences would be further improved. |
1805.00765 | Thierry Huillet | Thierry Huillet (LPTM) | Karlin-McGregor mutational occupancy problem revisited | to appear: Journal of Statistical Physics | null | 10.1007/s10955-018-2056-3 | null | q-bio.PE cond-mat.stat-mech math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A population is made of n individuals that can be of p possible species
(or types). The update of the species abundance occupancies is from a Moran
mutational model designed by Karlin and McGregor in 1967. We first study the
equilibrium species counts as a function of n, p and the total mutation
probability $\nu$ before considering various asymptotic regimes on n, p and
$\nu$. Running title: KMG Model with Mutations.
| [
{
"created": "Wed, 2 May 2018 12:31:38 GMT",
"version": "v1"
}
] | 2018-05-23 | [
[
"Huillet",
"Thierry",
"",
"LPTM"
]
] | A population is made of n individuals that can be of p possible species (or types). The update of the species abundance occupancies is from a Moran mutational model designed by Karlin and McGregor in 1967. We first study the equilibrium species counts as a function of n, p and the total mutation probability $\nu$ before considering various asymptotic regimes on n, p and $\nu$. Running title: KMG Model with Mutations. |
1308.3675 | Victor Suriel | Valerie Cheathon, Agustin Flores, Victor Suriel, Octavious Talbot,
Dustin Padilla, Marta Sarzynska, Adrian Smith, Luis Melara | Dynamics and Control of an Invasive Species: The Case of the Rasberry
Crazy Ant Colonies | null | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This project is motivated by the costs related to the documented risks of
the introduction of non-native invasive species of plants, animals, or
pathogens associated with travel and international trade. Such invasive species
often have no natural enemies in their new regions. The spatiotemporal dynamics
related to the invasion/spread of Nylanderia fulva, commonly known as the
Rasberry crazy ant, are explored via the use of models that focus on the
reproduction of ant colonies. A Cellular Automaton (CA) simulates the spatially
explicit spread of ants on a grid. The impact of local spatial correlations on
the dynamics of invasion is investigated numerically and analytically with the
aid of a Mean Field (MF) model and a Pair Approximation (PA) model, the latter
of which accounts for adjacent cell level effects. The PA model approach
considers the limited mobility range of N. fulva, that is, the grid cell
dynamics are not strongly influenced by non-adjacent cells. The model determines
the rate of growth of colonies of N. fulva under distinct cell spatial
architecture. Numerical results and qualitative conclusions on the spread and
control of this invasive ant species are discussed.
| [
{
"created": "Fri, 26 Jul 2013 19:25:40 GMT",
"version": "v1"
}
] | 2013-08-19 | [
[
"Cheathon",
"Valerie",
""
],
[
"Flores",
"Agustin",
""
],
[
"Suriel",
"Victor",
""
],
[
"Talbot",
"Octavious",
""
],
[
"Padilla",
"Dustin",
""
],
[
"Sarzynska",
"Marta",
""
],
[
"Smith",
"Adrian",
""
],
[
"Melara",
"Luis",
""
]
] | This project is motivated by the costs related to the documented risks of the introduction of non-native invasive species of plants, animals, or pathogens associated with travel and international trade. Such invasive species often have no natural enemies in their new regions. The spatiotemporal dynamics related to the invasion/spread of Nylanderia fulva, commonly known as the Rasberry crazy ant, are explored via the use of models that focus on the reproduction of ant colonies. A Cellular Automaton (CA) simulates the spatially explicit spread of ants on a grid. The impact of local spatial correlations on the dynamics of invasion is investigated numerically and analytically with the aid of a Mean Field (MF) model and a Pair Approximation (PA) model, the latter of which accounts for adjacent cell level effects. The PA model approach considers the limited mobility range of N. fulva, that is, the grid cell dynamics are not strongly influenced by non-adjacent cells. The model determines the rate of growth of colonies of N. fulva under distinct cell spatial architecture. Numerical results and qualitative conclusions on the spread and control of this invasive ant species are discussed. |
2303.12775 | Carsten Baldauf | Xiaojuan Hu, Kazi S. Amin, Markus Schneider, Carmay Lim, Dennis
Salahub, Carsten Baldauf | System-specific parameter optimization for non-polarizable and
polarizable force fields | 62 pages and 25 figures (including SI), manuscript to be submitted
soon | null | null | null | q-bio.BM cond-mat.soft physics.bio-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The accuracy of classical force fields (FFs) has been shown to be limited for
the simulation of cation-protein systems despite their importance in
understanding the processes of life. Improvements can result from optimizing
the parameters of classical FFs or by extending the FF formulation by terms
describing charge transfer and polarization effects. In this work, we introduce
our implementation of the CTPOL model in OpenMM, which extends the classical
additive FF formula by adding charge transfer (CT) and polarization (POL).
Furthermore, we present an open-source parameterization tool, called FFAFFURR
that enables the (system specific) parameterization of OPLS-AA and CTPOL
models. The performance of our workflow was evaluated by its ability to
reproduce quantum chemistry energies and by molecular dynamics simulations of a
Zinc finger protein.
| [
{
"created": "Wed, 22 Mar 2023 17:42:01 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Oct 2023 12:12:04 GMT",
"version": "v2"
}
] | 2023-10-10 | [
[
"Hu",
"Xiaojuan",
""
],
[
"Amin",
"Kazi S.",
""
],
[
"Schneider",
"Markus",
""
],
[
"Lim",
"Carmay",
""
],
[
"Salahub",
"Dennis",
""
],
[
"Baldauf",
"Carsten",
""
]
] | The accuracy of classical force fields (FFs) has been shown to be limited for the simulation of cation-protein systems despite their importance in understanding the processes of life. Improvements can result from optimizing the parameters of classical FFs or by extending the FF formulation by terms describing charge transfer and polarization effects. In this work, we introduce our implementation of the CTPOL model in OpenMM, which extends the classical additive FF formula by adding charge transfer (CT) and polarization (POL). Furthermore, we present an open-source parameterization tool, called FFAFFURR that enables the (system specific) parameterization of OPLS-AA and CTPOL models. The performance of our workflow was evaluated by its ability to reproduce quantum chemistry energies and by molecular dynamics simulations of a Zinc finger protein. |
q-bio/0507026 | Georgy Karev | Artem S. Novozhilov, Georgy P. Karev, and Eugene V. Koonin | Biological applications of the theory of birth-and-death processes | 29 pages, 4 figures; submitted to "Briefings in Bioinformatics" | null | null | null | q-bio.QM q-bio.GN | null | In this review, we discuss the applications of the theory of birth-and-death
processes to problems in biology, primarily, those of evolutionary genomics.
The mathematical principles of the theory of these processes are briefly
described. Birth-and-death processes, with some straightforward additions such
as innovation, are a simple, natural formal framework for modeling a vast
variety of biological processes such as population dynamics, speciation, genome
evolution, including growth of paralogous gene families and horizontal gene
transfer, and somatic evolution of cancers. We further describe how empirical
data, e.g., distributions of paralogous gene family size, can be used to choose
the model that best reflects the actual course of evolution among different
versions of birth-death-and-innovation models. It is concluded that
birth-and-death processes, thanks to their mathematical transparency,
flexibility and relevance to fundamental biological processes, are going to be an
indispensable mathematical tool for the burgeoning field of systems biology.
| [
{
"created": "Fri, 15 Jul 2005 18:39:57 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Novozhilov",
"Artem S.",
""
],
[
"Karev",
"Georgy P.",
""
],
[
"Koonin",
"Eugene V.",
""
]
] | In this review, we discuss the applications of the theory of birth-and-death processes to problems in biology, primarily, those of evolutionary genomics. The mathematical principles of the theory of these processes are briefly described. Birth-and-death processes, with some straightforward additions such as innovation, are a simple, natural formal framework for modeling a vast variety of biological processes such as population dynamics, speciation, genome evolution, including growth of paralogous gene families and horizontal gene transfer, and somatic evolution of cancers. We further describe how empirical data, e.g., distributions of paralogous gene family size, can be used to choose the model that best reflects the actual course of evolution among different versions of birth-death-and-innovation models. It is concluded that birth-and-death processes, thanks to their mathematical transparency, flexibility and relevance to fundamental biological processes, are going to be an indispensable mathematical tool for the burgeoning field of systems biology. |
1509.03942 | Davide Barbieri | Davide Barbieri | Geometry and dimensionality reduction of feature spaces in primary
visual cortex | null | null | 10.1117/12.2187026 | null | q-bio.NC cs.CV math.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Some geometric properties of the wavelet analysis performed by visual neurons
are discussed and compared with experimental data. In particular, several
relationships between the cortical morphologies and the parametric dependencies
of extracted features are formalized and considered from a harmonic analysis
point of view.
| [
{
"created": "Mon, 14 Sep 2015 03:21:54 GMT",
"version": "v1"
}
] | 2016-01-20 | [
[
"Barbieri",
"Davide",
""
]
] | Some geometric properties of the wavelet analysis performed by visual neurons are discussed and compared with experimental data. In particular, several relationships between the cortical morphologies and the parametric dependencies of extracted features are formalized and considered from a harmonic analysis point of view. |
1312.6356 | Peter Csermely | Peter Csermely, Janos Hodsagi, Tamas Korcsmaros, Dezso Modos, Aron R.
Perez-Lopez, Kristof Szalay, Daniel V. Veres, Katalin Lenti, Ling-Yun Wu and
Xiang-Sun Zhang | Cancer stem cells display extremely large evolvability: alternating
plastic and rigid networks as a potential mechanism | Subtitle: Network models, novel therapeutic target strategies and the
contributions of hypoxia, inflammation and cellular senescence; 10 pages, 4
Tables, 1 Figure and 127 references | Seminars in Cancer Biology 30 (2015) 42-51 | 10.1016/j.semcancer.2013.12.004 | null | q-bio.MN cond-mat.dis-nn physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cancer is increasingly perceived as a systems-level, network phenomenon. The
major trend of malignant transformation can be described as a two-phase
process, where an initial increase of network plasticity is followed by a
decrease of plasticity at late stages of tumor development. The fluctuating
intensity of stress factors, like hypoxia, inflammation and the either
cooperative or hostile interactions of tumor inter-cellular networks, all
increase the adaptation potential of cancer cells. This may lead to the bypass
of cellular senescence, and to the development of cancer stem cells. We propose
that the central tenet of cancer stem cell definition lies exactly in the
indefinability of cancer stem cells. Actual properties of cancer stem cells
depend on the individual "stress-history" of the given tumor. Cancer stem cells
are characterized by an extremely large evolvability (i.e. a capacity to
generate heritable phenotypic variation), which corresponds well with the
defining hallmarks of cancer stem cells: the possession of the capacity to
self-renew and to repeatedly re-build the heterogeneous lineages of cancer
cells that comprise a tumor in new environments. Cancer stem cells represent a
cell population, which is adapted to adapt. We argue that the high evolvability
of cancer stem cells is helped by their repeated transitions between plastic
(proliferative, symmetrically dividing) and rigid (quiescent, asymmetrically
dividing, often more invasive) phenotypes having plastic and rigid networks.
Thus, cancer stem cells reverse and replay cancer development multiple times.
We describe network models potentially explaining cancer stem cell-like
behavior. Finally, we propose novel strategies including combination therapies
and multi-target drugs to overcome the Nietzschean dilemma of cancer stem cell
targeting: "what does not kill me makes me stronger".
| [
{
"created": "Sun, 22 Dec 2013 10:14:00 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Dec 2014 18:05:44 GMT",
"version": "v2"
}
] | 2014-12-24 | [
[
"Csermely",
"Peter",
""
],
[
"Hodsagi",
"Janos",
""
],
[
"Korcsmaros",
"Tamas",
""
],
[
"Modos",
"Dezso",
""
],
[
"Perez-Lopez",
"Aron R.",
""
],
[
"Szalay",
"Kristof",
""
],
[
"Veres",
"Daniel V.",
""
],
[
"Lenti",
"Katalin",
""
],
[
"Wu",
"Ling-Yun",
""
],
[
"Zhang",
"Xiang-Sun",
""
]
] | Cancer is increasingly perceived as a systems-level, network phenomenon. The major trend of malignant transformation can be described as a two-phase process, where an initial increase of network plasticity is followed by a decrease of plasticity at late stages of tumor development. The fluctuating intensity of stress factors, like hypoxia, inflammation and the either cooperative or hostile interactions of tumor inter-cellular networks, all increase the adaptation potential of cancer cells. This may lead to the bypass of cellular senescence, and to the development of cancer stem cells. We propose that the central tenet of cancer stem cell definition lies exactly in the indefinability of cancer stem cells. Actual properties of cancer stem cells depend on the individual "stress-history" of the given tumor. Cancer stem cells are characterized by an extremely large evolvability (i.e. a capacity to generate heritable phenotypic variation), which corresponds well with the defining hallmarks of cancer stem cells: the possession of the capacity to self-renew and to repeatedly re-build the heterogeneous lineages of cancer cells that comprise a tumor in new environments. Cancer stem cells represent a cell population, which is adapted to adapt. We argue that the high evolvability of cancer stem cells is helped by their repeated transitions between plastic (proliferative, symmetrically dividing) and rigid (quiescent, asymmetrically dividing, often more invasive) phenotypes having plastic and rigid networks. Thus, cancer stem cells reverse and replay cancer development multiple times. We describe network models potentially explaining cancer stem cell-like behavior. Finally, we propose novel strategies including combination therapies and multi-target drugs to overcome the Nietzschean dilemma of cancer stem cell targeting: "what does not kill me makes me stronger". |
2106.15852 | Michael Oellermann | Michael Oellermann, Jolle W. Jolles, Diego Ortiz, Rui Seabra, Tobias
Wenzel, Hannah Wilson, Richelle Tanner | Harnessing the Benefits of Open Electronics in Science | 20 pages, 3 figure, 2 tables | null | 10.1093/icb/icac043 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Freely and openly shared low-cost electronic applications, known as open
electronics, have sparked a new open-source movement, with much untapped
potential to advance scientific research. Initially designed to appeal to
electronic hobbyists, open electronics have formed a global community of
"makers" and inventors and are increasingly used in science and industry. Here,
we review the current benefits of open electronics for scientific research and
guide academics to enter this emerging field. We discuss how electronic
applications, from the experimental to the theoretical sciences, can help (I)
individual researchers by increasing the customization, efficiency, and
scalability of experiments, while improving data quantity and quality; (II)
scientific institutions by improving access and maintenance of high-end
technologies, visibility and interdisciplinary collaboration potential; and
(III) the scientific community by improving transparency and reproducibility,
helping decouple research capacity from funding, increasing innovation, and
improving collaboration potential among researchers and the public. Open
electronics are powerful tools to increase creativity, democratization, and
reproducibility of research and thus offer practical solutions to overcome
significant barriers in science.
| [
{
"created": "Wed, 30 Jun 2021 07:17:10 GMT",
"version": "v1"
}
] | 2022-07-14 | [
[
"Oellermann",
"Michael",
""
],
[
"Jolles",
"Jolle W.",
""
],
[
"Ortiz",
"Diego",
""
],
[
"Seabra",
"Rui",
""
],
[
"Wenzel",
"Tobias",
""
],
[
"Wilson",
"Hannah",
""
],
[
"Tanner",
"Richelle",
""
]
] | Freely and openly shared low-cost electronic applications, known as open electronics, have sparked a new open-source movement, with much untapped potential to advance scientific research. Initially designed to appeal to electronic hobbyists, open electronics have formed a global community of "makers" and inventors and are increasingly used in science and industry. Here, we review the current benefits of open electronics for scientific research and guide academics to enter this emerging field. We discuss how electronic applications, from the experimental to the theoretical sciences, can help (I) individual researchers by increasing the customization, efficiency, and scalability of experiments, while improving data quantity and quality; (II) scientific institutions by improving access and maintenance of high-end technologies, visibility and interdisciplinary collaboration potential; and (III) the scientific community by improving transparency and reproducibility, helping decouple research capacity from funding, increasing innovation, and improving collaboration potential among researchers and the public. Open electronics are powerful tools to increase creativity, democratization, and reproducibility of research and thus offer practical solutions to overcome significant barriers in science. |
q-bio/0611067 | Daniel G. M. Silvestre | Giovano de O. Cardozo, Daniel de A. M. M. Silvestre, Alexandre Colato | Periodical cicadas: a minimal automaton model | 13 pages, 6 figures, final version, conditionally accepted (Physica
A) | null | null | null | q-bio.PE | null | The Magicicada spp. life cycles with their prime periods and highly
synchronized emergence have defied reasonable scientific explanation since its
discovery. During the last decade several models and explanations for this
phenomenon appeared in the literature along with a great deal of discussion.
Despite this considerable effort, there is no final conclusion about this long
standing biological problem. Here, we construct a minimal automaton model
without predation/parasitism which reproduces some of these aspects. Our
results point towards competition between different strains with limited
dispersal threshold as the main factor leading to the emergence of
prime-numbered life cycles.
| [
{
"created": "Mon, 20 Nov 2006 21:27:12 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Nov 2006 19:33:45 GMT",
"version": "v2"
},
{
"created": "Mon, 4 Dec 2006 18:13:19 GMT",
"version": "v3"
},
{
"created": "Tue, 12 Dec 2006 04:02:22 GMT",
"version": "v4"
},
{
"created": "Wed, 28 Mar 2007 12:18:50 GMT",
"version": "v5"
}
] | 2007-05-23 | [
[
"Cardozo",
"Giovano de O.",
""
],
[
"Silvestre",
"Daniel de A. M. M.",
""
],
[
"Colato",
"Alexandre",
""
]
] | The Magicicada spp. life cycles with their prime periods and highly synchronized emergence have defied reasonable scientific explanation since its discovery. During the last decade several models and explanations for this phenomenon appeared in the literature along with a great deal of discussion. Despite this considerable effort, there is no final conclusion about this long-standing biological problem. Here, we construct a minimal automaton model without predation/parasitism which reproduces some of these aspects. Our results point towards competition between different strains with limited dispersal threshold as the main factor leading to the emergence of prime-numbered life cycles. |
1507.04331 | Heather Harrington | Elizabeth Gross, Brent Davis, Kenneth L. Ho, Daniel J. Bates, Heather
A. Harrington | Numerical algebraic geometry for model selection and its application to
the life sciences | References added, additional clarifications | null | null | null | q-bio.QM math.AG math.NA q-bio.MN stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Researchers working with mathematical models are often confronted by the
related problems of parameter estimation, model validation, and model
selection. These are all optimization problems, well-known to be challenging
due to non-linearity, non-convexity and multiple local optima. Furthermore, the
challenges are compounded when only partial data is available. Here, we
consider polynomial models (e.g., mass-action chemical reaction networks at
steady state) and describe a framework for their analysis based on optimization
using numerical algebraic geometry. Specifically, we use probability-one
polynomial homotopy continuation methods to compute all critical points of the
objective function, then filter to recover the global optima. Our approach
exploits the geometric structures relating models and data, and we demonstrate
its utility on examples from cell signaling, synthetic biology, and
epidemiology.
| [
{
"created": "Wed, 15 Jul 2015 19:12:52 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Sep 2015 18:51:39 GMT",
"version": "v2"
},
{
"created": "Fri, 1 Apr 2016 19:29:19 GMT",
"version": "v3"
}
] | 2016-04-04 | [
[
"Gross",
"Elizabeth",
""
],
[
"Davis",
"Brent",
""
],
[
"Ho",
"Kenneth L.",
""
],
[
"Bates",
"Daniel J.",
""
],
[
"Harrington",
"Heather A.",
""
]
] | Researchers working with mathematical models are often confronted by the related problems of parameter estimation, model validation, and model selection. These are all optimization problems, well-known to be challenging due to non-linearity, non-convexity and multiple local optima. Furthermore, the challenges are compounded when only partial data is available. Here, we consider polynomial models (e.g., mass-action chemical reaction networks at steady state) and describe a framework for their analysis based on optimization using numerical algebraic geometry. Specifically, we use probability-one polynomial homotopy continuation methods to compute all critical points of the objective function, then filter to recover the global optima. Our approach exploits the geometric structures relating models and data, and we demonstrate its utility on examples from cell signaling, synthetic biology, and epidemiology. |
2108.00905 | Selina Baeza Loya | Selina Baeza Loya | Spike timing regularity in vestibular afferent neurons: How ionic
currents influence sensory encoding mechanisms | 6 pages, 1 figure | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-sa/4.0/ | Primary vestibular neurons are categorized as either regularly or irregularly
firing afferents that use rate and temporal sensory encoding strategies,
respectively. While many factors influence firing in these neurons, recent work
in mammalian vestibular afferents has demonstrated a rich diversity in ion
channels that drive spiking regularity. Here, I review key ionic currents
studied in vitro and demonstrate how they may enable sensory encoding
strategies demonstrated in vivo.
| [
{
"created": "Mon, 2 Aug 2021 13:49:48 GMT",
"version": "v1"
}
] | 2021-08-03 | [
[
"Loya",
"Selina Baeza",
""
]
] | Primary vestibular neurons are categorized as either regularly or irregularly firing afferents that use rate and temporal sensory encoding strategies, respectively. While many factors influence firing in these neurons, recent work in mammalian vestibular afferents has demonstrated a rich diversity in ion channels that drive spiking regularity. Here, I review key ionic currents studied in vitro and demonstrate how they may enable sensory encoding strategies demonstrated in vivo. |
1411.2237 | Nadav M. Shnerb | Efrat Seri and Nadav M. Shnerb | Spatial patterns in the tropical forest reveal connections between
negative feedback, aggregation and abundance | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The spatial arrangement of trees in a tropical forest reflects the interplay
between aggregating processes, like dispersal limitation, and negative feedback
that induces effective repulsion among individuals. Monitoring the
variance-mean ratio for conspecific individuals along length-scales, we show
that the effect of negative feedback is dominant at short scales, while
aggregation characterizes the large-scale patterns. A comparison of different
species indicates, surprisingly, that both aggregation and negative feedback
scales are related to the overall abundance of the species. This suggests a
bottom-up control mechanism, in which the negative feedback dictates the
dispersal kernel and the overall abundance.
| [
{
"created": "Sun, 9 Nov 2014 14:03:32 GMT",
"version": "v1"
}
] | 2014-11-11 | [
[
"Seri",
"Efrat",
""
],
[
"Shnerb",
"Nadav M.",
""
]
] | The spatial arrangement of trees in a tropical forest reflects the interplay between aggregating processes, like dispersal limitation, and negative feedback that induces effective repulsion among individuals. Monitoring the variance-mean ratio for conspecific individuals along length-scales, we show that the effect of negative feedback is dominant at short scales, while aggregation characterizes the large-scale patterns. A comparison of different species indicates, surprisingly, that both aggregation and negative feedback scales are related to the overall abundance of the species. This suggests a bottom-up control mechanism, in which the negative feedback dictates the dispersal kernel and the overall abundance. |
2304.10450 | Jiaxin Wang | Jiaxin Wang, Heidi J. Renninger, Qin Ma, Shichao Jin | StoManager1: An Enhanced, Automated, and High-throughput Tool to Measure
Leaf Stomata and Guard Cell Metrics Using Empirical and Theoretical
Algorithms | 15 pages, 6 figures, 3 tables | Plant Physiology, kiae049, 2024 | 10.1093/plphys/kiae049 | kiae049 | q-bio.TO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Automated stomata detection and measuring are vital for understanding plant
physiological performance and ecological functioning in global water and carbon
cycles. Current methods are laborious, time-consuming, prone to bias, and
limited in scale. We developed StoManager1, a high-throughput tool utilizing
empirical and theoretical algorithms and convolutional neural networks to
automatically detect, count, and measure over 30 stomatal and guard cell
metrics, including stomata and guard cell area, length, width, and orientation,
stomatal evenness, divergence, and aggregation index. These metrics, combined
with leaf functional traits, explained 78% and 93% of productivity and
intrinsic water use efficiency (iWUE) variances in hardwoods, making them
significant factors in leaf physiology and tree growth. StoManager1
demonstrates exceptional precision and recall (mAP@0.5 over 0.993), effectively
capturing diverse stomatal properties across various species.
StoManager1 facilitates the automation of measuring leaf stomata, enabling
broader exploration of stomatal control in plant growth and adaptation to
environmental stress and climate change. This has implications for global gross
primary productivity (GPP) modeling and estimation, as integrating stomatal
metrics can enhance comprehension and predictions of plant growth and resource
usage worldwide. StoManager1's source code and an online demonstration are
available on GitHub (https://github.com/JiaxinWang123/StoManager.git), along
with a user-friendly Windows application on Zenodo
(https://doi.org/10.5281/zenodo.7686022).
| [
{
"created": "Thu, 20 Apr 2023 16:45:09 GMT",
"version": "v1"
},
{
"created": "Tue, 16 May 2023 20:52:22 GMT",
"version": "v2"
},
{
"created": "Thu, 25 May 2023 10:34:04 GMT",
"version": "v3"
}
] | 2024-04-04 | [
[
"Wang",
"Jiaxin",
""
],
[
"Renninger",
"Heidi J.",
""
],
[
"Ma",
"Qin",
""
],
[
"Jin",
"Shichao",
""
]
] | Automated stomata detection and measuring are vital for understanding plant physiological performance and ecological functioning in global water and carbon cycles. Current methods are laborious, time-consuming, prone to bias, and limited in scale. We developed StoManager1, a high-throughput tool utilizing empirical and theoretical algorithms and convolutional neural networks to automatically detect, count, and measure over 30 stomatal and guard cell metrics, including stomata and guard cell area, length, width, and orientation, stomatal evenness, divergence, and aggregation index. These metrics, combined with leaf functional traits, explained 78% and 93% of productivity and intrinsic water use efficiency (iWUE) variances in hardwoods, making them significant factors in leaf physiology and tree growth. StoManager1 demonstrates exceptional precision and recall (mAP@0.5 over 0.993), effectively capturing diverse stomatal properties across various species. StoManager1 facilitates the automation of measuring leaf stomata, enabling broader exploration of stomatal control in plant growth and adaptation to environmental stress and climate change. This has implications for global gross primary productivity (GPP) modeling and estimation, as integrating stomatal metrics can enhance comprehension and predictions of plant growth and resource usage worldwide. StoManager1's source code and an online demonstration are available on GitHub (https://github.com/JiaxinWang123/StoManager.git), along with a user-friendly Windows application on Zenodo (https://doi.org/10.5281/zenodo.7686022). |
q-bio/0703043 | Mingzhe Liu | Ruili Wang, Rui Jiang, Mingzhe Liu, Jiming Liu and Qing-song Wu | Effects of Langmuir Kinetics of Two-Lane Totally Asymmetric Exclusion
Processes in Protein Traffic | 15 pages, 8 figures. To be published in IJMPC | null | 10.1142/S0129183107011479 | null | q-bio.QM q-bio.BM | null | In this paper, we study a two-lane totally asymmetric simple exclusion
process (TASEP) coupled with random attachment and detachment of particles
(Langmuir kinetics) in both lanes under open boundary conditions. Our model can
describe the directed motion of molecular motors, attachment and detachment of
motors, and free inter-lane transition of motors between filaments. In this
paper, we focus on some finite-size effects of the system because normally the
sizes of most real systems are finite and small (e.g., size $\leq 10,000$). A
special finite-size effect of the two-lane system has been observed, which is
that the density wall moves left first and then moves towards the right with the
increase of the lane-changing rate. We call it the jumping effect. We find
that increasing attachment and detachment rates will weaken the jumping effect.
We also confirmed that when the size of the two-lane system is large enough,
the jumping effect disappears, and the two-lane system has a similar density
profile to a single-lane TASEP coupled with Langmuir kinetics. Increasing
lane-changing rates has little effect on density and current after the density
reaches maximum. Also, lane-changing rate has no effect on density profiles of
a two-lane TASEP coupled with Langmuir kinetics at a large
attachment/detachment rate and/or a large system size. Mean-field approximation
is presented and it agrees with our Monte Carlo simulations.
| [
{
"created": "Tue, 20 Mar 2007 09:10:33 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Wang",
"Ruili",
""
],
[
"Jiang",
"Rui",
""
],
[
"Liu",
"Mingzhe",
""
],
[
"Liu",
"Jiming",
""
],
[
"Wu",
"Qing-song",
""
]
] | In this paper, we study a two-lane totally asymmetric simple exclusion process (TASEP) coupled with random attachment and detachment of particles (Langmuir kinetics) in both lanes under open boundary conditions. Our model can describe the directed motion of molecular motors, attachment and detachment of motors, and free inter-lane transition of motors between filaments. In this paper, we focus on some finite-size effects of the system because normally the sizes of most real systems are finite and small (e.g., size $\leq 10,000$). A special finite-size effect of the two-lane system has been observed, which is that the density wall moves left first and then moves towards the right with the increase of the lane-changing rate. We call it the jumping effect. We find that increasing attachment and detachment rates will weaken the jumping effect. We also confirmed that when the size of the two-lane system is large enough, the jumping effect disappears, and the two-lane system has a similar density profile to a single-lane TASEP coupled with Langmuir kinetics. Increasing lane-changing rates has little effect on density and current after the density reaches maximum. Also, lane-changing rate has no effect on density profiles of a two-lane TASEP coupled with Langmuir kinetics at a large attachment/detachment rate and/or a large system size. Mean-field approximation is presented and it agrees with our Monte Carlo simulations. |
1207.6013 | Irmtraud Meyer | Jeff R. Proctor and Irmtraud M. Meyer | CoFold: thermodynamic RNA structure prediction with a kinetic twist | 30 pages (11 figures) | null | null | null | q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing state-of-the-art methods that take a single RNA sequence and predict
the corresponding RNA secondary-structure are thermodynamic methods. These
predict the most stable RNA structure, but do not consider the process of
structure formation. We have by now ample experimental and theoretical
evidence, however, that sequences in vivo fold while being transcribed and that
the process of structure formation matters. We here present a conceptually new
method for predicting RNA secondary-structure, called CoFold, that combines
thermodynamic with kinetic considerations. Our method significantly improves
the state of the art in terms of prediction accuracy, especially for long sequences
of more than a thousand nucleotides in length, such as ribosomal RNAs.
| [
{
"created": "Wed, 25 Jul 2012 14:52:57 GMT",
"version": "v1"
}
] | 2012-07-26 | [
[
"Proctor",
"Jeff R.",
""
],
[
"Meyer",
"Irmtraud M.",
""
]
] | Existing state-of-the-art methods that take a single RNA sequence and predict the corresponding RNA secondary-structure are thermodynamic methods. These predict the most stable RNA structure, but do not consider the process of structure formation. We have by now ample experimental and theoretical evidence, however, that sequences in vivo fold while being transcribed and that the process of structure formation matters. We here present a conceptually new method for predicting RNA secondary-structure, called CoFold, that combines thermodynamic with kinetic considerations. Our method significantly improves the state of the art in terms of prediction accuracy, especially for long sequences of more than a thousand nucleotides in length, such as ribosomal RNAs. |