id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1706.04970 | Min Xu | Xiangrui Zeng, Miguel Ricardo Leung, Tzviya Zeev-Ben-Mordehai, Min Xu | A convolutional autoencoder approach for mining features in cellular
electron cryo-tomograms and weakly supervised coarse segmentation | Accepted by Journal of Structural Biology | null | 10.1016/j.jsb.2017.12.015 | null | q-bio.QM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cellular electron cryo-tomography enables the 3D visualization of cellular
organization in the near-native state and at submolecular resolution. However,
the contents of cellular tomograms are often complex, making it difficult to
automatically isolate different in situ cellular components. In this paper, we
propose a convolutional autoencoder-based unsupervised approach to provide a
coarse grouping of 3D small subvolumes extracted from tomograms. We demonstrate
that the autoencoder can be used for efficient and coarse characterization of
features of macromolecular complexes and surfaces, such as membranes. In
addition, the autoencoder can be used to detect non-cellular features related
to sample preparation and data collection, such as carbon edges from the grid
and tomogram boundaries. The autoencoder is also able to detect patterns that
may indicate spatial interactions between cellular components. Furthermore, we
demonstrate that our autoencoder can be used for weakly supervised semantic
segmentation of cellular components, requiring a very small amount of manual
annotation.
| [
{
"created": "Thu, 15 Jun 2017 17:13:37 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Dec 2017 18:32:31 GMT",
"version": "v2"
}
] | 2017-12-29 | [
[
"Zeng",
"Xiangrui",
""
],
[
"Leung",
"Miguel Ricardo",
""
],
[
"Zeev-Ben-Mordehai",
"Tzviya",
""
],
[
"Xu",
"Min",
""
]
] | Cellular electron cryo-tomography enables the 3D visualization of cellular organization in the near-native state and at submolecular resolution. However, the contents of cellular tomograms are often complex, making it difficult to automatically isolate different in situ cellular components. In this paper, we propose a convolutional autoencoder-based unsupervised approach to provide a coarse grouping of 3D small subvolumes extracted from tomograms. We demonstrate that the autoencoder can be used for efficient and coarse characterization of features of macromolecular complexes and surfaces, such as membranes. In addition, the autoencoder can be used to detect non-cellular features related to sample preparation and data collection, such as carbon edges from the grid and tomogram boundaries. The autoencoder is also able to detect patterns that may indicate spatial interactions between cellular components. Furthermore, we demonstrate that our autoencoder can be used for weakly supervised semantic segmentation of cellular components, requiring a very small amount of manual annotation. |
1012.5343 | Ilya M. Nemenman | Jakub Otwinowski, Sorin Tanase-Nicola, and Ilya Nemenman | Speeding up evolutionary search by small fitness fluctuations | 12 pages, 5 figures | J Stat Phys 144 (2), 367-378, 2011 | 10.1007/s10955-011-0199-6 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a fixed size population that undergoes an evolutionary adaptation
in the weak mutation rate limit, which we model as a biased Langevin process
in the genotype space. We show analytically and numerically that, if the
fitness landscape has a small highly epistatic (rough) and time-varying
component, then the population genotype exhibits a high effective diffusion in
the genotype space and is able to escape local fitness minima with a large
probability. We argue that our principal finding that even very small
time-dependent fluctuations of fitness can substantially speed up evolution is
valid for a wide class of models.
| [
{
"created": "Fri, 24 Dec 2010 05:49:53 GMT",
"version": "v1"
}
] | 2014-02-04 | [
[
"Otwinowski",
"Jakub",
""
],
[
"Tanase-Nicola",
"Sorin",
""
],
[
"Nemenman",
"Ilya",
""
]
] | We consider a fixed size population that undergoes an evolutionary adaptation in the weak mutation rate limit, which we model as a biased Langevin process in the genotype space. We show analytically and numerically that, if the fitness landscape has a small highly epistatic (rough) and time-varying component, then the population genotype exhibits a high effective diffusion in the genotype space and is able to escape local fitness minima with a large probability. We argue that our principal finding that even very small time-dependent fluctuations of fitness can substantially speed up evolution is valid for a wide class of models. |
2010.02704 | Yue Wang | Yue Wang and Boyu Zhang and J\'er\'emie Kropp and Nadya Morozova | Inference on tissue transplantation experiments | null | Journal of Theoretical Biology, 520 (2021), 110645 | null | null | q-bio.QM q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We review studies on tissue transplantation experiments for various species:
one piece of the donor tissue is excised and transplanted into a slit in the
host tissue, and the behavior of this grafted tissue is then observed. Although
the results of some transplantation experiments are known, there are many more
possible experiments with unknown results. We develop a penalty function-based
method that uses the known experimental results to infer the unknown
experimental results. Similar experiments without similar results get penalized
and correspond to smaller probability. This method can provide the most
probable results of a group of experiments or the probability of a specific
result for each experiment. This method is also generalized to other
situations. Besides, we solve a problem: how to design experiments so that such
a method can be applied most efficiently.
| [
{
"created": "Tue, 6 Oct 2020 13:26:57 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Oct 2020 19:12:05 GMT",
"version": "v2"
},
{
"created": "Thu, 18 Feb 2021 11:27:22 GMT",
"version": "v3"
},
{
"created": "Tue, 1 Nov 2022 06:53:27 GMT",
"version": "v4"
}
] | 2022-11-02 | [
[
"Wang",
"Yue",
""
],
[
"Zhang",
"Boyu",
""
],
[
"Kropp",
"Jérémie",
""
],
[
"Morozova",
"Nadya",
""
]
] | We review studies on tissue transplantation experiments for various species: one piece of the donor tissue is excised and transplanted into a slit in the host tissue, and the behavior of this grafted tissue is then observed. Although the results of some transplantation experiments are known, there are many more possible experiments with unknown results. We develop a penalty function-based method that uses the known experimental results to infer the unknown experimental results. Similar experiments without similar results get penalized and correspond to smaller probability. This method can provide the most probable results of a group of experiments or the probability of a specific result for each experiment. This method is also generalized to other situations. Besides, we solve a problem: how to design experiments so that such a method can be applied most efficiently. |
1809.03934 | Pranav Reddy | Pranav G. Reddy, Richard F. Betzel, Ankit N. Khambhati, Preya Shah,
Lohith Kini, Brian Litt, Thomas H. Lucas, Kathryn A. Davis, Danielle S.
Bassett | Genetic and Neuroanatomical Support for Functional Brain Network
Dynamics in Epilepsy | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Focal epilepsy is a devastating neurological disorder that affects an
overwhelming number of patients worldwide, many of whom prove resistant to
medication. The efficacy of current innovative technologies for the treatment
of these patients has been stalled by the lack of accurate and effective
methods to fuse multimodal neuroimaging data to map anatomical targets driving
seizure dynamics. Here we propose a parsimonious model that explains how
large-scale anatomical networks and shared genetic constraints shape
inter-regional communication in focal epilepsy. In extensive ECoG recordings
acquired from a group of patients with medically refractory focal-onset
epilepsy, we find that ictal and preictal functional brain network dynamics can
be accurately predicted from features of brain anatomy and geometry, patterns
of white matter connectivity, and constraints complicit in patterns of gene
coexpression, all of which are conserved across healthy adult populations.
Moreover, we uncover evidence that markers of non-conserved architecture,
potentially driven by idiosyncratic pathology of single subjects, are most
prevalent in high frequency ictal dynamics and low frequency preictal dynamics.
Finally, we find that ictal dynamics are better predicted by white matter
features and more poorly predicted by geometry and genetic constraints than
preictal dynamics, suggesting that the functional brain network dynamics
manifest in seizures rely on - and may directly propagate along - underlying
white matter structure that is largely conserved across humans. Broadly, our
work offers insights into the generic architectural principles of the human
brain that impact seizure dynamics, and could be extended to further our
understanding, models, and predictions of subject-level pathology and response
to intervention.
| [
{
"created": "Tue, 11 Sep 2018 14:37:44 GMT",
"version": "v1"
}
] | 2018-09-12 | [
[
"Reddy",
"Pranav G.",
""
],
[
"Betzel",
"Richard F.",
""
],
[
"Khambhati",
"Ankit N.",
""
],
[
"Shah",
"Preya",
""
],
[
"Kini",
"Lohith",
""
],
[
"Litt",
"Brian",
""
],
[
"Lucas",
"Thomas H.",
""
],
[
"Davis",
"Kathryn A.",
""
],
[
"Bassett",
"Danielle S.",
""
]
] | Focal epilepsy is a devastating neurological disorder that affects an overwhelming number of patients worldwide, many of whom prove resistant to medication. The efficacy of current innovative technologies for the treatment of these patients has been stalled by the lack of accurate and effective methods to fuse multimodal neuroimaging data to map anatomical targets driving seizure dynamics. Here we propose a parsimonious model that explains how large-scale anatomical networks and shared genetic constraints shape inter-regional communication in focal epilepsy. In extensive ECoG recordings acquired from a group of patients with medically refractory focal-onset epilepsy, we find that ictal and preictal functional brain network dynamics can be accurately predicted from features of brain anatomy and geometry, patterns of white matter connectivity, and constraints complicit in patterns of gene coexpression, all of which are conserved across healthy adult populations. Moreover, we uncover evidence that markers of non-conserved architecture, potentially driven by idiosyncratic pathology of single subjects, are most prevalent in high frequency ictal dynamics and low frequency preictal dynamics. Finally, we find that ictal dynamics are better predicted by white matter features and more poorly predicted by geometry and genetic constraints than preictal dynamics, suggesting that the functional brain network dynamics manifest in seizures rely on - and may directly propagate along - underlying white matter structure that is largely conserved across humans. Broadly, our work offers insights into the generic architectural principles of the human brain that impact seizure dynamics, and could be extended to further our understanding, models, and predictions of subject-level pathology and response to intervention. |
2308.05133 | Rohan Kumar Gupta | Rohan Kumar Gupta and Rohit Sinha | Analyzing the Effect of Data Impurity on the Detection Performances of
Mental Disorders | null | null | null | null | q-bio.NC cs.LG cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The primary method for identifying mental disorders automatically has
traditionally involved using binary classifiers. These classifiers are trained
using behavioral data obtained from an interview setup. In this training
process, data from individuals with the specific disorder under consideration
are categorized as the positive class, while data from all other participants
constitute the negative class. In practice, it is widely recognized that
certain mental disorders share similar symptoms, causing the collected
behavioral data to encompass a variety of attributes associated with multiple
disorders. Consequently, attributes linked to the targeted mental disorder
might also be present within the negative class. This data impurity may lead to
sub-optimal training of the classifier for a mental disorder of interest. In
this study, we investigate this hypothesis in the context of major depressive
disorder (MDD) and post-traumatic stress disorder (PTSD) detection. The results
show that upon removal of such data impurity, MDD and PTSD detection
performances are significantly improved.
| [
{
"created": "Wed, 9 Aug 2023 13:13:26 GMT",
"version": "v1"
}
] | 2023-08-11 | [
[
"Gupta",
"Rohan Kumar",
""
],
[
"Sinha",
"Rohit",
""
]
] | The primary method for identifying mental disorders automatically has traditionally involved using binary classifiers. These classifiers are trained using behavioral data obtained from an interview setup. In this training process, data from individuals with the specific disorder under consideration are categorized as the positive class, while data from all other participants constitute the negative class. In practice, it is widely recognized that certain mental disorders share similar symptoms, causing the collected behavioral data to encompass a variety of attributes associated with multiple disorders. Consequently, attributes linked to the targeted mental disorder might also be present within the negative class. This data impurity may lead to sub-optimal training of the classifier for a mental disorder of interest. In this study, we investigate this hypothesis in the context of major depressive disorder (MDD) and post-traumatic stress disorder (PTSD) detection. The results show that upon removal of such data impurity, MDD and PTSD detection performances are significantly improved. |
2308.10917 | Liangrui Pan | Liangrui Pan, Dazheng Liu, Zhichao Feng, Wenjuan Liu, Shaoliang Peng | PACS: Prediction and analysis of cancer subtypes from multi-omics data
based on a multi-head attention mechanism model | Submitted to BIBM2023 | null | null | null | q-bio.QM cs.LG cs.MM | http://creativecommons.org/licenses/by/4.0/ | Due to the high heterogeneity and clinical characteristics of cancer, there
are significant differences in multi-omic data and clinical characteristics
among different cancer subtypes. Therefore, accurate classification of cancer
subtypes can help doctors choose the most appropriate treatment options,
improve treatment outcomes, and provide more accurate patient survival
predictions. In this study, we propose a supervised multi-head attention
mechanism model (SMA) to classify cancer subtypes successfully. The attention
mechanism and feature sharing module of the SMA model can successfully learn
the global and local feature information of multi-omics data. Second, it
enriches the parameters of the model by deeply fusing multi-head attention
encoders from Siamese through the fusion module. Validated by extensive
experiments, the SMA model achieves the highest accuracy, F1 macroscopic, F1
weighted, and accurate classification of cancer subtypes in simulated,
single-cell, and cancer multiomics datasets compared to AE, CNN, and GNN-based
models. Therefore, we contribute to future research on multiomics data using
our attention-based approach.
| [
{
"created": "Mon, 21 Aug 2023 03:54:21 GMT",
"version": "v1"
}
] | 2023-08-23 | [
[
"Pan",
"Liangrui",
""
],
[
"Liu",
"Dazheng",
""
],
[
"Feng",
"Zhichao",
""
],
[
"Liu",
"Wenjuan",
""
],
[
"Peng",
"Shaoliang",
""
]
] | Due to the high heterogeneity and clinical characteristics of cancer, there are significant differences in multi-omic data and clinical characteristics among different cancer subtypes. Therefore, accurate classification of cancer subtypes can help doctors choose the most appropriate treatment options, improve treatment outcomes, and provide more accurate patient survival predictions. In this study, we propose a supervised multi-head attention mechanism model (SMA) to classify cancer subtypes successfully. The attention mechanism and feature sharing module of the SMA model can successfully learn the global and local feature information of multi-omics data. Second, it enriches the parameters of the model by deeply fusing multi-head attention encoders from Siamese through the fusion module. Validated by extensive experiments, the SMA model achieves the highest accuracy, F1 macroscopic, F1 weighted, and accurate classification of cancer subtypes in simulated, single-cell, and cancer multiomics datasets compared to AE, CNN, and GNN-based models. Therefore, we contribute to future research on multiomics data using our attention-based approach. |
1601.02189 | Ido Kanter | Amir Goldental, Pinhas Sabo, Shira Sardi, Roni Vardi and Ido Kanter | Mimicking Collective Firing Patterns of Hundreds of Connected Neurons
using a Single-Neuron Experiment | 26 pages and 6 figures,
http://journal.frontiersin.org/article/10.3389/fnins.2015.00508/ | Front. Neurosci. 9:508 (2015) | 10.3389/fnins.2015.00508 | null | q-bio.NC cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The experimental study of neural networks requires simultaneous measurements
of a massive number of neurons, while monitoring properties of the
connectivity, synaptic strengths and delays. Current technological barriers
make such a mission unachievable. In addition, as a result of the enormous
number of required measurements, the estimated network parameters would differ
from the original ones. Here we present a versatile experimental technique,
which enables the study of recurrent neural network activity while being
capable of dictating the network connectivity and synaptic strengths. This
method is based on the observation that the response of neurons depends solely
on their recent stimulations, a short-term memory. It allows a long-term scheme
of stimulation and recording of a single neuron, to mimic simultaneous activity
measurements of neurons in a recurrent network. Utilization of this technique
demonstrates the spontaneous emergence of cooperative synchronous oscillations,
in particular the coexistence of fast Gamma and slow Delta oscillations, and
opens the horizon for the experimental study of other cooperative phenomena
within large-scale neural networks.
| [
{
"created": "Sun, 10 Jan 2016 09:07:09 GMT",
"version": "v1"
}
] | 2016-01-12 | [
[
"Goldental",
"Amir",
""
],
[
"Sabo",
"Pinhas",
""
],
[
"Sardi",
"Shira",
""
],
[
"Vardi",
"Roni",
""
],
[
"Kanter",
"Ido",
""
]
] | The experimental study of neural networks requires simultaneous measurements of a massive number of neurons, while monitoring properties of the connectivity, synaptic strengths and delays. Current technological barriers make such a mission unachievable. In addition, as a result of the enormous number of required measurements, the estimated network parameters would differ from the original ones. Here we present a versatile experimental technique, which enables the study of recurrent neural network activity while being capable of dictating the network connectivity and synaptic strengths. This method is based on the observation that the response of neurons depends solely on their recent stimulations, a short-term memory. It allows a long-term scheme of stimulation and recording of a single neuron, to mimic simultaneous activity measurements of neurons in a recurrent network. Utilization of this technique demonstrates the spontaneous emergence of cooperative synchronous oscillations, in particular the coexistence of fast Gamma and slow Delta oscillations, and opens the horizon for the experimental study of other cooperative phenomena within large-scale neural networks. |
0704.2260 | Frederick Matsen IV | Frederick A. Matsen and Mike Steel | Phylogenetic mixtures on a single tree can mimic a tree of another
topology | null | null | null | null | q-bio.PE | null | Phylogenetic mixtures model the inhomogeneous molecular evolution commonly
observed in data. The performance of phylogenetic reconstruction methods where
the underlying data is generated by a mixture model has stimulated considerable
recent debate. Much of the controversy stems from simulations of mixture model
data on a given tree topology for which reconstruction algorithms output a tree
of a different topology; these findings were held up to show the shortcomings
of particular tree reconstruction methods. In so doing, the underlying
assumption was that mixture model data on one topology can be distinguished
from data evolved on an unmixed tree of another topology given enough data and
the ``correct'' method. Here we show that this assumption can be false. For
biologists our results imply that, for example, the combined data from two
genes whose phylogenetic trees differ only in terms of branch lengths can
perfectly fit a tree of a different topology.
| [
{
"created": "Wed, 18 Apr 2007 02:46:06 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Jun 2007 23:49:09 GMT",
"version": "v2"
},
{
"created": "Sat, 30 Jun 2007 17:36:20 GMT",
"version": "v3"
}
] | 2007-06-30 | [
[
"Matsen",
"Frederick A.",
""
],
[
"Steel",
"Mike",
""
]
] | Phylogenetic mixtures model the inhomogeneous molecular evolution commonly observed in data. The performance of phylogenetic reconstruction methods where the underlying data is generated by a mixture model has stimulated considerable recent debate. Much of the controversy stems from simulations of mixture model data on a given tree topology for which reconstruction algorithms output a tree of a different topology; these findings were held up to show the shortcomings of particular tree reconstruction methods. In so doing, the underlying assumption was that mixture model data on one topology can be distinguished from data evolved on an unmixed tree of another topology given enough data and the ``correct'' method. Here we show that this assumption can be false. For biologists our results imply that, for example, the combined data from two genes whose phylogenetic trees differ only in terms of branch lengths can perfectly fit a tree of a different topology. |
0711.0169 | Tobias Galla | Tobias Galla | Relative population size, co-operation pressure and strategy correlation
in two-population evolutionary dynamics | 9 pages, 10 figures | null | null | null | q-bio.PE | null | We study the coupled dynamics of two populations of random replicators by
means of statistical mechanics methods, and focus on the effects of relative
population size, strategy correlations and heterogeneities in the respective
co-operation pressures. To this end we generalise existing path-integral
approaches to replicator systems with random asymmetric couplings. This
technique allows one to formulate an effective dynamical theory, which is exact
in the thermodynamic limit and which can be solved for persistent order
parameters in a fixed-point regime regardless of the symmetry of the
interactions. The onset of instability can be determined self-consistently. We
calculate quantities such as the diversity of the respective populations and
their fitnesses in the stationary state, and compare results with data from a
numerical integration of the replicator equations.
| [
{
"created": "Thu, 1 Nov 2007 17:40:54 GMT",
"version": "v1"
}
] | 2007-11-02 | [
[
"Galla",
"Tobias",
""
]
] | We study the coupled dynamics of two populations of random replicators by means of statistical mechanics methods, and focus on the effects of relative population size, strategy correlations and heterogeneities in the respective co-operation pressures. To this end we generalise existing path-integral approaches to replicator systems with random asymmetric couplings. This technique allows one to formulate an effective dynamical theory, which is exact in the thermodynamic limit and which can be solved for persistent order parameters in a fixed-point regime regardless of the symmetry of the interactions. The onset of instability can be determined self-consistently. We calculate quantities such as the diversity of the respective populations and their fitnesses in the stationary state, and compare results with data from a numerical integration of the replicator equations. |
1711.03834 | Eve Armstrong | Eve Armstrong | Statistical data assimilation for estimating electrophysiology
simultaneously with connectivity within a biological neuronal network | 15 pages and 7 figures (without appendices). arXiv admin note: text
overlap with arXiv:1706.03296 | Phys. Rev. E 101, 012415 (2020) | 10.1103/PhysRevE.101.012415 | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A method of data assimilation (DA) is employed to estimate
electrophysiological parameters of neurons simultaneously with their synaptic
connectivity in a small model biological network. The DA procedure is cast as
an optimization, with a cost function consisting of both a measurement error
and a model error term. An iterative reweighting of these terms permits a
systematic method to identify the lowest minimum, within a local region of
state space, on the surface of a non-convex cost function. In the model, two
sets of parameter values are associated with two particular functional modes of
network activity: simultaneous firing of all neurons, and a pattern-generating
mode wherein the neurons burst in sequence. The DA procedure is able to recover
these modes if: i) the stimulating electrical currents have chaotic waveforms,
and ii) the measurements consist of the membrane voltages of all neurons in the
circuit. Further, this method is able to prune a model of unnecessarily high
dimensionality to a representation that contains the maximum dimensionality
required to reproduce the provided measurements. This paper offers a
proof-of-concept that DA has the potential to inform laboratory designs for
estimating properties in small and isolatable functional circuits.
| [
{
"created": "Thu, 9 Nov 2017 18:33:03 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Nov 2017 23:59:07 GMT",
"version": "v2"
},
{
"created": "Mon, 4 Dec 2017 00:38:20 GMT",
"version": "v3"
},
{
"created": "Mon, 11 Dec 2017 20:44:51 GMT",
"version": "v4"
},
{
"created": "Tue, 4 Sep 2018 17:48:11 GMT",
"version": "v5"
},
{
"created": "Fri, 10 May 2019 19:05:50 GMT",
"version": "v6"
},
{
"created": "Fri, 22 Nov 2019 19:22:04 GMT",
"version": "v7"
},
{
"created": "Mon, 2 Dec 2019 23:23:14 GMT",
"version": "v8"
}
] | 2020-02-05 | [
[
"Armstrong",
"Eve",
""
]
] | A method of data assimilation (DA) is employed to estimate electrophysiological parameters of neurons simultaneously with their synaptic connectivity in a small model biological network. The DA procedure is cast as an optimization, with a cost function consisting of both a measurement error and a model error term. An iterative reweighting of these terms permits a systematic method to identify the lowest minimum, within a local region of state space, on the surface of a non-convex cost function. In the model, two sets of parameter values are associated with two particular functional modes of network activity: simultaneous firing of all neurons, and a pattern-generating mode wherein the neurons burst in sequence. The DA procedure is able to recover these modes if: i) the stimulating electrical currents have chaotic waveforms, and ii) the measurements consist of the membrane voltages of all neurons in the circuit. Further, this method is able to prune a model of unnecessarily high dimensionality to a representation that contains the maximum dimensionality required to reproduce the provided measurements. This paper offers a proof-of-concept that DA has the potential to inform laboratory designs for estimating properties in small and isolatable functional circuits. |
1310.8634 | Bj{\o}rn {\O}stman | Bj{\o}rn {\O}stman and Randall Lin and Christoph Adami | Trade-offs drive resource specialization and the gradual establishment
of ecotypes | 19 pages, 3 figures | BMC Evol. Biol. 14 (2014) 113 | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Speciation is driven by many different factors. Among those are trade-offs
between different ways an organism utilizes resources, and these trade-offs can
constrain the manner in which selection can optimize traits. Limited migration
among allopatric populations and species interactions can also drive
speciation, but here we ask if trade-offs alone are sufficient to drive
speciation in the absence of other factors. We present a model to study the
effects of trade-offs on specialization and adaptive radiation in asexual
organisms based solely on competition for limiting resources, where trade-offs
are stronger the greater an organism's ability to utilize resources. In this
model resources are perfectly substitutable, and fitness is derived from the
consumption of these resources. The model contains no spatial parameters, and
is therefore strictly sympatric. We quantify the degree of specialization by
the number of ecotypes formed and the niche breadth of the population, and
observe that these are sensitive to resource influx and trade-offs. Resource
influx has a strong effect on the degree of specialization, with a clear
transition between minimal diversification at high influx and multiple species
evolving at low resource influx. At low resource influx the degree of
specialization further depends on the strength of the trade-offs, with more
ecotypes evolving the stronger trade-offs are. The specialized organisms
persist through negative frequency-dependent selection. In addition, by
analyzing one of the evolutionary radiations in greater detail we demonstrate
that a single mutation alone is not enough to establish a new ecotype, even
though phylogenetic reconstruction identifies that mutation as the branching
point. Instead, it takes a series of additional mutations to ensure the stable
coexistence of the new ecotype in the background of the existing ones,
reminiscent of a recent observa
| [
{
"created": "Thu, 31 Oct 2013 18:46:47 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Mar 2014 21:09:13 GMT",
"version": "v2"
}
] | 2014-08-12 | [
[
"Østman",
"Bjørn",
""
],
[
"Lin",
"Randall",
""
],
[
"Adami",
"Christoph",
""
]
] | Speciation is driven by many different factors. Among those are trade-offs between different ways an organism utilizes resources, and these trade-offs can constrain the manner in which selection can optimize traits. Limited migration among allopatric populations and species interactions can also drive speciation, but here we ask if trade-offs alone are sufficient to drive speciation in the absence of other factors. We present a model to study the effects of trade-offs on specialization and adaptive radiation in asexual organisms based solely on competition for limiting resources, where trade-offs are stronger the greater an organism's ability to utilize resources. In this model resources are perfectly substitutable, and fitness is derived from the consumption of these resources. The model contains no spatial parameters, and is therefore strictly sympatric. We quantify the degree of specialization by the number of ecotypes formed and the niche breadth of the population, and observe that these are sensitive to resource influx and trade-offs. Resource influx has a strong effect on the degree of specialization, with a clear transition between minimal diversification at high influx and multiple species evolving at low resource influx. At low resource influx the degree of specialization further depends on the strength of the trade-offs, with more ecotypes evolving the stronger trade-offs are. The specialized organisms persist through negative frequency-dependent selection. In addition, by analyzing one of the evolutionary radiations in greater detail we demonstrate that a single mutation alone is not enough to establish a new ecotype, even though phylogenetic reconstruction identifies that mutation as the branching point. Instead, it takes a series of additional mutations to ensure the stable coexistence of the new ecotype in the background of the existing ones, reminiscent of a recent observa |
0908.4145 | Hiizu Nakanishi | Hiizu Nakanishi, Margit Pedersen, Anne K. Alsing, and Kim Sneppen | Modeling of the genetic switch of bacteriophage TP901-1: A heteromer of
CI and MOR ensures robust bistability | 12 pages, 9 figures with supplementary material | null | null | null | q-bio.MN q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The lytic-lysogenic switch of the temperate lactococcal phage TP901-1 is
fundamentally different from that of phage lambda. In phage TP901-1, the lytic
promoter PL is repressed by CI whereas repression of the lysogenic promoter PR
requires the presence of both of the antagonistic regulator proteins, MOR and
CI. We model the central part of the switch and compare the two cases for PR
repression: the one where the two regulators interact only on the DNA, and the
other where the two regulators form a heteromer complex in the cytoplasm prior
to DNA binding. The models are analyzed for bistability, and the predicted
promoter repression folds are compared to experimental data. We conclude that
the experimental data are best reproduced in the latter case, where a heteromer
complex forms in solution. We further find that CI sequestration by the
formation of MOR:CI complexes in the cytoplasm makes the genetic switch robust.
| [
{
"created": "Fri, 28 Aug 2009 11:04:41 GMT",
"version": "v1"
}
] | 2009-08-31 | [
[
"Nakanishi",
"Hiizu",
""
],
[
"Pedersen",
"Margit",
""
],
[
"Alsing",
"Anne K.",
""
],
[
"Sneppen",
"Kim",
""
]
] | The lytic-lysogenic switch of the temperate lactococcal phage TP901-1 is fundamentally different from that of phage lambda. In phage TP901-1, the lytic promoter PL is repressed by CI whereas repression of the lysogenic promoter PR requires the presence of both of the antagonistic regulator proteins, MOR and CI. We model the central part of the switch and compare the two cases for PR repression: the one where the two regulators interact only on the DNA, and the other where the two regulators form a heteromer complex in the cytoplasm prior to DNA binding. The models are analyzed for bistability, and the predicted promoter repression folds are compared to experimental data. We conclude that the experimental data are best reproduced in the latter case, where a heteromer complex forms in solution. We further find that CI sequestration by the formation of MOR:CI complexes in the cytoplasm makes the genetic switch robust.
1911.00081 | Hao-Chih Lee | Hao-Chih Lee, Matteo Danieletto, Riccardo Miotto, Sarah T. Cherng and
Joel T. Dudley | Scaling structural learning with NO-BEARS to infer causal transcriptome
networks | Preprint of an article submitted for consideration in Pacific
Symposium on Biocomputing copyright 2019 World Scientific Publishing Co.,
Singapore, http://psb.stanford.edu/ | null | null | null | q-bio.GN cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Constructing gene regulatory networks is a critical step in revealing disease
mechanisms from transcriptomic data. In this work, we present NO-BEARS, a novel
algorithm for estimating gene regulatory networks. The NO-BEARS algorithm is
built on the basis of the NOTEARS algorithm with two improvements. First, we
propose a new constraint and its fast approximation to reduce the computational
cost of the NOTEARS algorithm. Next, we introduce a polynomial regression loss
to handle non-linearity in gene expressions. Our implementation utilizes modern
GPU computation that can decrease the time of hours-long CPU computation to
seconds. Using synthetic data, we demonstrate improved performance, both in
processing time and accuracy, on inferring gene regulatory networks from gene
expression data.
| [
{
"created": "Thu, 31 Oct 2019 19:52:18 GMT",
"version": "v1"
}
] | 2019-11-04 | [
[
"Lee",
"Hao-Chih",
""
],
[
"Danieletto",
"Matteo",
""
],
[
"Miotto",
"Riccardo",
""
],
[
"Cherng",
"Sarah T.",
""
],
[
"Dudley",
"Joel T.",
""
]
] | Constructing gene regulatory networks is a critical step in revealing disease mechanisms from transcriptomic data. In this work, we present NO-BEARS, a novel algorithm for estimating gene regulatory networks. The NO-BEARS algorithm is built on the basis of the NOTEARS algorithm with two improvements. First, we propose a new constraint and its fast approximation to reduce the computational cost of the NOTEARS algorithm. Next, we introduce a polynomial regression loss to handle non-linearity in gene expressions. Our implementation utilizes modern GPU computation that can decrease the time of hours-long CPU computation to seconds. Using synthetic data, we demonstrate improved performance, both in processing time and accuracy, on inferring gene regulatory networks from gene expression data.
0903.1475 | Eugene Shakhnovich | Peiqiu Chen, Eugene I. Shakhnovich | Lethal Mutagenesis in Viruses and Bacteria | null | null | null | null | q-bio.BM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Here we study how mutations which change physical properties of cell proteins
(stability) impact population survival and growth. In our model the genotype is
represented as a set of N numbers, the folding free energies of the cell's N
proteins. Mutations occur upon replication so that the stabilities of some
proteins in daughter cells differ from those in the parent cell by random
amounts drawn from the experimental distribution of mutational effects on
protein stability. The genotype-phenotype relationship posits that unstable
proteins confer a lethal phenotype to a cell and, in addition, that the cell's
fitness (duplication rate) is proportional to the concentration of its folded
proteins. Simulations reveal that lethal mutagenesis occurs at mutation rates
close to 7 mutations per genome per replication for RNA viruses and about half
of that for DNA-based organisms, in accord with earlier predictions from
analytical theory and experiment. This number appears somewhat dependent on the
number of genes in the organism and the natural death rate. Further, our model
reproduces the distribution of stabilities of natural proteins in excellent
agreement with experiment. Our model predicts that species with high mutation
rates tend to have less stable proteins compared to species with low mutation rates.
| [
{
"created": "Mon, 9 Mar 2009 03:40:51 GMT",
"version": "v1"
}
] | 2009-03-10 | [
[
"Chen",
"Peiqiu",
""
],
[
"Shakhnovich",
"Eugene I.",
""
]
] | Here we study how mutations which change physical properties of cell proteins (stability) impact population survival and growth. In our model the genotype is represented as a set of N numbers, the folding free energies of the cell's N proteins. Mutations occur upon replication so that the stabilities of some proteins in daughter cells differ from those in the parent cell by random amounts drawn from the experimental distribution of mutational effects on protein stability. The genotype-phenotype relationship posits that unstable proteins confer a lethal phenotype to a cell and, in addition, that the cell's fitness (duplication rate) is proportional to the concentration of its folded proteins. Simulations reveal that lethal mutagenesis occurs at mutation rates close to 7 mutations per genome per replication for RNA viruses and about half of that for DNA-based organisms, in accord with earlier predictions from analytical theory and experiment. This number appears somewhat dependent on the number of genes in the organism and the natural death rate. Further, our model reproduces the distribution of stabilities of natural proteins in excellent agreement with experiment. Our model predicts that species with high mutation rates tend to have less stable proteins compared to species with low mutation rates.
1709.10043 | Oliver Sutton | Andrea Cangiani, Emmanuil H. Georgoulis, Andrew Yu. Morozov and Oliver
J. Sutton | Revealing new dynamical patterns in a reaction-diffusion model with
cyclic competition via a novel computational framework | 24 pages, 35 figures | null | 10.1098/rspa.2017.0608 | null | q-bio.PE math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding how patterns and travelling waves form in chemical and
biological reaction-diffusion models is an area which has been widely
researched, yet is still experiencing fast development. Surprisingly enough, we
still do not have a clear understanding about all possible types of dynamical
regimes in classical reaction-diffusion models such as Lotka-Volterra
competition models with spatial dependence. In this work, we demonstrate some
new types of wave propagation and pattern formation in a classical three
species cyclic competition model with spatial diffusion, which have been so far
missed in the literature. These new patterns are characterised by a high
regularity in space, but are different from patterns previously known to exist
in reaction-diffusion models, and may have important applications in improving
our understanding of biological pattern formation and invasion theory. Finding
these new patterns is made technically possible by using an automatic adaptive
finite element method driven by a novel a posteriori error estimate which is
proven to provide a reliable bound for the error of the numerical method. We
demonstrate how this numerical framework allows us to easily explore the
dynamical patterns both in two and three spatial dimensions.
| [
{
"created": "Thu, 28 Sep 2017 16:18:41 GMT",
"version": "v1"
}
] | 2018-05-24 | [
[
"Cangiani",
"Andrea",
""
],
[
"Georgoulis",
"Emmanuil H.",
""
],
[
"Morozov",
"Andrew Yu.",
""
],
[
"Sutton",
"Oliver J.",
""
]
] | Understanding how patterns and travelling waves form in chemical and biological reaction-diffusion models is an area which has been widely researched, yet is still experiencing fast development. Surprisingly enough, we still do not have a clear understanding about all possible types of dynamical regimes in classical reaction-diffusion models such as Lotka-Volterra competition models with spatial dependence. In this work, we demonstrate some new types of wave propagation and pattern formation in a classical three species cyclic competition model with spatial diffusion, which have been so far missed in the literature. These new patterns are characterised by a high regularity in space, but are different from patterns previously known to exist in reaction-diffusion models, and may have important applications in improving our understanding of biological pattern formation and invasion theory. Finding these new patterns is made technically possible by using an automatic adaptive finite element method driven by a novel a posteriori error estimate which is proven to provide a reliable bound for the error of the numerical method. We demonstrate how this numerical framework allows us to easily explore the dynamical patterns both in two and three spatial dimensions. |
q-bio/0404034 | Rukmini Kumar | Rukmini Kumar, Gilles Clermont, Yoram Vodovotz, Carson Chow | Dynamics of Acute Inflammation | 27 pages, 9 figures, Accepted by the Journal of Theoretical Biology | null | null | null | q-bio.TO | null | When the body is infected, it mounts an acute inflammatory response to rid
itself of the pathogens and restore health. Uncontrolled acute inflammation due
to infection is defined clinically as Sepsis and can culminate in organ failure
and death. We consider a three dimensional ordinary differential equation model
of inflammation consisting of a pathogen, and two inflammatory mediators. The
model reproduces the healthy outcome and, when key parameters are changed,
diverse negative outcomes depending on initial conditions, and we use these
results to suggest various therapeutic strategies. We suggest that the clinical
condition of sepsis can arise from several distinct physiological states, each
of which requires a different treatment approach. We analyze the various
bifurcations between the different outcomes.
| [
{
"created": "Fri, 23 Apr 2004 22:18:39 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Kumar",
"Rukmini",
""
],
[
"Clermont",
"Gilles",
""
],
[
"Vodovotz",
"Yoram",
""
],
[
"Chow",
"Carson",
""
]
] | When the body is infected, it mounts an acute inflammatory response to rid itself of the pathogens and restore health. Uncontrolled acute inflammation due to infection is defined clinically as Sepsis and can culminate in organ failure and death. We consider a three dimensional ordinary differential equation model of inflammation consisting of a pathogen, and two inflammatory mediators. The model reproduces the healthy outcome and, when key parameters are changed, diverse negative outcomes depending on initial conditions, and we use these results to suggest various therapeutic strategies. We suggest that the clinical condition of sepsis can arise from several distinct physiological states, each of which requires a different treatment approach. We analyze the various bifurcations between the different outcomes.
1212.2555 | Mark Lipson | Mark Lipson, Po-Ru Loh, Alex Levin, David Reich, Nick Patterson,
Bonnie Berger | Efficient moment-based inference of admixture parameters and sources of
gene flow | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent explosion in available genetic data has led to significant
advances in understanding the demographic histories of and relationships among
human populations. It is still a challenge, however, to infer reliable
parameter values for complicated models involving many populations. Here we
present MixMapper, an efficient, interactive method for constructing
phylogenetic trees including admixture events using single nucleotide
polymorphism (SNP) genotype data. MixMapper implements a novel two-phase
approach to admixture inference using moment statistics, first building an
unadmixed scaffold tree and then adding admixed populations by solving systems
of equations that express allele frequency divergences in terms of mixture
parameters. Importantly, all features of the model, including topology, sources
of gene flow, branch lengths, and mixture proportions, are optimized
automatically from the data and include estimates of statistical uncertainty.
MixMapper also uses a new method to express branch lengths in easily
interpretable drift units. We apply MixMapper to recently published data for
HGDP individuals genotyped on a SNP array designed especially for use in
population genetics studies, obtaining confident results for 30 populations, 20
of them admixed. Notably, we confirm a signal of ancient admixture in European
populations---including previously undetected admixture in Sardinians and
Basques---involving a proportion of 20--40% ancient northern Eurasian ancestry.
| [
{
"created": "Tue, 11 Dec 2012 17:47:21 GMT",
"version": "v1"
},
{
"created": "Sun, 7 Apr 2013 22:06:54 GMT",
"version": "v2"
}
] | 2013-04-09 | [
[
"Lipson",
"Mark",
""
],
[
"Loh",
"Po-Ru",
""
],
[
"Levin",
"Alex",
""
],
[
"Reich",
"David",
""
],
[
"Patterson",
"Nick",
""
],
[
"Berger",
"Bonnie",
""
]
] | The recent explosion in available genetic data has led to significant advances in understanding the demographic histories of and relationships among human populations. It is still a challenge, however, to infer reliable parameter values for complicated models involving many populations. Here we present MixMapper, an efficient, interactive method for constructing phylogenetic trees including admixture events using single nucleotide polymorphism (SNP) genotype data. MixMapper implements a novel two-phase approach to admixture inference using moment statistics, first building an unadmixed scaffold tree and then adding admixed populations by solving systems of equations that express allele frequency divergences in terms of mixture parameters. Importantly, all features of the model, including topology, sources of gene flow, branch lengths, and mixture proportions, are optimized automatically from the data and include estimates of statistical uncertainty. MixMapper also uses a new method to express branch lengths in easily interpretable drift units. We apply MixMapper to recently published data for HGDP individuals genotyped on a SNP array designed especially for use in population genetics studies, obtaining confident results for 30 populations, 20 of them admixed. Notably, we confirm a signal of ancient admixture in European populations---including previously undetected admixture in Sardinians and Basques---involving a proportion of 20--40% ancient northern Eurasian ancestry. |
1508.04624 | Alexander K. Vidybida | Alexander K. Vidybida | Fast {\large\it Cl-}type inhibitory neuron with delayed feedback has
non-markov output statistics | 22 pages, 2 figures, 43 Refs. arXiv admin note: text overlap with
arXiv:1503.03312 | null | 10.30970/jps.22.4801 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For a class of fast {\it Cl-}type inhibitory spiking neuron models with
delayed feedback fed with a Poisson stochastic process of excitatory impulses,
it is proven that the stream of output interspike intervals cannot be presented
as a Markov process of any order.
| [
{
"created": "Wed, 19 Aug 2015 12:51:59 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Dec 2017 11:19:46 GMT",
"version": "v2"
}
] | 2021-11-12 | [
[
"Vidybida",
"Alexander K.",
""
]
] | For a class of fast {\it Cl-}type inhibitory spiking neuron models with delayed feedback fed with a Poisson stochastic process of excitatory impulses, it is proven that the stream of output interspike intervals cannot be presented as a Markov process of any order. |
2103.15518 | Ayan Das | Sabiha Majumder, Ayan Das, Appilineni Kushal, Sumithra Sankaran,
Vishwesha Guttal | Demographic noise can promote abrupt transitions in ecological systems | null | null | 10.1140/epjs/s11734-021-00184-z | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Strong positive feedback is considered a necessary condition to observe
abrupt shifts of ecosystems. A few previous studies have shown that demographic
noise -- arising from the probabilistic and discrete nature of birth and death
processes in finite systems -- makes the transitions gradual or continuous. In
this paper, we show that demographic noise may, in fact, promote abrupt
transitions in systems that would otherwise show continuous transitions. We
present our methods and results in a tutorial-like format. We begin with a
simple spatially-explicit individual-based model with local births and deaths
influenced by positive feedback processes. We then derive a stochastic
differential equation that describes how local probabilistic rules scale to
stochastic population dynamics. The infinite-size well-mixed limit of this SDE
for our model is consistent with mean-field models of abrupt regime-shifts.
Finally, we analytically show that as a consequence of demographic noise,
finite-size systems can undergo abrupt shifts even with weak positive
interactions. Numerical simulations of our spatially-explicit model confirm
this prediction. Thus, we predict that small-sized populations and ecosystems
may undergo abrupt collapse even when larger systems - with the same
microscopic interactions - show a smooth response to environmental stress.
| [
{
"created": "Mon, 29 Mar 2021 11:41:35 GMT",
"version": "v1"
}
] | 2021-06-22 | [
[
"Majumder",
"Sabiha",
""
],
[
"Das",
"Ayan",
""
],
[
"Kushal",
"Appilineni",
""
],
[
"Sankaran",
"Sumithra",
""
],
[
"Guttal",
"Vishwesha",
""
]
] | Strong positive feedback is considered a necessary condition to observe abrupt shifts of ecosystems. A few previous studies have shown that demographic noise -- arising from the probabilistic and discrete nature of birth and death processes in finite systems -- makes the transitions gradual or continuous. In this paper, we show that demographic noise may, in fact, promote abrupt transitions in systems that would otherwise show continuous transitions. We present our methods and results in a tutorial-like format. We begin with a simple spatially-explicit individual-based model with local births and deaths influenced by positive feedback processes. We then derive a stochastic differential equation that describes how local probabilistic rules scale to stochastic population dynamics. The infinite-size well-mixed limit of this SDE for our model is consistent with mean-field models of abrupt regime-shifts. Finally, we analytically show that as a consequence of demographic noise, finite-size systems can undergo abrupt shifts even with weak positive interactions. Numerical simulations of our spatially-explicit model confirm this prediction. Thus, we predict that small-sized populations and ecosystems may undergo abrupt collapse even when larger systems - with the same microscopic interactions - show a smooth response to environmental stress. |
1810.12779 | Irina Semenova | A.V. Belashov, A.A. Zhikhoreva, T.N. Belyaeva, E.S. Kornilova, A.V.
Salova, I.V. Semenova, O.S. Vasyutinskii | Quantitative assessment of changes in cellular morphology at
photodynamic treatment in vitro by means of digital holographic microscopy | 14 pages, 3 figures | Biomedical Optics Express 10 10 (2019) 4975-4986 | 10.1364/BOE.10.004975 | null | q-bio.CB physics.optics q-bio.QM q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Changes in morphological characteristics of cells from two cultured cancer
cell lines, HeLa and A549, induced by photodynamic treatment with Radachlorin
photosensitizer have been monitored using digital holographic microscopy. The
observed dose-dependent post-treatment dynamics of phase shift variations
demonstrated several scenarios of cell death. In particular the phase shift
increase at low doses can be associated with apoptosis while its decrease at
high doses can be associated with necrosis. Two cell types were shown to be
differently responsive to treatment at the same doses. Although the sequence of
death scenarios with increasing irradiation dose was demonstrated to be the
same, each specific scenario was realized at substantially different doses.
Results obtained by holographic microscopy were confirmed by confocal
fluorescence microscopy with the commonly used test assay.
| [
{
"created": "Tue, 30 Oct 2018 14:43:53 GMT",
"version": "v1"
}
] | 2020-01-23 | [
[
"Belashov",
"A. V.",
""
],
[
"Zhikhoreva",
"A. A.",
""
],
[
"Belyaeva",
"T. N.",
""
],
[
"Kornilova",
"E. S.",
""
],
[
"Salova",
"A. V.",
""
],
[
"Semenova",
"I. V.",
""
],
[
"Vasyutinskii",
"O. S.",
""
]
] | Changes in morphological characteristics of cells from two cultured cancer cell lines, HeLa and A549, induced by photodynamic treatment with Radachlorin photosensitizer have been monitored using digital holographic microscopy. The observed dose-dependent post-treatment dynamics of phase shift variations demonstrated several scenarios of cell death. In particular the phase shift increase at low doses can be associated with apoptosis while its decrease at high doses can be associated with necrosis. Two cell types were shown to be differently responsive to treatment at the same doses. Although the sequence of death scenarios with increasing irradiation dose was demonstrated to be the same, each specific scenario was realized at substantially different doses. Results obtained by holographic microscopy were confirmed by confocal fluorescence microscopy with the commonly used test assay. |
1910.01297 | Takuya Sato | Takuya U. Sato and Kunihiko Kaneko | Evolutionary dimension reduction in phenotypic space | Correct the subfigure numbers of Fig.4 and 5 | Phys. Rev. Research 2, 013197 (2020) | 10.1103/PhysRevResearch.2.013197 | null | q-bio.PE q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In general, cellular phenotypes, as measured by concentrations of cellular
components, involve large degrees of freedom. However, recent measurement has
demonstrated that phenotypic changes resulting from adaptation and evolution in
response to environmental changes are effectively restricted to a
low-dimensional subspace. Thus, uncovering the origin and nature of such a
drastic dimension reduction is crucial to understanding the general
characteristics of biological adaptation and evolution. Herein, we first
formulated the dimension reduction in terms of dynamical systems theory:
considering the steady growth state of cells, the reduction is represented by
the separation of a few large singular values of the inverse Jacobian matrix
around a fixed point. We then examined this dimension reduction by numerical
evolution of cells consisting of thousands of chemicals whose concentrations
determine phenotype. As a result of the evolution, phenotypic changes due to
mutations and external perturbations were found to be mainly restricted to a
one-dimensional subspace. One singular value of the inverse Jacobian matrix at
a fixed point of concentrations was significantly larger than the others. The
major phenotypic changes due to mutations and external perturbations occur
along the corresponding left-singular vector, which leads to phenotypic
constraint, and fitness dominantly changes in the same direction. Once such
phenotypic constraint is acquired, phenotypic evolution to a novel environment
takes advantage of this restricted phenotypic direction. This results in the
convergence of phenotypic pathways across genetically different strains, as is
experimentally observed, while accelerating further evolution.
| [
{
"created": "Thu, 3 Oct 2019 04:20:06 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Oct 2019 08:30:58 GMT",
"version": "v2"
}
] | 2020-03-04 | [
[
"Sato",
"Takuya U.",
""
],
[
"Kaneko",
"Kunihiko",
""
]
] | In general, cellular phenotypes, as measured by concentrations of cellular components, involve large degrees of freedom. However, recent measurement has demonstrated that phenotypic changes resulting from adaptation and evolution in response to environmental changes are effectively restricted to a low-dimensional subspace. Thus, uncovering the origin and nature of such a drastic dimension reduction is crucial to understanding the general characteristics of biological adaptation and evolution. Herein, we first formulated the dimension reduction in terms of dynamical systems theory: considering the steady growth state of cells, the reduction is represented by the separation of a few large singular values of the inverse Jacobian matrix around a fixed point. We then examined this dimension reduction by numerical evolution of cells consisting of thousands of chemicals whose concentrations determine phenotype. As a result of the evolution, phenotypic changes due to mutations and external perturbations were found to be mainly restricted to a one-dimensional subspace. One singular value of the inverse Jacobian matrix at a fixed point of concentrations was significantly larger than the others. The major phenotypic changes due to mutations and external perturbations occur along the corresponding left-singular vector, which leads to phenotypic constraint, and fitness dominantly changes in the same direction. Once such phenotypic constraint is acquired, phenotypic evolution to a novel environment takes advantage of this restricted phenotypic direction. This results in the convergence of phenotypic pathways across genetically different strains, as is experimentally observed, while accelerating further evolution. |
2209.01497 | Christopher Rohlfs | Chris Rohlfs | A descriptive analysis of olfactory sensation and memory in Drosophila
and its relation to artificial neural networks | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | This article provides a background and descriptive analysis of insect memory
and the coding of olfactory sensation in Drosophila, presenting graphs and
summary statistics from a large dataset of neurons and synapses that was
recently made publicly available and also discussing findings from the existing
empirical literature. Some general principles from Drosophila olfaction are
discussed as they apply to the design of analogous systems in artificial neural
networks: (1) the networks used for coding are shallow; (2) the level of
connectedness varies widely across neurons in the same layer; (3) much
communication is between neurons in the same layer; (4) in most olfactory
learning, the manner in which sensory inputs are represented in stored memory
is largely fixed, and the learning process involves developing positive or
negative associations with existing categories of inputs.
| [
{
"created": "Sat, 3 Sep 2022 20:40:23 GMT",
"version": "v1"
}
] | 2022-09-07 | [
[
"Rohlfs",
"Chris",
""
]
] | This article provides a background and descriptive analysis of insect memory and the coding of olfactory sensation in Drosophila, presenting graphs and summary statistics from a large dataset of neurons and synapses that was recently made publicly available and also discussing findings from the existing empirical literature. Some general principles from Drosophila olfaction are discussed as they apply to the design of analogous systems in artificial neural networks: (1) the networks used for coding are shallow; (2) the level of connectedness varies widely across neurons in the same layer; (3) much communication is between neurons in the same layer; (4) in most olfactory learning, the manner in which sensory inputs are represented in stored memory is largely fixed, and the learning process involves developing positive or negative associations with existing categories of inputs. |
1310.2264 | Stephen D. H. Hsu | Shashaank Vattikuti, James J. Lee, Christopher C. Chang, Stephen D. H.
Hsu, Carson C. Chow | Application of compressed sensing to genome wide association studies and
genomic selection | 30 pages, 11 figures. Version to appear in journal GigaScience | null | null | null | q-bio.GN stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that the signal-processing paradigm known as compressed sensing (CS)
is applicable to genome-wide association studies (GWAS) and genomic selection
(GS). The aim of GWAS is to isolate trait-associated loci, whereas GS attempts
to predict the phenotypic values of new individuals on the basis of training
data. CS addresses a problem common to both endeavors, namely that the number
of genotyped markers often greatly exceeds the sample size. We show using CS
methods and theory that all loci of nonzero effect can be identified (selected)
using an efficient algorithm, provided that they are sufficiently few in number
(sparse) relative to sample size. For heritability h2 = 1, there is a sharp
phase transition to complete selection as the sample size is increased. For
heritability values less than one, complete selection can still occur although
the transition is smoothed. The transition boundary is only weakly dependent on
the total number of genotyped markers. The crossing of a transition boundary
provides an objective means to determine when true effects are being recovered;
we discuss practical methods for detecting the boundary. For h2 = 0.5, we find
that a sample size that is thirty times the number of nonzero loci is
sufficient for good recovery.
| [
{
"created": "Tue, 8 Oct 2013 20:16:27 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Jan 2014 01:47:46 GMT",
"version": "v2"
},
{
"created": "Sun, 11 May 2014 21:49:48 GMT",
"version": "v3"
}
] | 2014-05-13 | [
[
"Vattikuti",
"Shashaank",
""
],
[
"Lee",
"James J.",
""
],
[
"Chang",
"Christopher C.",
""
],
[
"Hsu",
"Stephen D. H.",
""
],
[
"Chow",
"Carson C.",
""
]
] | We show that the signal-processing paradigm known as compressed sensing (CS) is applicable to genome-wide association studies (GWAS) and genomic selection (GS). The aim of GWAS is to isolate trait-associated loci, whereas GS attempts to predict the phenotypic values of new individuals on the basis of training data. CS addresses a problem common to both endeavors, namely that the number of genotyped markers often greatly exceeds the sample size. We show using CS methods and theory that all loci of nonzero effect can be identified (selected) using an efficient algorithm, provided that they are sufficiently few in number (sparse) relative to sample size. For heritability h2 = 1, there is a sharp phase transition to complete selection as the sample size is increased. For heritability values less than one, complete selection can still occur although the transition is smoothed. The transition boundary is only weakly dependent on the total number of genotyped markers. The crossing of a transition boundary provides an objective means to determine when true effects are being recovered; we discuss practical methods for detecting the boundary. For h2 = 0.5, we find that a sample size that is thirty times the number of nonzero loci is sufficient for good recovery. |
1410.0608 | Saba Emrani | Saba Emrani and Hamid Krim | Robust Detection of Periodic Patterns in Gene Expression Microarray Data
using Topological Signal Analysis | 4 pages, 5 figures | null | null | null | q-bio.QM math.AT q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a new approach for analyzing gene expression data
that builds on topological characteristics of time series. Our goal is to
identify cell cycle regulated genes in a microarray dataset. We construct a
point cloud out of time series using delay coordinate embeddings. Persistent
homology is utilized to analyse the topology of the point cloud for detection
of periodicity. This novel technique is accurate and robust to noise, missing
data points and varying sampling intervals. Our experiments using the Yeast
Saccharomyces cerevisiae dataset substantiate the capabilities of the proposed
method.
| [
{
"created": "Thu, 2 Oct 2014 17:10:44 GMT",
"version": "v1"
}
] | 2014-10-03 | [
[
"Emrani",
"Saba",
""
],
[
"Krim",
"Hamid",
""
]
] | In this paper, we present a new approach for analyzing gene expression data that builds on topological characteristics of time series. Our goal is to identify cell cycle regulated genes in a microarray dataset. We construct a point cloud out of time series using delay coordinate embeddings. Persistent homology is utilized to analyse the topology of the point cloud for detection of periodicity. This novel technique is accurate and robust to noise, missing data points and varying sampling intervals. Our experiments using the Yeast Saccharomyces cerevisiae dataset substantiate the capabilities of the proposed method.
0902.1881 | Indrani Bose | Subhasis Banerjee and Indrani Bose | Functional characteristics of a double positive feedback loop coupled
with autorepression | 9 pages 14 figures | Phys. Biol. 5 (2008) 046008 (9pp) | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the functional characteristics of a two-gene motif consisting of a
double positive feedback loop and an autoregulatory negative feedback loop. The
motif appears in the gene regulatory network controlling the functional
activity of pancreatic $\beta$-cells. The model exhibits bistability and
hysteresis in appropriate parameter regions. The two stable steady states
correspond to low (OFF state) and high (ON state) protein levels respectively.
Using a deterministic approach, we show that the region of bistability
increases in extent when the copy number of one of the genes is reduced from
two to one. The negative feedback loop has the effect of reducing the size of
the bistable region. Loss of a gene copy, brought about by mutations, hampers
the normal functioning of the $\beta$-cells giving rise to the genetic
disorder, maturity-onset diabetes of the young (MODY). The diabetic phenotype
makes its appearance when a sizable fraction of the $\beta$-cells is in the OFF
state. Using stochastic simulation techniques, we show that, on reduction of
the gene copy number, there is a transition from the monostable ON to the ON
state in the bistable region of the parameter space. Fluctuations in the
protein levels, arising due to the stochastic nature of gene expression, can
give rise to transitions between the ON and OFF states. We show that as the
strength of autorepression increases, the ON$\to$OFF state transitions become
less probable whereas the reverse transitions are more probable. The
implications of the results in the context of the occurrence of MODY are
pointed out.
| [
{
"created": "Wed, 11 Feb 2009 13:03:03 GMT",
"version": "v1"
}
] | 2009-02-12 | [
[
"Banerjee",
"Subhasis",
""
],
[
"Bose",
"Indrani",
""
]
] | We study the functional characteristics of a two-gene motif consisting of a double positive feedback loop and an autoregulatory negative feedback loop. The motif appears in the gene regulatory network controlling the functional activity of pancreatic $\beta$-cells. The model exhibits bistability and hysteresis in appropriate parameter regions. The two stable steady states correspond to low (OFF state) and high (ON state) protein levels respectively. Using a deterministic approach, we show that the region of bistability increases in extent when the copy number of one of the genes is reduced from two to one. The negative feedback loop has the effect of reducing the size of the bistable region. Loss of a gene copy, brought about by mutations, hampers the normal functioning of the $\beta$-cells giving rise to the genetic disorder, maturity-onset diabetes of the young (MODY). The diabetic phenotype makes its appearance when a sizable fraction of the $\beta$-cells is in the OFF state. Using stochastic simulation techniques, we show that, on reduction of the gene copy number, there is a transition from the monostable ON to the ON state in the bistable region of the parameter space. Fluctuations in the protein levels, arising due to the stochastic nature of gene expression, can give rise to transitions between the ON and OFF states. We show that as the strength of autorepression increases, the ON$\to$OFF state transitions become less probable whereas the reverse transitions are more probable. The implications of the results in the context of the occurrence of MODY are pointed out.
2011.01795 | Chaoqing Xu | Chaoqing Xu, Guodao Sun, Ronghua Liang, and Xiufang Xu | Vector Field Streamline Clustering Framework for Brain Fiber Tract
Segmentation | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Brain fiber tracts are widely used in studying brain diseases, which may lead
to a better understanding of how disease affects the brain. The segmentation of
brain fiber tracts has assumed enormous importance in disease analysis. In this
paper, we propose a novel vector field streamline clustering framework for
brain fiber tract segmentation. Brain fiber tracts are first expressed in a
vector field and compressed using the streamline simplification algorithm.
After streamline normalization and regular-polyhedron projection,
high-dimensional features of each fiber tract are computed and fed to the IDEC
clustering algorithm. We also provide qualitative and quantitative evaluations
of the IDEC clustering method and QB clustering method. Our clustering results
of the brain fiber tracts help researchers gain perception of the brain
structure. This work has the potential to automatically create a robust fiber
bundle template that can effectively segment brain fiber tracts while enabling
consistent anatomical tract identification.
| [
{
"created": "Tue, 3 Nov 2020 15:40:13 GMT",
"version": "v1"
}
] | 2020-11-04 | [
[
"Xu",
"Chaoqing",
""
],
[
"Sun",
"Guodao",
""
],
[
"Liang",
"Ronghua",
""
],
[
"Xu",
"Xiufang",
""
]
] | Brain fiber tracts are widely used in studying brain diseases, which may lead to a better understanding of how disease affects the brain. The segmentation of brain fiber tracts has assumed enormous importance in disease analysis. In this paper, we propose a novel vector field streamline clustering framework for brain fiber tract segmentation. Brain fiber tracts are first expressed in a vector field and compressed using the streamline simplification algorithm. After streamline normalization and regular-polyhedron projection, high-dimensional features of each fiber tract are computed and fed to the IDEC clustering algorithm. We also provide qualitative and quantitative evaluations of the IDEC clustering method and QB clustering method. Our clustering results of the brain fiber tracts help researchers gain perception of the brain structure. This work has the potential to automatically create a robust fiber bundle template that can effectively segment brain fiber tracts while enabling consistent anatomical tract identification.
1209.3029 | Graham Coop | Torsten G\"unther and Graham Coop | Robust identification of local adaptation from allele frequencies | 27 pages, 7 figures | null | null | null | q-bio.PE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Comparing allele frequencies among populations that differ in environment has
long been a tool for detecting loci involved in local adaptation. However, such
analyses are complicated by an imperfect knowledge of population allele
frequencies and neutral correlations of allele frequencies among populations
due to shared population history and gene flow. Here we develop a set of
methods to robustly test for unusual allele frequency patterns, and
correlations between environmental variables and allele frequencies while
accounting for these complications based on a Bayesian model previously
implemented in the software Bayenv. Using this model, we calculate a set of
`standardized allele frequencies' that allows investigators to apply tests of
their choice to multiple populations, while accounting for sampling and
covariance due to population history. We illustrate this first by showing that
these standardized frequencies can be used to calculate powerful tests to
detect non-parametric correlations with environmental variables, which are also
less prone to spurious results due to outlier populations. We then demonstrate
how these standardized allele frequencies can be used to construct a test to
detect SNPs that deviate strongly from neutral population structure. This test
is conceptually related to FST but should be more powerful as we account for
population history. We also extend the model to next-generation sequencing of
population pools, which is a cost-efficient way to estimate population allele
frequencies, but it implies an additional level of sampling noise. The utility
of these methods is demonstrated in simulations and by re-analyzing human SNP
data from the HGDP populations. An implementation of our method will be
available from http://gcbias.org.
| [
{
"created": "Thu, 13 Sep 2012 20:27:09 GMT",
"version": "v1"
}
] | 2012-09-17 | [
[
"Günther",
"Torsten",
""
],
[
"Coop",
"Graham",
""
]
] | Comparing allele frequencies among populations that differ in environment has long been a tool for detecting loci involved in local adaptation. However, such analyses are complicated by an imperfect knowledge of population allele frequencies and neutral correlations of allele frequencies among populations due to shared population history and gene flow. Here we develop a set of methods to robustly test for unusual allele frequency patterns, and correlations between environmental variables and allele frequencies while accounting for these complications based on a Bayesian model previously implemented in the software Bayenv. Using this model, we calculate a set of `standardized allele frequencies' that allows investigators to apply tests of their choice to multiple populations, while accounting for sampling and covariance due to population history. We illustrate this first by showing that these standardized frequencies can be used to calculate powerful tests to detect non-parametric correlations with environmental variables, which are also less prone to spurious results due to outlier populations. We then demonstrate how these standardized allele frequencies can be used to construct a test to detect SNPs that deviate strongly from neutral population structure. This test is conceptually related to FST but should be more powerful as we account for population history. We also extend the model to next-generation sequencing of population pools, which is a cost-efficient way to estimate population allele frequencies, but it implies an additional level of sampling noise. The utility of these methods is demonstrated in simulations and by re-analyzing human SNP data from the HGDP populations. An implementation of our method will be available from http://gcbias.org. |
1310.8592 | Andrea Rocco | Andrea Rocco, Andrzej M. Kierzek, Johnjoe McFadden | Slow protein fluctuations explain the emergence of growth phenotypes and
persistence in clonal bacterial populations | 26 pages, 7 figures | PLOS ONE 8(1), e54272 (2013) | 10.1371/journal.pone.0054272 | null | q-bio.SC q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most challenging problems in microbiology is to understand how a
small fraction of microbes that resists killing by antibiotics can emerge in a
population of genetically identical cells, the phenomenon known as persistence
or drug tolerance. Its characteristic signature is the biphasic kill curve,
whereby microbes exposed to a bactericidal agent are initially killed very
rapidly but then much more slowly. Here we relate this problem to the more
general problem of understanding the emergence of distinct growth phenotypes in
clonal populations. We address the problem mathematically by adopting the
framework of the phenomenon of so-called weak ergodicity breaking, well known
in dynamical physical systems, which we extend to the biological context. We
show analytically and by direct stochastic simulations that distinct growth
phenotypes can emerge as a consequence of slow-down of stochastic fluctuations
in the expression of a gene controlling growth rate. In the regime of fast gene
transcription, the system is ergodic, the growth rate distribution is unimodal,
and accounts for one phenotype only. In contrast, at slow transcription and
fast translation, weakly non-ergodic components emerge, the population
distribution of growth rates becomes bimodal, and two distinct growth
phenotypes are identified. When coupled to the well-established growth rate
dependence of antibiotic killing, this model describes the observed fast and
slow killing phases, and reproduces much of the phenomenology of bacterial
persistence. The model has major implications for efforts to develop control
strategies for persistent infections.
| [
{
"created": "Thu, 31 Oct 2013 17:07:08 GMT",
"version": "v1"
}
] | 2015-06-17 | [
[
"Rocco",
"Andrea",
""
],
[
"Kierzek",
"Andrzej M.",
""
],
[
"McFadden",
"Johnjoe",
""
]
] | One of the most challenging problems in microbiology is to understand how a small fraction of microbes that resists killing by antibiotics can emerge in a population of genetically identical cells, the phenomenon known as persistence or drug tolerance. Its characteristic signature is the biphasic kill curve, whereby microbes exposed to a bactericidal agent are initially killed very rapidly but then much more slowly. Here we relate this problem to the more general problem of understanding the emergence of distinct growth phenotypes in clonal populations. We address the problem mathematically by adopting the framework of the phenomenon of so-called weak ergodicity breaking, well known in dynamical physical systems, which we extend to the biological context. We show analytically and by direct stochastic simulations that distinct growth phenotypes can emerge as a consequence of slow-down of stochastic fluctuations in the expression of a gene controlling growth rate. In the regime of fast gene transcription, the system is ergodic, the growth rate distribution is unimodal, and accounts for one phenotype only. In contrast, at slow transcription and fast translation, weakly non-ergodic components emerge, the population distribution of growth rates becomes bimodal, and two distinct growth phenotypes are identified. When coupled to the well-established growth rate dependence of antibiotic killing, this model describes the observed fast and slow killing phases, and reproduces much of the phenomenology of bacterial persistence. The model has major implications for efforts to develop control strategies for persistent infections. |
q-bio/0611041 | Michel Yamagishi | Michel E. Beleza Yamagishi and Alex Itiro Shimabukuro | Nucleotide Frequencies in Human Genome and Fibonacci Numbers | 12 pages, 2 figures | Bulletin of Mathematical Biology, 70, 643-653,2008 | 10.1007/s11538-007-9261-6 | null | q-bio.OT | null | This work presents a mathematical model that establishes an interesting
connection between nucleotide frequencies in human single-stranded DNA and the
famous Fibonacci numbers. The model relies on two assumptions. First,
Chargaff's second parity rule should be valid, and, second, the nucleotide
frequencies should approach limit values when the number of bases is
sufficiently large. Under these two hypotheses, it is possible to predict the
human nucleotide frequencies with accuracy. It is noteworthy that the
predicted values are solutions of an optimization problem, which is commonplace
in many of nature's phenomena.
| [
{
"created": "Mon, 13 Nov 2006 12:24:20 GMT",
"version": "v1"
}
] | 2008-03-19 | [
[
"Yamagishi",
"Michel E. Beleza",
""
],
[
"Shimabukuro",
"Alex Itiro",
""
]
] | This work presents a mathematical model that establishes an interesting connection between nucleotide frequencies in human single-stranded DNA and the famous Fibonacci numbers. The model relies on two assumptions. First, Chargaff's second parity rule should be valid, and, second, the nucleotide frequencies should approach limit values when the number of bases is sufficiently large. Under these two hypotheses, it is possible to predict the human nucleotide frequencies with accuracy. It is noteworthy that the predicted values are solutions of an optimization problem, which is commonplace in many of nature's phenomena.
1512.04590 | Jeffrey West | Jeffrey West, Zaki Hasnain, Paul Macklin, Paul K. Newton | An evolutionary model of tumor cell kinetics and the emergence of
molecular heterogeneity driving Gompertzian growth | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A cell-molecular based evolutionary model of tumor development driven by a
stochastic Moran birth-death process is developed, where each cell carries
molecular information represented by a four-digit binary string, used to
differentiate cells into 16 molecular types. The binary string value determines
cell fitness, with lower fit cells (e.g. 0000) defined as healthy phenotypes,
and higher fit cells (e.g. 1111) defined as malignant phenotypes. At each step
of the birth-death process, the two phenotypic sub-populations compete in a
prisoner's dilemma evolutionary game with healthy cells (cooperators) competing
with cancer cells (defectors). Fitness and birth-death rates are defined via
the prisoner's dilemma payoff matrix. Cells are able to undergo two types of
stochastic point mutations passed to the daughter cell's binary string during
birth: passenger mutations (conferring no fitness advantage) and driver
mutations (increasing cell fitness). Dynamic phylogenetic trees show clonal
expansions of cancer cell sub-populations from an initial malignant cell. The
tumor growth equation states that the growth rate is proportional to the
logarithm of cellular heterogeneity, here measured using the Shannon entropy of
the distribution of binary sequences in the tumor cell population. Nonconstant
tumor growth rates (exponential growth during the sub-clinical range of the tumor
and subsequent slowed growth during tumor saturation) are associated with a
Gompertzian growth curve, an emergent feature of the model explained here using
simple statistical mechanics principles related to the degree of functional
coupling of the cell states. Dosing strategies at early stage development,
mid-stage (clinical stage), and late stage development of the tumor are
compared, showing therapy is most effective during the sub-clinical stage,
before the cancer subpopulation is selected for growth.
| [
{
"created": "Mon, 14 Dec 2015 22:33:15 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Dec 2015 06:26:23 GMT",
"version": "v2"
},
{
"created": "Tue, 5 Jan 2016 00:03:32 GMT",
"version": "v3"
},
{
"created": "Mon, 8 Feb 2016 19:15:54 GMT",
"version": "v4"
}
] | 2016-02-09 | [
[
"West",
"Jeffrey",
""
],
[
"Hasnain",
"Zaki",
""
],
[
"Macklin",
"Paul",
""
],
[
"Newton",
"Paul K.",
""
]
] | A cell-molecular based evolutionary model of tumor development driven by a stochastic Moran birth-death process is developed, where each cell carries molecular information represented by a four-digit binary string, used to differentiate cells into 16 molecular types. The binary string value determines cell fitness, with lower fit cells (e.g. 0000) defined as healthy phenotypes, and higher fit cells (e.g. 1111) defined as malignant phenotypes. At each step of the birth-death process, the two phenotypic sub-populations compete in a prisoner's dilemma evolutionary game with healthy cells (cooperators) competing with cancer cells (defectors). Fitness and birth-death rates are defined via the prisoner's dilemma payoff matrix. Cells are able to undergo two types of stochastic point mutations passed to the daughter cell's binary string during birth: passenger mutations (conferring no fitness advantage) and driver mutations (increasing cell fitness). Dynamic phylogenetic trees show clonal expansions of cancer cell sub-populations from an initial malignant cell. The tumor growth equation states that the growth rate is proportional to the logarithm of cellular heterogeneity, here measured using the Shannon entropy of the distribution of binary sequences in the tumor cell population. Nonconstant tumor growth rates (exponential growth during the sub-clinical range of the tumor and subsequent slowed growth during tumor saturation) are associated with a Gompertzian growth curve, an emergent feature of the model explained here using simple statistical mechanics principles related to the degree of functional coupling of the cell states. Dosing strategies at early stage development, mid-stage (clinical stage), and late stage development of the tumor are compared, showing therapy is most effective during the sub-clinical stage, before the cancer subpopulation is selected for growth.
2112.06613 | Paul Richter | Paul Richter | Large-scale GPU-based network analysis of the human T-cell receptor
repertoire | 15 pages, 7 figures, preprint for a scientific article | null | null | null | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the structure of the human T-cell receptor repertoire is a
crucial precondition to understand the ability of the immune system to
recognize and respond to antigens. T-cells are often compared via the
complementarity determining region 3 (CDR3) of their respective T-cell receptor
beta chains. Nevertheless, previous studies often simply compared if CDR3beta
sequences were equal, while network theory studies were usually limited to
several thousand sequences due to the high computational effort of constructing
the network. To overcome that hurdle, we introduce the GPU-based algorithm
TCR-NET to construct large-scale CDR3beta similarity networks using
model-generated and empirical sequence data with up to 800,000 CDR3beta
sequences on a normal computer for the first time. Using network analysis
methods we study the structural properties of these networks and conclude that
(i) the fraction of public TCRs depends on the size of the TCR repertoire,
along with the exact (not unified) definition of "public" sequences, (ii) the
TCR network is assortative with the average neighbor degree being proportional
to the square root of the degree of a node and (iii) the repertoire is robust
against losses of TCRs. Moreover, we analyze the networks of antigen-specific
TCRs for different antigen families and find differing clustering coefficients
and assortativities. TCR-NET offers better access to assess large-scale TCR
repertoire networks, opening the possibility to quantify their structure and
quantitatively distinguish their ability to react to antigens, which we
anticipate to become a useful tool in a time of increasingly large amounts of
repertoire sequencing data becoming available.
| [
{
"created": "Mon, 13 Dec 2021 12:49:15 GMT",
"version": "v1"
}
] | 2021-12-14 | [
[
"Richter",
"Paul",
""
]
] | Understanding the structure of the human T-cell receptor repertoire is a crucial precondition to understand the ability of the immune system to recognize and respond to antigens. T-cells are often compared via the complementarity determining region 3 (CDR3) of their respective T-cell receptor beta chains. Nevertheless, previous studies often simply compared if CDR3beta sequences were equal, while network theory studies were usually limited to several thousand sequences due to the high computational effort of constructing the network. To overcome that hurdle, we introduce the GPU-based algorithm TCR-NET to construct large-scale CDR3beta similarity networks using model-generated and empirical sequence data with up to 800,000 CDR3beta sequences on a normal computer for the first time. Using network analysis methods we study the structural properties of these networks and conclude that (i) the fraction of public TCRs depends on the size of the TCR repertoire, along with the exact (not unified) definition of "public" sequences, (ii) the TCR network is assortative with the average neighbor degree being proportional to the square root of the degree of a node and (iii) the repertoire is robust against losses of TCRs. Moreover, we analyze the networks of antigen-specific TCRs for different antigen families and find differing clustering coefficients and assortativities. TCR-NET offers better access to assess large-scale TCR repertoire networks, opening the possibility to quantify their structure and quantitatively distinguish their ability to react to antigens, which we anticipate to become a useful tool in a time of increasingly large amounts of repertoire sequencing data becoming available.
2207.10571 | Alexandre Benatti | Alexandre Benatti, Henrique F. de Arruda, Luciano da F. Costa | Neuromorphic Networks as Revealed by Features Similarity | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The study of neuronal morphology is important not only for its potential
relationship with neuronal dynamics, but also as a means to classify diverse
types of cells and compare them among species, organs, and conditions. In the
present work, we approach this interesting problem by using the concept of
coincidence similarity, as well as a respectively derived method for mapping
datasets into networks. The coincidence similarity has been found to possess
specific properties that enable enhanced performance (selectivity and
sensitivity) in several pattern recognition tasks.
Several combinations of 20 morphological features were considered, and the
respective networks were obtained by maximizing the literal modularity (in a
supervised manner) with respect to the involved parameters. Well-separated
groups were obtained that provide a rich representation of the main similarity
interrelationships between the 735 considered neuronal cells. A sequence of
network configurations illustrating the progressive merging between cells and
groups was also obtained by varying one of the coincidence parameters.
| [
{
"created": "Mon, 13 Jun 2022 17:12:42 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Mar 2023 16:59:55 GMT",
"version": "v2"
},
{
"created": "Fri, 8 Mar 2024 13:03:23 GMT",
"version": "v3"
}
] | 2024-03-11 | [
[
"Benatti",
"Alexandre",
""
],
[
"de Arruda",
"Henrique F.",
""
],
[
"Costa",
"Luciano da F.",
""
]
] | The study of neuronal morphology is important not only for its potential relationship with neuronal dynamics, but also as a means to classify diverse types of cells and compare them among species, organs, and conditions. In the present work, we approach this interesting problem by using the concept of coincidence similarity, as well as a respectively derived method for mapping datasets into networks. The coincidence similarity has been found to possess specific properties that enable enhanced performance (selectivity and sensitivity) in several pattern recognition tasks. Several combinations of 20 morphological features were considered, and the respective networks were obtained by maximizing the literal modularity (in a supervised manner) with respect to the involved parameters. Well-separated groups were obtained that provide a rich representation of the main similarity interrelationships between the 735 considered neuronal cells. A sequence of network configurations illustrating the progressive merging between cells and groups was also obtained by varying one of the coincidence parameters.
1610.02309 | Henning Dickten | Henning Dickten, Christian E. Elger, Klaus Lehnertz | Measuring directed interactions using cellular neural networks with
complex connection topologies | null | R. Tetzlaff and C. E. Elger and K. Lehnertz (2013), Recent
Advances in Predicting and Preventing Epileptic Seizures, page 242-252,
Singapore, World Scientific. ISBN: 978-981-4525-34-3 | null | null | q-bio.NC nlin.CD nlin.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We advance our approach of analyzing the dynamics of interacting complex
systems with the nonlinear dynamics of interacting nonlinear elements. We
replace the widely used lattice-like connection topology of cellular neural
networks (CNN) by complex topologies that include both short- and long-ranged
connections. With an exemplary time-resolved analysis of asymmetric nonlinear
interdependences between the seizure-generating area and its immediate
surroundings, we provide the first evidence that complex CNN connection topologies
allow for faster network optimization together with improved approximation
accuracy of directed interactions.
| [
{
"created": "Thu, 6 Oct 2016 14:33:17 GMT",
"version": "v1"
}
] | 2016-10-10 | [
[
"Dickten",
"Henning",
""
],
[
"Elger",
"Christian E.",
""
],
[
"Lehnertz",
"Klaus",
""
]
] | We advance our approach of analyzing the dynamics of interacting complex systems with the nonlinear dynamics of interacting nonlinear elements. We replace the widely used lattice-like connection topology of cellular neural networks (CNN) by complex topologies that include both short- and long-ranged connections. With an exemplary time-resolved analysis of asymmetric nonlinear interdependences between the seizure-generating area and its immediate surroundings, we provide the first evidence that complex CNN connection topologies allow for faster network optimization together with improved approximation accuracy of directed interactions.
2108.13764 | Ibrahim Muhammad I. A. M | Ibrahim Mohammed | Virtual screening of Microalgal compounds as potential inhibitors of
Type 2 Human Transmembrane serine protease (TMPRSS2) | null | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | More than 198 million cases of severe acute respiratory syndrome coronavirus
2 (SARS-CoV-2) have been reported, resulting in no fewer than 4.2 million
deaths globally. The rapid spread of the disease coupled with the lack of
specific registered drugs for its treatment poses a great challenge that
necessitates the development of therapeutic agents from a variety of sources. In
this study, we employed an in-silico method to screen natural compounds with a
view to identifying inhibitors of the human transmembrane protease serine type 2
(TMPRSS2). The activity of this enzyme is essential for viral access into the
host cells via angiotensin-converting enzyme 2 (ACE-2). Inhibiting the activity
of this enzyme is therefore highly crucial for preventing viral fusion with
ACE-2, thus curbing SARS-CoV-2 infectivity. A 3D model of TMPRSS2 was
constructed using I-TASSER, refined by GalaxyRefine, validated by the Ramachandran
plot server, and its overall model quality was checked by ProSA. 95 natural
compounds from microalgae were virtually screened against the modeled protein,
which led to the identification of the 17 best leads capable of binding to TMPRSS2
with a binding score comparable to, greater than, or slightly lower than that of the
standard inhibitor (camostat). Physicochemical properties, ADME (absorption,
distribution, metabolism, excretion) and toxicity analysis revealed the top 4
compounds, including the reference drug, with good pharmacokinetic and
pharmacodynamic profiles. These compounds bind to the same pocket of the
protein with a binding energy of -7.8 kcal/mol, -7.6 kcal/mol, -7.4 kcal/mol
and -7.4 kcal/mol each for camostat, apigenin, catechin and epicatechin
respectively. This study shed light on the potential of microalgal compounds
against SARS-CoV-2. In vivo and invitro studies are required to developed
SARS-CoV-2 drugs based on the structures of the compounds identified in this
study.
| [
{
"created": "Tue, 31 Aug 2021 11:27:42 GMT",
"version": "v1"
}
] | 2021-09-01 | [
[
"Mohammed",
"Ibrahim",
""
]
] | More than 198 million cases of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) have been reported, resulting in no fewer than 4.2 million deaths globally. The rapid spread of the disease, coupled with the lack of specific registered drugs for its treatment, poses a great challenge that necessitates the development of therapeutic agents from a variety of sources. In this study, we employed an in-silico method to screen natural compounds with a view to identifying inhibitors of the human transmembrane protease serine type 2 (TMPRSS2). The activity of this enzyme is essential for viral access into the host cells via angiotensin-converting enzyme 2 (ACE-2). Inhibiting the activity of this enzyme is therefore highly crucial for preventing viral fusion with ACE-2, thus blocking SARS-CoV-2 infectivity. A 3D model of TMPRSS2 was constructed using I-TASSER, refined by GalaxyRefine, and validated by the Ramachandran plot server, and the overall model quality was checked by ProSA. 95 natural compounds from microalgae were virtually screened against the modeled protein, leading to the identification of 17 best leads capable of binding to TMPRSS2 with binding scores comparable to, greater than, or slightly lower than that of the standard inhibitor (camostat). Physicochemical properties, ADME (absorption, distribution, metabolism, excretion) and toxicity analysis revealed the top 4 compounds, including the reference drug, with good pharmacokinetic and pharmacodynamic profiles. These compounds bind to the same pocket of the protein with binding energies of -7.8 kcal/mol, -7.6 kcal/mol, -7.4 kcal/mol and -7.4 kcal/mol for camostat, apigenin, catechin and epicatechin respectively. This study sheds light on the potential of microalgal compounds against SARS-CoV-2. In vivo and in vitro studies are required to develop SARS-CoV-2 drugs based on the structures of the compounds identified in this study. |
1212.5761 | Nabanita Dasgupta-Schubert | O.S. Castillo, E.M. Zaragoza, C. J. Alvarado, M. G. Barrera and N.
Dasgupta-Schubert | Foliar area measurement by a new technique that utilizes the
conservative nature of fresh leaf surface density | Submitted to Journal of Agricultural Engineering | International Agrophysics Vol. 28, Issue 4, pp. 413-421, 2014 | 10.2478/intag-2014-0032 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Leaf area, LA, is a plant biometric index important to agroforestry and crop
production. Previous works have demonstrated the conservativeness of the
inverse of the product of the fresh leaf density and thickness, the so-called
Hughes constant, K. We use this fact to develop LAMM, an absolute method of LA
measurement, i.e. no regression fits or prior calibrations with planimeters.
Nor does it require drying the leaves. The concept involves the in situ
determination of K using geometrical shapes and their weights obtained from a
subset of fresh leaves of the set whose areas are desired. Subsequently, the
LAs, at any desired stratification level, are derived by utilizing K and the
previously measured masses of the fresh leaves. The concept was first tested in
the simulated ideal case of complete planarity and uniform thickness by using
plastic film covered card-paper sheets. Next the species-specific
conservativeness of K over individual leaf zones and different leaf types from
leaves of plants from two species, Mandevilla splendens and Spathiphyllum
wallisii, was quantitatively validated. Using the global average K values, the
LAs of these and additional plants were obtained. LAMM was found to be a rapid,
simple, economical technique with accuracies, as measured for the geometrical
shapes, that were comparable to those obtained by the planimetric method that
utilizes digital image analysis, DIA. For the leaves themselves, there were no
statistically significant differences between the LAs measured by LAMM and by
the DIA and the linear correlation between the two methods was excellent.
| [
{
"created": "Sun, 23 Dec 2012 04:56:59 GMT",
"version": "v1"
}
] | 2014-11-06 | [
[
"Castillo",
"O. S.",
""
],
[
"Zaragoza",
"E. M.",
""
],
[
"Alvarado",
"C. J.",
""
],
[
"Barrera",
"M. G.",
""
],
[
"Dasgupta-Schubert",
"N.",
""
]
] | Leaf area, LA, is a plant biometric index important to agroforestry and crop production. Previous works have demonstrated the conservativeness of the inverse of the product of the fresh leaf density and thickness, the so-called Hughes constant, K. We use this fact to develop LAMM, an absolute method of LA measurement, i.e. no regression fits or prior calibrations with planimeters. Nor does it require drying the leaves. The concept involves the in situ determination of K using geometrical shapes and their weights obtained from a subset of fresh leaves of the set whose areas are desired. Subsequently, the LAs, at any desired stratification level, are derived by utilizing K and the previously measured masses of the fresh leaves. The concept was first tested in the simulated ideal case of complete planarity and uniform thickness by using plastic film covered card-paper sheets. Next the species-specific conservativeness of K over individual leaf zones and different leaf types from leaves of plants from two species, Mandevilla splendens and Spathiphyllum wallisii, was quantitatively validated. Using the global average K values, the LAs of these and additional plants were obtained. LAMM was found to be a rapid, simple, economical technique with accuracies, as measured for the geometrical shapes, that were comparable to those obtained by the planimetric method that utilizes digital image analysis, DIA. For the leaves themselves, there were no statistically significant differences between the LAs measured by LAMM and by the DIA and the linear correlation between the two methods was excellent. |
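The LAMM concept above reduces to a small computation: estimate the Hughes constant K (fresh-leaf area per unit fresh mass) from geometrical shapes cut from a subset of leaves, then multiply each leaf's fresh mass by K. A minimal Python sketch with hypothetical calibration numbers (2 cm x 2 cm squares punched from fresh leaves), not the paper's actual data:

```python
import statistics

def estimate_hughes_constant(shape_areas_cm2, shape_masses_g):
    """Estimate K (area per unit fresh mass, cm^2/g) from geometrical
    shapes cut from a subset of fresh leaves, as in the LAMM concept."""
    ks = [a / m for a, m in zip(shape_areas_cm2, shape_masses_g)]
    return statistics.mean(ks)

def leaf_areas(leaf_masses_g, k_cm2_per_g):
    """Derive leaf areas (cm^2) from fresh masses using the calibrated K."""
    return [m * k_cm2_per_g for m in leaf_masses_g]

# hypothetical calibration: three 4 cm^2 squares and their fresh masses
k = estimate_hughes_constant([4.0, 4.0, 4.0], [0.080, 0.082, 0.078])
# areas of two whole leaves from their fresh masses alone
areas = leaf_areas([1.20, 0.95], k)
```

The conservativeness of K within a species is what licenses applying one global average K to all leaves of the set.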
2111.10471 | Yiming Li | Yiming Li, Sanjiv J. Shah, Donna Arnett, Ryan Irvin and Yuan Luo | SNPs Filtered by Allele Frequency Improve the Prediction of Hypertension
Subtypes | Submitted to the 12th International Workshop on Biomedical and Health
Informatics (BHI 2021) | null | null | null | q-bio.QM cs.LG q-bio.PE stat.AP | http://creativecommons.org/licenses/by/4.0/ | Hypertension is the leading global cause of cardiovascular disease and
premature death. Distinct hypertension subtypes may vary in their prognoses and
require different treatments. An individual's risk for hypertension is
determined by genetic and environmental factors as well as their interactions.
In this work, we studied 911 African Americans and 1,171 European Americans in
the Hypertension Genetic Epidemiology Network (HyperGEN) cohort. We built
hypertension subtype classification models using both environmental variables
and sets of genetic features selected based on different criteria. The fitted
prediction models provided insights into the genetic landscape of hypertension
subtypes, which may aid personalized diagnosis and treatment of hypertension in
the future.
| [
{
"created": "Fri, 19 Nov 2021 23:01:47 GMT",
"version": "v1"
}
] | 2021-11-23 | [
[
"Li",
"Yiming",
""
],
[
"Shah",
"Sanjiv J.",
""
],
[
"Arnett",
"Donna",
""
],
[
"Irvin",
"Ryan",
""
],
[
"Luo",
"Yuan",
""
]
] | Hypertension is the leading global cause of cardiovascular disease and premature death. Distinct hypertension subtypes may vary in their prognoses and require different treatments. An individual's risk for hypertension is determined by genetic and environmental factors as well as their interactions. In this work, we studied 911 African Americans and 1,171 European Americans in the Hypertension Genetic Epidemiology Network (HyperGEN) cohort. We built hypertension subtype classification models using both environmental variables and sets of genetic features selected based on different criteria. The fitted prediction models provided insights into the genetic landscape of hypertension subtypes, which may aid personalized diagnosis and treatment of hypertension in the future. |
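The allele-frequency-based SNP filtering used as a feature-selection criterion above can be sketched in a few lines. A minimal, hypothetical Python example (genotypes coded as 0/1/2 alternate-allele counts; the SNP names, data, and 5% minor-allele-frequency threshold are illustrative, not the study's actual criteria):

```python
def minor_allele_frequency(genotypes):
    """Genotypes coded as 0/1/2 copies of the alternate allele;
    missing calls (None) are ignored."""
    observed = [g for g in genotypes if g is not None]
    alt_freq = sum(observed) / (2 * len(observed))
    return min(alt_freq, 1.0 - alt_freq)

def filter_snps_by_maf(snp_genotypes, threshold=0.05):
    """Keep only SNPs whose minor allele frequency meets the threshold."""
    return {snp: g for snp, g in snp_genotypes.items()
            if minor_allele_frequency(g) >= threshold}

# hypothetical genotype vectors for two SNPs across study subjects
snps = {
    "rs0001": [0, 1, 2, 1, 0, 1],                    # common variant, kept
    "rs0002": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],  # rare variant, dropped
}
kept = filter_snps_by_maf(snps, threshold=0.05)
```

Filtering out rare variants in this way keeps the genetic feature set small enough for the downstream classification models.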
1509.03718 | Rajesh Karmakar | Rajesh Karmakar | Two different modes of oscillation in a gene transcription regulatory
network with interlinked positive and negative feedback loops | 10 pages, 7 figures | International Journal of Modern Physics C, Vol.27, No.5, (2016)
1650056 | 10.1142/S012918311650056X | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the oscillatory behaviour of a gene regulatory network with
interlinked positive and negative feedback loops. Frequency and amplitude are
two important properties of oscillation. The studied network produces two
different modes of oscillation. In one mode (mode 1) the frequency remains
constant over a wide range of amplitude, and in the other mode (mode 2) the
amplitude of oscillation remains constant over a wide range of frequency. Our
study reproduces both features of oscillations in a single gene regulatory
network and shows that the negative plus positive feedback loops in a gene
regulatory network offer an additional advantage. We identified the key
parameters/variables responsible for the different modes of oscillation. The
network is flexible in switching between different modes by appropriately
choosing the required parameters/variables.
| [
{
"created": "Sat, 12 Sep 2015 07:34:20 GMT",
"version": "v1"
}
] | 2015-12-21 | [
[
"Karmakar",
"Rajesh",
""
]
] | We study the oscillatory behaviour of a gene regulatory network with interlinked positive and negative feedback loops. Frequency and amplitude are two important properties of oscillation. The studied network produces two different modes of oscillation. In one mode (mode 1) the frequency remains constant over a wide range of amplitude, and in the other mode (mode 2) the amplitude of oscillation remains constant over a wide range of frequency. Our study reproduces both features of oscillations in a single gene regulatory network and shows that the negative plus positive feedback loops in a gene regulatory network offer an additional advantage. We identified the key parameters/variables responsible for the different modes of oscillation. The network is flexible in switching between different modes by appropriately choosing the required parameters/variables. |
0802.1892 | Dmitry Tsigankov | Dmitry Tsigankov and Alexei Koulakov | Sperry versus Hebb: Topographic mapping in Isl2/EphA3 mutant mice | 13 pages, 6 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In wild-type mice axons of retinal ganglion cells establish topographically
precise projection to the superior colliculus of the midbrain. This implies
that axons of neighboring retinal ganglion cells project to the proximal
locations in the target. The precision of topographic projection is a result of
combined effects of molecular labels, such as Eph receptors and ephrins, and
correlated electric activity. In the Isl2/EphA3 mutant mice the expression
levels of molecular labels are changed. As a result, the topographic projection
is rewired so that the neighborhood relationships between retinal cell axons
are disrupted. Here we argue that the effects of correlated activity presenting
themselves in the form of Hebbian learning rules can facilitate the restoration
of the topographic connectivity even when the molecular labels carry
conflicting instructions. This occurs because the correlations in electric
activity carry information about retinal cells' spatial location that is
independent of molecular labels. We therefore argue that experiments in
Isl2/EphA3 knock-in mice directly test the interaction between effects of
molecular labels and correlated activity during the development of neural
connectivity.
| [
{
"created": "Wed, 13 Feb 2008 19:48:15 GMT",
"version": "v1"
}
] | 2008-02-14 | [
[
"Tsigankov",
"Dmitry",
""
],
[
"Koulakov",
"Alexei",
""
]
] | In wild-type mice axons of retinal ganglion cells establish topographically precise projection to the superior colliculus of the midbrain. This implies that axons of neighboring retinal ganglion cells project to the proximal locations in the target. The precision of topographic projection is a result of combined effects of molecular labels, such as Eph receptors and ephrins, and correlated electric activity. In the Isl2/EphA3 mutant mice the expression levels of molecular labels are changed. As a result, the topographic projection is rewired so that the neighborhood relationships between retinal cell axons are disrupted. Here we argue that the effects of correlated activity presenting themselves in the form of Hebbian learning rules can facilitate the restoration of the topographic connectivity even when the molecular labels carry conflicting instructions. This occurs because the correlations in electric activity carry information about retinal cells' spatial location that is independent of molecular labels. We therefore argue that experiments in Isl2/EphA3 knock-in mice directly test the interaction between effects of molecular labels and correlated activity during the development of neural connectivity. |
2103.04964 | Giovanni Bussi | Mattia Bernetti, Kathleen B. Hall, and Giovanni Bussi | Reweighting of molecular simulations with explicit-solvent SAXS
restraints elucidates ion-dependent RNA ensembles | Supporting information included in ancillary files. This version
includes corrections implemented after receiving feedbacks from the community | Nucleic Acids Res. 49, e84 (2021) | 10.1093/nar/gkab459 | null | q-bio.BM physics.bio-ph physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Small-angle X-ray scattering (SAXS) experiments are increasingly used to
probe RNA structure. A number of \emph{forward models} that relate measured
SAXS intensities and structural features, and that are suitable to model either
explicit-solvent effects or solute dynamics, have been proposed in the past
years. Here we introduce an approach that integrates atomistic molecular
dynamics simulations and SAXS experiments to reconstruct RNA structural
ensembles while simultaneously accounting for both RNA conformational dynamics
and explicit-solvent effects. Our protocol exploits SAXS pure-solute forward
models and enhanced sampling methods to sample a heterogeneous ensemble of
structures, with no information towards the experiments provided on-the-fly.
The generated structural ensemble is then reweighted through the maximum
entropy principle so as to match reference SAXS experimental data at multiple
ionic conditions. Importantly, accurate explicit-solvent forward models are
used at this reweighting stage. We apply this framework to the
GTPase-associated center, a relevant RNA molecule involved in protein
translation, in order to elucidate its ion-dependent conformational ensembles.
We show that (a) both solvent and dynamics are crucial to reproduce
experimental SAXS data and (b) the resulting dynamical ensembles contain an
ion-dependent fraction of extended structures.
| [
{
"created": "Mon, 8 Mar 2021 18:38:15 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Mar 2021 20:48:00 GMT",
"version": "v2"
}
] | 2022-07-26 | [
[
"Bernetti",
"Mattia",
""
],
[
"Hall",
"Kathleen B.",
""
],
[
"Bussi",
"Giovanni",
""
]
] | Small-angle X-ray scattering (SAXS) experiments are increasingly used to probe RNA structure. A number of \emph{forward models} that relate measured SAXS intensities and structural features, and that are suitable to model either explicit-solvent effects or solute dynamics, have been proposed in the past years. Here we introduce an approach that integrates atomistic molecular dynamics simulations and SAXS experiments to reconstruct RNA structural ensembles while simultaneously accounting for both RNA conformational dynamics and explicit-solvent effects. Our protocol exploits SAXS pure-solute forward models and enhanced sampling methods to sample a heterogeneous ensemble of structures, with no information towards the experiments provided on-the-fly. The generated structural ensemble is then reweighted through the maximum entropy principle so as to match reference SAXS experimental data at multiple ionic conditions. Importantly, accurate explicit-solvent forward models are used at this reweighting stage. We apply this framework to the GTPase-associated center, a relevant RNA molecule involved in protein translation, in order to elucidate its ion-dependent conformational ensembles. We show that (a) both solvent and dynamics are crucial to reproduce experimental SAXS data and (b) the resulting dynamical ensembles contain an ion-dependent fraction of extended structures. |
2304.00970 | Andy Liaw | Yuting Xu, Andy Liaw, Robert P. Sheridan, Vladimir Svetnik | Development and Evaluation of Conformal Prediction Methods for QSAR | null | null | null | null | q-bio.BM cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The quantitative structure-activity relationship (QSAR) regression model is a
commonly used technique for predicting biological activities of compounds using
their molecular descriptors. Predictions from QSAR models can help, for
example, to optimize molecular structure; prioritize compounds for further
experimental testing; and estimate their toxicity. In addition to the accurate
estimation of the activity, it is highly desirable to obtain some estimate of
the uncertainty associated with the prediction, e.g., calculate a prediction
interval (PI) containing the true molecular activity with a pre-specified
probability, say 70%, 90% or 95%. The challenge is that most machine learning
(ML) algorithms that achieve superior predictive performance require some
add-on methods for estimating uncertainty of their prediction. The development
of these algorithms is an active area of research by statistical and ML
communities but their implementation for QSAR modeling remains limited.
Conformal prediction (CP) is a promising approach. It is agnostic to the
prediction algorithm and can produce valid prediction intervals under some weak
assumptions on the data distribution. We propose computationally efficient CP
algorithms tailored to the most advanced ML models, including Deep Neural
Networks and Gradient Boosting Machines. The validity and efficiency of
proposed conformal predictors are demonstrated on a diverse collection of QSAR
datasets as well as simulation studies.
| [
{
"created": "Mon, 3 Apr 2023 13:41:09 GMT",
"version": "v1"
}
] | 2023-04-04 | [
[
"Xu",
"Yuting",
""
],
[
"Liaw",
"Andy",
""
],
[
"Sheridan",
"Robert P.",
""
],
[
"Svetnik",
"Vladimir",
""
]
] | The quantitative structure-activity relationship (QSAR) regression model is a commonly used technique for predicting biological activities of compounds using their molecular descriptors. Predictions from QSAR models can help, for example, to optimize molecular structure; prioritize compounds for further experimental testing; and estimate their toxicity. In addition to the accurate estimation of the activity, it is highly desirable to obtain some estimate of the uncertainty associated with the prediction, e.g., calculate a prediction interval (PI) containing the true molecular activity with a pre-specified probability, say 70%, 90% or 95%. The challenge is that most machine learning (ML) algorithms that achieve superior predictive performance require some add-on methods for estimating uncertainty of their prediction. The development of these algorithms is an active area of research by statistical and ML communities but their implementation for QSAR modeling remains limited. Conformal prediction (CP) is a promising approach. It is agnostic to the prediction algorithm and can produce valid prediction intervals under some weak assumptions on the data distribution. We propose computationally efficient CP algorithms tailored to the most advanced ML models, including Deep Neural Networks and Gradient Boosting Machines. The validity and efficiency of proposed conformal predictors are demonstrated on a diverse collection of QSAR datasets as well as simulation studies. |
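The basic split (inductive) conformal procedure that this line of work builds on can be sketched in a few lines: compute absolute residuals on a held-out calibration set and use their finite-sample-corrected quantile as the interval half-width. A minimal pure-Python sketch with a toy pre-fit predictor standing in for a QSAR model (the function names and data are illustrative, not the paper's algorithms):

```python
import math
import random

def split_conformal_intervals(predict, X_cal, y_cal, X_test, alpha=0.10):
    """Split conformal prediction: absolute residuals on a calibration set
    give marginal prediction intervals with coverage >= 1 - alpha
    (assuming exchangeable data)."""
    residuals = sorted(abs(y - predict(x)) for x, y in zip(X_cal, y_cal))
    n = len(residuals)
    # finite-sample-corrected rank: ceil((n + 1) * (1 - alpha))
    rank = min(n, math.ceil((n + 1) * (1 - alpha)))
    q = residuals[rank - 1]
    return [(predict(x) - q, predict(x) + q) for x in X_test]

random.seed(0)
predict = lambda x: 2.0 * x  # toy pre-fit "QSAR model" on a 1-D descriptor
X_cal = [random.uniform(0.0, 1.0) for _ in range(100)]
y_cal = [2.0 * x + random.gauss(0.0, 0.1) for x in X_cal]
intervals = split_conformal_intervals(predict, X_cal, y_cal, [0.25, 0.5],
                                      alpha=0.10)
```

This baseline produces constant-width intervals regardless of the underlying model; the locally adaptive variants developed for models like deep networks refine it by normalizing residuals with a per-compound difficulty estimate.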
1311.1815 | Brian Williams Dr | Brian G. Williams and Eleanor Gouws | Ending AIDS in South Africa: How long will it take? How much will it
cost? | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | South Africa has more people infected with HIV but, by providing access to
anti-retroviral therapy (ART), has kept more people alive than any other
country. The effectiveness, availability and affordability of potent
anti-retroviral therapy (ART) make it possible to contemplate ending the
epidemic of HIV/AIDS. We consider what would have happened without ART, the
impact of the current roll-out of ART, what might be possible if early
treatment becomes available to all, and what could have happened if ART had
been provided much earlier in the epidemic. In 2013 the provision of ART has
reduced the prevalence of HIV from an estimated 15% to 9% among adults not on
ART, the annual incidence from 2% to 0.9%, and the AIDS related deaths from
0.9% to 0.3% p.a. saving 1.5 million lives and USD727M. Regular testing and
universal access to ART could reduce the prevalence among adults not on ART in
2023 to 0.06%, annual incidence to 0.05%, and eliminate AIDS deaths. Cumulative
costs between 2013 and 2023 would increase by USD692M, only 4% of the total
cost of USD17Bn. If universal testing and early treatment had started in 1998
the prevalence of HIV among adults not on ART in 2013 would have fallen to
0.03%, annual incidence to 0.03%, and saved 2.5 million lives. The cost up to
2013 would have increased by USD18Bn but this would have been cost effective at
US$7,200 per life saved. Future surveys of HIV among women attending ante-natal
clinics should include testing women for the presence of anti-retroviral drugs,
measuring their viral loads, and using appropriate assays for estimating HIV
incidence. These data would make it possible to develop better and more
reliable estimates of the current state of the epidemic, the success of the
current ART programme, levels of viral load suppression for those on ART and
the incidence of infection.
| [
{
"created": "Thu, 7 Nov 2013 18:57:20 GMT",
"version": "v1"
}
] | 2013-11-11 | [
[
"Williams",
"Brian G.",
""
],
[
"Gouws",
"Eleanor",
""
]
] | South Africa has more people infected with HIV but, by providing access to anti-retroviral therapy (ART), has kept more people alive than any other country. The effectiveness, availability and affordability of potent anti-retroviral therapy (ART) make it possible to contemplate ending the epidemic of HIV/AIDS. We consider what would have happened without ART, the impact of the current roll-out of ART, what might be possible if early treatment becomes available to all, and what could have happened if ART had been provided much earlier in the epidemic. In 2013 the provision of ART has reduced the prevalence of HIV from an estimated 15% to 9% among adults not on ART, the annual incidence from 2% to 0.9%, and the AIDS related deaths from 0.9% to 0.3% p.a. saving 1.5 million lives and USD727M. Regular testing and universal access to ART could reduce the prevalence among adults not on ART in 2023 to 0.06%, annual incidence to 0.05%, and eliminate AIDS deaths. Cumulative costs between 2013 and 2023 would increase by USD692M, only 4% of the total cost of USD17Bn. If universal testing and early treatment had started in 1998 the prevalence of HIV among adults not on ART in 2013 would have fallen to 0.03%, annual incidence to 0.03%, and saved 2.5 million lives. The cost up to 2013 would have increased by USD18Bn but this would have been cost effective at US$7,200 per life saved. Future surveys of HIV among women attending ante-natal clinics should include testing women for the presence of anti-retroviral drugs, measuring their viral loads, and using appropriate assays for estimating HIV incidence. These data would make it possible to develop better and more reliable estimates of the current state of the epidemic, the success of the current ART programme, levels of viral load suppression for those on ART and the incidence of infection. |
1802.08894 | Gabriele Partel | Gabriele Partel (1), Giorgia Milli (2) and Carolina W\"ahlby ((1)
Centre for Image Analysis, Uppsala University, Sweden, (2) Politecnico di
Torino, Italy) | Improving Recall of In Situ Sequencing by Self-Learned Features and a
Graphical Model | 4 pages, 3 figures | null | null | null | q-bio.QM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image-based sequencing of mRNA makes it possible to see where in a tissue
sample a given gene is active, and thus discern large numbers of different cell
types in parallel. This is crucial for gaining a better understanding of tissue
development and disease such as cancer. Signals are collected over multiple
staining and imaging cycles, and signal density together with noise makes
signal decoding challenging. Previous approaches have led to low signal recall
in efforts to maintain high sensitivity. We propose an approach where signal
candidates are generously included, and true-signal probability at the cycle
level is self-learned using a convolutional neural network. Signal candidates
and probability predictions are thereafter fed into a graphical model searching
for signal candidates across sequencing cycles. The graphical model combines
intensity, probability and spatial distance to find optimal paths representing
decoded signal sequences. We evaluate our approach in relation to
state-of-the-art, and show that we increase recall by $27\%$ at maintained
sensitivity. Furthermore, visual examination shows that most of the now
correctly resolved signals were previously lost due to high signal density.
Thus, the proposed approach has the potential to significantly improve further
analysis of spatial statistics in in situ sequencing experiments.
| [
{
"created": "Sat, 24 Feb 2018 18:53:56 GMT",
"version": "v1"
}
] | 2018-02-27 | [
[
"Partel",
"Gabriele",
""
],
[
"Milli",
"Giorgia",
""
],
[
"Wählby",
"Carolina",
""
]
] | Image-based sequencing of mRNA makes it possible to see where in a tissue sample a given gene is active, and thus discern large numbers of different cell types in parallel. This is crucial for gaining a better understanding of tissue development and disease such as cancer. Signals are collected over multiple staining and imaging cycles, and signal density together with noise makes signal decoding challenging. Previous approaches have led to low signal recall in efforts to maintain high sensitivity. We propose an approach where signal candidates are generously included, and true-signal probability at the cycle level is self-learned using a convolutional neural network. Signal candidates and probability predictions are thereafter fed into a graphical model searching for signal candidates across sequencing cycles. The graphical model combines intensity, probability and spatial distance to find optimal paths representing decoded signal sequences. We evaluate our approach in relation to state-of-the-art, and show that we increase recall by $27\%$ at maintained sensitivity. Furthermore, visual examination shows that most of the now correctly resolved signals were previously lost due to high signal density. Thus, the proposed approach has the potential to significantly improve further analysis of spatial statistics in in situ sequencing experiments. |
2301.12638 | Carina Curto | Carina Curto and Katherine Morrison | Graph rules for recurrent neural network dynamics: extended version | 32 pages (double-column), 25 figures, 2 tables | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This is an extended version of our survey article, "Graph rules for recurrent
neural network dynamics," to appear in the April 2023 edition of the Notices of
the AMS. It includes additional results, derivations, figures, references, and
a set of open questions.
| [
{
"created": "Mon, 30 Jan 2023 03:43:54 GMT",
"version": "v1"
}
] | 2023-01-31 | [
[
"Curto",
"Carina",
""
],
[
"Morrison",
"Katherine",
""
]
] | This is an extended version of our survey article, "Graph rules for recurrent neural network dynamics," to appear in the April 2023 edition of the Notices of the AMS. It includes additional results, derivations, figures, references, and a set of open questions. |
2112.12957 | Hong-Li Zeng | Hong-Li Zeng, Yue Liu, Vito Dichio and Erik Aurell | Temporal epistasis inference from more than 3,500,000 SARS-CoV-2 Genomic
Sequences | 15 pages, 9 figures | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | We use Direct Coupling Analysis (DCA) to determine epistatic interactions
between loci of variability of the SARS-CoV-2 virus, segmenting genomes by
month of sampling. We use full-length, high-quality genomes from the GISAID
repository up to October 2021, in total over 3,500,000 genomes. We find that
DCA terms are more stable over time than correlations, but nevertheless change
over time as mutations disappear from the global population or reach fixation.
Correlations are enriched for phylogenetic effects, and in particular for
statistical dependencies at short genomic distances, while DCA brings out links
at longer genomic distance. We discuss the validity of a DCA analysis under
these conditions in terms of a transient Quasi-Linkage Equilibrium state. We
identify putative epistatic interaction mutations involving loci in Spike.
| [
{
"created": "Fri, 24 Dec 2021 05:57:17 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Jun 2022 01:25:56 GMT",
"version": "v2"
}
] | 2022-06-07 | [
[
"Zeng",
"Hong-Li",
""
],
[
"Liu",
"Yue",
""
],
[
"Dichio",
"Vito",
""
],
[
"Aurell",
"Erik",
""
]
] | We use Direct Coupling Analysis (DCA) to determine epistatic interactions between loci of variability of the SARS-CoV-2 virus, segmenting genomes by month of sampling. We use full-length, high-quality genomes from the GISAID repository up to October 2021, in total over 3,500,000 genomes. We find that DCA terms are more stable over time than correlations, but nevertheless change over time as mutations disappear from the global population or reach fixation. Correlations are enriched for phylogenetic effects, and in particular for statistical dependencies at short genomic distances, while DCA brings out links at longer genomic distance. We discuss the validity of a DCA analysis under these conditions in terms of a transient Quasi-Linkage Equilibrium state. We identify putative epistatic interaction mutations involving loci in Spike. |
2405.06718 | Abdul Samad | Areesha Naveed, Ayesha Haidar, Rameen Atique, Arshi Saeed, Bushra
Anwar, Ambreen Talib, Uzma Bilal, Javeria Sharif, Ayesha Nadeem, Sania Tariq,
Ayesha Muazzam, Abdul Samad | Vector-borne threats: Sustainable approaches to their diagnosis and
treatment | 4 Figure, 1 table | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-sa/4.0/ | Arbovirus is a viral, life-threatening disease worldwide and continues to be
a significant problem while the world is dealing with the major coronavirus
(COVID-19) pandemic. Vectors, mostly mosquitoes and ticks, transmit this
disease. Dengue fever, chikungunya, and Zika viruses are the major threats
because of their high incidence, public health burden, and clinically
significant disease spectrum. These vector-borne diseases cause one-fourth of
annual deaths, leading to various infectious diseases. The arbovirus represents
eight different families and 14 genera; most viruses belong to the family
Bunyaviridae, and some also belong to Togaviridae, Reoviridae, and
Flaviviridae. Arboviral disease was first isolated in tropical and
subtropical regions of South America and Africa and has high significance
because of suitable environmental conditions for virus transmission and vector
expansion. Its transmission cycle ranges from simple to highly complex. DENV is
the most prevalent, causes febrile illness, and is transmitted in 128
different countries. CHIKV infection can be asymptomatic, and its complications
include nephritis, arthritis, myelitis, and acute encephalopathy. Of
ZIKV-infected people, 80% are asymptomatic; symptoms may include rashes,
myalgia, fever, headache, and conjunctivitis. No vaccine for DENV is clinically
available, even though it is currently the primary arboviral infection
worldwide. Arbovirus diseases continue to pose a global health problem despite
continuing efforts. This review article will overview major
arbovirus diseases and their diagnosis, treatment, and prevention strategies.
| [
{
"created": "Fri, 10 May 2024 02:51:26 GMT",
"version": "v1"
},
{
"created": "Tue, 14 May 2024 01:56:36 GMT",
"version": "v2"
}
] | 2024-05-15 | [
[
"Naveed",
"Areesha",
""
],
[
"Haidar",
"Ayesha",
""
],
[
"Atique",
"Rameen",
""
],
[
"Saeed",
"Arshi",
""
],
[
"Anwar",
"Bushra",
""
],
[
"Talib",
"Ambreen",
""
],
[
"Bilal",
"Uzma",
""
],
[
"Sharif",
"Javeria",
""
],
[
"Nadeem",
"Ayesha",
""
],
[
"Tariq",
"Sania",
""
],
[
"Muazzam",
"Ayesha",
""
],
[
"Samad",
"Abdul",
""
]
] | Arbovirus is a serious, life-threatening disease worldwide and continues to be a significant problem while the world is dealing with the major coronavirus (COVID-19) pandemic. Vectors, mostly mosquitoes and ticks, transmit this disease. Dengue fever, chikungunya, and Zika viruses are the major threats because of their high incidence, public health burden, and clinically significant disease spectrum. These vector-borne diseases cause one-fourth of the annual deaths attributable to infectious diseases. Arboviruses represent eight different families and 14 genera; most viruses belong to the family Bunyaviridae, and some also belong to Togaviridae, Reoviridae, and Flaviviridae. Arboviral disease was first isolated in tropical and subtropical regions of South America and Africa and has high significance because of suitable environmental conditions for virus transmission and vector expansion. Its transmission cycle ranges from simple to highly complex. DENV is the most prevalent, causes febrile illness, and is transmitted in 128 different countries. CHIKV infection can be asymptomatic, and its complications include nephritis, arthritis, myelitis, and acute encephalopathy. Of ZIKV-infected people, 80% are asymptomatic; symptoms may include rashes, myalgia, fever, headache, and conjunctivitis. No vaccine for DENV is clinically available, even though it is currently the primary arboviral infection worldwide. Arbovirus diseases continue to pose a global health problem despite continuing efforts. This review article will overview major arbovirus diseases and their diagnosis, treatment, and prevention strategies. |
2312.11256 | Perrine Paul-Gilloteaux | Chong Zhang, Alban Gaignard, Matus Kalas, Florian Levet, Felipe
Delestro, Joakim Lindblad, Natasa Sladoje, Laure Plantard, Alain Latour,
Robert Haase, Gabriel Martins, Paula Sampaio, Leandro Scholz, NEUBIAS
taggers, S\'ebastien Tosi, Kota Miura, Julien Colombelli, Perrine
Paul-Gilloteaux | Bio-Image Informatics Index BIII: A unique database of image analysis
tools and workflows for and by the bioimaging community | 5 pages of main article including one figure and references, followed
by the list of taggers, the description of the ontologies in use, and some
examples of usage | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | Bio image analysis has recently become a keystone of biological research
but biologists tend to get lost in the plethora of available software and in
how to adapt available tools to their own image analysis problems. We present BIII,
BioImage Informatic Index (www.biii.eu), the result of the first large
community effort to bridge the communities of algorithm and software
developers, bioimage analysts and biologists, in the form of a web-based
knowledge database crowdsourced by these communities. Software tools (> 1300),
image databases for benchmarking (>20) and training materials (>70) for bio
image analysis are referenced and curated following standards constructed by
the community, thereby reaching a broader audience. Software tools are
organized as a full analysis protocol (workflow), a specific brick (component)
used to construct a workflow, or a software platform or library (collection). They are
described using Edam Bio Imaging, which is iteratively defined using this
website. All entries are exposed following FAIR principles and accessible for
other usage.
| [
{
"created": "Mon, 18 Dec 2023 14:53:38 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Zhang",
"Chong",
""
],
[
"Gaignard",
"Alban",
""
],
[
"Kalas",
"Matus",
""
],
[
"Levet",
"Florian",
""
],
[
"Delestro",
"Felipe",
""
],
[
"Lindblad",
"Joakim",
""
],
[
"Sladoje",
"Natasa",
""
],
[
"Plantard",
"Laure",
""
],
[
"Latour",
"Alain",
""
],
[
"Haase",
"Robert",
""
],
[
"Martins",
"Gabriel",
""
],
[
"Sampaio",
"Paula",
""
],
[
"Scholz",
"Leandro",
""
],
[
"taggers",
"NEUBIAS",
""
],
[
"Tosi",
"Sébastien",
""
],
[
"Miura",
"Kota",
""
],
[
"Colombelli",
"Julien",
""
],
[
"Paul-Gilloteaux",
"Perrine",
""
]
] | Bio image analysis has recently become a keystone of biological research but biologists tend to get lost in the plethora of available software and in how to adapt available tools to their own image analysis problems. We present BIII, BioImage Informatic Index (www.biii.eu), the result of the first large community effort to bridge the communities of algorithm and software developers, bioimage analysts and biologists, in the form of a web-based knowledge database crowdsourced by these communities. Software tools (> 1300), image databases for benchmarking (>20) and training materials (>70) for bio image analysis are referenced and curated following standards constructed by the community, thereby reaching a broader audience. Software tools are organized as a full analysis protocol (workflow), a specific brick (component) used to construct a workflow, or a software platform or library (collection). They are described using Edam Bio Imaging, which is iteratively defined using this website. All entries are exposed following FAIR principles and accessible for other usage. |
1507.00947 | Maroussia Favre | Didier Sornette and Maroussia Favre | Cancer risk is not (just) bad luck | null | EPJ Nonlinear Biomedical Physics 2015 3:10 | 10.1140/epjnbp/s40366-015-0026-0 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tomasetti and Vogelstein recently proposed that the majority of variation in
cancer risk among tissues is due to "bad luck," that is, random mutations
arising during DNA replication in normal noncancerous stem cells. They
generalize this finding to cancer overall, claiming that "the stochastic
effects of DNA replication appear to be the major contributor to cancer in
humans." We show that this conclusion results from a logical fallacy based on
ignoring the influence of population heterogeneity in correlations exhibited at
the level of the whole population. Because environmental and genetic factors
cannot explain the huge differences in cancer rates between different organs,
it is wrong to conclude that these factors play a minor role in cancer rates.
In contrast, we show that one can indeed measure huge differences in cancer
rates between different organs and, at the same time, observe a strong effect
of environmental and genetic factors in cancer rates.
| [
{
"created": "Fri, 3 Jul 2015 15:28:18 GMT",
"version": "v1"
}
] | 2016-12-13 | [
[
"Sornette",
"Didier",
""
],
[
"Favre",
"Maroussia",
""
]
] | Tomasetti and Vogelstein recently proposed that the majority of variation in cancer risk among tissues is due to "bad luck," that is, random mutations arising during DNA replication in normal noncancerous stem cells. They generalize this finding to cancer overall, claiming that "the stochastic effects of DNA replication appear to be the major contributor to cancer in humans." We show that this conclusion results from a logical fallacy based on ignoring the influence of population heterogeneity in correlations exhibited at the level of the whole population. Because environmental and genetic factors cannot explain the huge differences in cancer rates between different organs, it is wrong to conclude that these factors play a minor role in cancer rates. In contrast, we show that one can indeed measure huge differences in cancer rates between different organs and, at the same time, observe a strong effect of environmental and genetic factors in cancer rates. |
q-bio/0512024 | Michael Sadovsky | Marina G.Erunova, Michael G.Sadovsky, Anna A.Gosteva | GIS-aided simulation of spatial distribution of some pollutants at
"Stolby" state reservation | 17 pages, 49 reference items, 12 figures | null | null | null | q-bio.QM q-bio.PE | null | Reserved territories seem to be the best reference sites of wild nature, where the
long-term observations are carried out. A simulation model of spatially
distributed processes of contamination of the state reservation is developed,
and the dynamics of some pollutants is studied. The issue of the generalized
evaluation of an ecological system status is discussed.
| [
{
"created": "Sat, 10 Dec 2005 10:33:41 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Erunova",
"Marina G.",
""
],
[
"Sadovsky",
"Michael G.",
""
],
[
"Gosteva",
"Anna A.",
""
]
] | Reserved territories seem to be the best reference sites of wild nature, where the long-term observations are carried out. A simulation model of spatially distributed processes of contamination of the state reservation is developed, and the dynamics of some pollutants is studied. The issue of the generalized evaluation of an ecological system status is discussed. |
2304.10494 | Haotian Zhang | Haotian Zhang, Jintu Zhang, Huifeng Zhao, Dejun Jiang, Yafeng Deng | Infinite Physical Monkey: Do Deep Learning Methods Really Perform Better
in Conformation Generation? | null | null | null | null | q-bio.BM cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conformation Generation is a fundamental problem in drug discovery and
cheminformatics. Organic molecule conformation generation, particularly in
vacuum and protein pocket environments, is most relevant to drug design.
Recently, with the development of geometric neural networks, the data-driven
schemes have been successfully applied in this field, both for molecular
conformation generation (in vacuum) and binding pose generation (in protein
pocket). The former beats the traditional ETKDG method, while the latter
achieves similar accuracy compared with the widely used molecular docking
software. Although these methods have shown promising results, some researchers
have recently questioned whether deep learning (DL) methods perform better in
molecular conformation generation via a parameter-free method. To our surprise,
what they have designed is something analogous to the famous infinite monkey
theorem, with monkeys that are even equipped with a physics education. To
examine the validity of their proof, we constructed a real infinite stochastic
monkey for molecular conformation generation, showing that even with a more
stochastic sampler for geometry generation, the coverage of the benchmark
QM-computed conformations is higher than that of most DL-based methods. By
extending their physical monkey algorithm for binding pose prediction, we also
discover that the successful docking rate achieves near-best performance
among existing DL-based docking models. Thus, though their conclusions are
right, their proof process warrants closer scrutiny.
| [
{
"created": "Wed, 8 Mar 2023 02:09:58 GMT",
"version": "v1"
}
] | 2023-04-21 | [
[
"Zhang",
"Haotian",
""
],
[
"Zhang",
"Jintu",
""
],
[
"Zhao",
"Huifeng",
""
],
[
"Jiang",
"Dejun",
""
],
[
"Deng",
"Yafeng",
""
]
] | Conformation Generation is a fundamental problem in drug discovery and cheminformatics. Organic molecule conformation generation, particularly in vacuum and protein pocket environments, is most relevant to drug design. Recently, with the development of geometric neural networks, the data-driven schemes have been successfully applied in this field, both for molecular conformation generation (in vacuum) and binding pose generation (in protein pocket). The former beats the traditional ETKDG method, while the latter achieves similar accuracy compared with the widely used molecular docking software. Although these methods have shown promising results, some researchers have recently questioned whether deep learning (DL) methods perform better in molecular conformation generation via a parameter-free method. To our surprise, what they have designed is something analogous to the famous infinite monkey theorem, with monkeys that are even equipped with a physics education. To examine the validity of their proof, we constructed a real infinite stochastic monkey for molecular conformation generation, showing that even with a more stochastic sampler for geometry generation, the coverage of the benchmark QM-computed conformations is higher than that of most DL-based methods. By extending their physical monkey algorithm for binding pose prediction, we also discover that the successful docking rate achieves near-best performance among existing DL-based docking models. Thus, though their conclusions are right, their proof process warrants closer scrutiny. |
2004.11868 | Duncan Ralph | Duncan K. Ralph and Frederick A. Matsen IV | Using B cell receptor lineage structures to predict affinity | null | null | 10.1371/journal.pcbi.1008391 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We are frequently faced with a large collection of antibodies, and want to
select those with highest affinity for their cognate antigen. When developing a
first-line therapeutic for a novel pathogen, for instance, we might look for
such antibodies in patients that have recovered. There exist effective
experimental methods of accomplishing this, such as cell sorting and baiting;
however they are time consuming and expensive. Next generation sequencing of B
cell receptor (BCR) repertoires offers an additional source of sequences that
could be tapped if we had a reliable method of selecting those coding for the
best antibodies. In this paper we introduce a method that uses evolutionary
information from the family of related sequences that share a naive ancestor to
predict the affinity of each resulting antibody for its antigen. When combined
with information on the identity of the antigen, this method should provide a
source of effective new antibodies. We also introduce a method for a related
task: given an antibody of interest and its inferred ancestral lineage, which
branches in the tree are likely to harbor key affinity-increasing mutations?
These methods are implemented as part of continuing development of the partis
BCR inference package, available at https://github.com/psathyrella/partis.
| [
{
"created": "Fri, 24 Apr 2020 17:21:36 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Jul 2020 18:46:51 GMT",
"version": "v2"
}
] | 2021-01-27 | [
[
"Ralph",
"Duncan K.",
""
],
[
"Matsen",
"Frederick A.",
"IV"
]
] | We are frequently faced with a large collection of antibodies, and want to select those with highest affinity for their cognate antigen. When developing a first-line therapeutic for a novel pathogen, for instance, we might look for such antibodies in patients that have recovered. There exist effective experimental methods of accomplishing this, such as cell sorting and baiting; however they are time consuming and expensive. Next generation sequencing of B cell receptor (BCR) repertoires offers an additional source of sequences that could be tapped if we had a reliable method of selecting those coding for the best antibodies. In this paper we introduce a method that uses evolutionary information from the family of related sequences that share a naive ancestor to predict the affinity of each resulting antibody for its antigen. When combined with information on the identity of the antigen, this method should provide a source of effective new antibodies. We also introduce a method for a related task: given an antibody of interest and its inferred ancestral lineage, which branches in the tree are likely to harbor key affinity-increasing mutations? These methods are implemented as part of continuing development of the partis BCR inference package, available at https://github.com/psathyrella/partis. |
2007.05112 | William Podlaski | William F. Podlaski, Christian K. Machens | Biological credit assignment through dynamic inversion of feedforward
networks | 34th Conference on Neural Information Processing Systems (NeurIPS
2020), Vancouver, Canada | null | null | null | q-bio.NC cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning depends on changes in synaptic connections deep inside the brain. In
multilayer networks, these changes are triggered by error signals fed back from
the output, generally through a stepwise inversion of the feedforward
processing steps. The gold standard for this process -- backpropagation --
works well in artificial neural networks, but is biologically implausible.
Several recent proposals have emerged to address this problem, but many of
these biologically-plausible schemes are based on learning an independent set
of feedback connections. This complicates the assignment of errors to each
synapse by making it dependent upon a second learning problem, and by fitting
inversions rather than guaranteeing them. Here, we show that feedforward
network transformations can be effectively inverted through dynamics. We derive
this dynamic inversion from the perspective of feedback control, where the
forward transformation is reused and dynamically interacts with fixed or random
feedback to propagate error signals during the backward pass. Importantly, this
scheme does not rely upon a second learning problem for feedback because
accurate inversion is guaranteed through the network dynamics. We map these
dynamics onto generic feedforward networks, and show that the resulting
algorithm performs well on several supervised and unsupervised datasets.
Finally, we discuss potential links between dynamic inversion and second-order
optimization. Overall, our work introduces an alternative perspective on credit
assignment in the brain, and proposes a special role for temporal dynamics and
feedback control during learning.
| [
{
"created": "Fri, 10 Jul 2020 00:03:01 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Jan 2021 00:31:24 GMT",
"version": "v2"
}
] | 2021-01-05 | [
[
"Podlaski",
"William F.",
""
],
[
"Machens",
"Christian K.",
""
]
] | Learning depends on changes in synaptic connections deep inside the brain. In multilayer networks, these changes are triggered by error signals fed back from the output, generally through a stepwise inversion of the feedforward processing steps. The gold standard for this process -- backpropagation -- works well in artificial neural networks, but is biologically implausible. Several recent proposals have emerged to address this problem, but many of these biologically-plausible schemes are based on learning an independent set of feedback connections. This complicates the assignment of errors to each synapse by making it dependent upon a second learning problem, and by fitting inversions rather than guaranteeing them. Here, we show that feedforward network transformations can be effectively inverted through dynamics. We derive this dynamic inversion from the perspective of feedback control, where the forward transformation is reused and dynamically interacts with fixed or random feedback to propagate error signals during the backward pass. Importantly, this scheme does not rely upon a second learning problem for feedback because accurate inversion is guaranteed through the network dynamics. We map these dynamics onto generic feedforward networks, and show that the resulting algorithm performs well on several supervised and unsupervised datasets. Finally, we discuss potential links between dynamic inversion and second-order optimization. Overall, our work introduces an alternative perspective on credit assignment in the brain, and proposes a special role for temporal dynamics and feedback control during learning. |
1801.09997 | Wilten Nicola | Wilten Nicola, Peter Hellyer, Sue Ann Campbell, Claudia Clopath | Chaos in Homeostatically Regulated Neural Systems | 25 pages, 5 figures | null | 10.1063/1.5026489 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Low-dimensional yet rich dynamics often emerge in the brain. Examples include
oscillations and chaotic dynamics during sleep, epilepsy, and voluntary
movement. However, a general mechanism for the emergence of low dimensional
dynamics remains elusive. Here, we consider Wilson-Cowan networks and
demonstrate through numerical and analytical work that a type of homeostatic
regulation of the network firing rates can paradoxically lead to a rich
dynamical repertoire. The dynamics include mixed-mode oscillations, mixed-mode
chaos, and chaotic synchronization. This is true for a single recurrently coupled
node, pairs of reciprocally coupled nodes without self-coupling, and networks
coupled through experimentally determined weights derived from functional
magnetic resonance imaging data. In all cases, the stability of the homeostatic
set point is analytically determined or approximated. The dynamics at the
network level are directly determined by the behavior of a single node system
through synchronization in both oscillatory and non-oscillatory states. Our
results demonstrate that rich dynamics can be preserved under homeostatic
regulation or even be caused by homeostatic regulation.
| [
{
"created": "Tue, 30 Jan 2018 14:20:37 GMT",
"version": "v1"
}
] | 2018-08-29 | [
[
"Nicola",
"Wilten",
""
],
[
"Hellyer",
"Peter",
""
],
[
"Campbell",
"Sue Ann",
""
],
[
"Clopath",
"Claudia",
""
]
] | Low-dimensional yet rich dynamics often emerge in the brain. Examples include oscillations and chaotic dynamics during sleep, epilepsy, and voluntary movement. However, a general mechanism for the emergence of low dimensional dynamics remains elusive. Here, we consider Wilson-Cowan networks and demonstrate through numerical and analytical work that a type of homeostatic regulation of the network firing rates can paradoxically lead to a rich dynamical repertoire. The dynamics include mixed-mode oscillations, mixed-mode chaos, and chaotic synchronization. This is true for a single recurrently coupled node, pairs of reciprocally coupled nodes without self-coupling, and networks coupled through experimentally determined weights derived from functional magnetic resonance imaging data. In all cases, the stability of the homeostatic set point is analytically determined or approximated. The dynamics at the network level are directly determined by the behavior of a single node system through synchronization in both oscillatory and non-oscillatory states. Our results demonstrate that rich dynamics can be preserved under homeostatic regulation or even be caused by homeostatic regulation. |
1409.4137 | Yang-Yu Liu | Gang Yan, Neo D. Martinez, Yang-Yu Liu | Stability of Degree Heterogeneous Ecological Networks | 20 pages, 5 figures | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A classic measure of ecological stability describes the tendency of a
community to return to equilibrium after small perturbation. While many
advances show how the network structure of these communities severely
constrains such tendencies, few if any of these advances address one of the
most fundamental properties of network structure: heterogeneity among nodes
with different numbers of links. Here we systematically explore this property
of "degree heterogeneity" and find that its effects on stability systematically
vary with different types of interspecific interactions. Degree heterogeneity
is always destabilizing in ecological networks with both competitive and
mutualistic interactions while its effects on networks of predator-prey
interactions such as food webs depend on prey contiguity, i.e., the extent to
which the species consume an unbroken sequence of prey in community niche
space. Increasing degree heterogeneity stabilizes food webs except those with
the most contiguity. These findings help explain previously unexplained
observations that food webs are highly but not completely contiguous and, more
broadly, deepen our understanding of the stability of complex ecological
networks with important implications for other types of dynamical systems.
| [
{
"created": "Mon, 15 Sep 2014 01:57:43 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Sep 2014 11:57:45 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Sep 2014 10:16:34 GMT",
"version": "v3"
},
{
"created": "Fri, 5 Jun 2015 23:56:17 GMT",
"version": "v4"
}
] | 2015-06-09 | [
[
"Yan",
"Gang",
""
],
[
"Martinez",
"Neo D.",
""
],
[
"Liu",
"Yang-Yu",
""
]
] | A classic measure of ecological stability describes the tendency of a community to return to equilibrium after small perturbation. While many advances show how the network structure of these communities severely constrains such tendencies, few if any of these advances address one of the most fundamental properties of network structure: heterogeneity among nodes with different numbers of links. Here we systematically explore this property of "degree heterogeneity" and find that its effects on stability systematically vary with different types of interspecific interactions. Degree heterogeneity is always destabilizing in ecological networks with both competitive and mutualistic interactions while its effects on networks of predator-prey interactions such as food webs depend on prey contiguity, i.e., the extent to which the species consume an unbroken sequence of prey in community niche space. Increasing degree heterogeneity stabilizes food webs except those with the most contiguity. These findings help explain previously unexplained observations that food webs are highly but not completely contiguous and, more broadly, deepen our understanding of the stability of complex ecological networks with important implications for other types of dynamical systems. |
1908.00496 | Arvind Ramanathan | Heng Ma, Debsindhu Bhowmik, Hyungro Lee, Matteo Turilli, Michael T.
Young, Shantenu Jha, Arvind Ramanathan | Deep Generative Model Driven Protein Folding Simulation | 3 figures, 2 tables | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Significant progress in computer hardware and software has enabled molecular
dynamics (MD) simulations to model complex biological phenomena such as protein
folding. However, enabling MD simulations to access biologically relevant
timescales (e.g., beyond milliseconds) still remains challenging. These
limitations include (1) quantifying which set of states have already been
(sufficiently) sampled in an ensemble of MD runs, and (2) identifying novel
states from which simulations can be initiated to sample rare events (e.g.,
sampling folding events). With the recent success of deep learning and
artificial intelligence techniques in analyzing large datasets, we posit that
these techniques can also be used to adaptively guide MD simulations to model
such complex biological phenomena. Leveraging our recently developed
unsupervised deep learning technique to cluster protein folding trajectories
into partially folded intermediates, we build an iterative workflow that
enables our generative model to be coupled with all-atom MD simulations to fold
small protein systems on emerging high performance computing platforms. We
demonstrate our approach in folding Fs-peptide and the $\beta\beta\alpha$ (BBA)
fold, FSD-EY. Our adaptive workflow enables us to achieve an overall root-mean
squared deviation (RMSD) to the native state of 1.6$~\AA$ and 4.4~$\AA$
respectively for Fs-peptide and FSD-EY. We also highlight some emerging
challenges in the context of designing scalable workflows when data intensive
deep learning techniques are coupled to compute intensive MD simulations.
| [
{
"created": "Thu, 1 Aug 2019 16:45:50 GMT",
"version": "v1"
}
] | 2019-08-02 | [
[
"Ma",
"Heng",
""
],
[
"Bhowmik",
"Debsindhu",
""
],
[
"Lee",
"Hyungro",
""
],
[
"Turilli",
"Matteo",
""
],
[
"Young",
"Michael T.",
""
],
[
"Jha",
"Shantenu",
""
],
[
"Ramanathan",
"Arvind",
""
]
] | Significant progress in computer hardware and software has enabled molecular dynamics (MD) simulations to model complex biological phenomena such as protein folding. However, enabling MD simulations to access biologically relevant timescales (e.g., beyond milliseconds) still remains challenging. These limitations include (1) quantifying which set of states have already been (sufficiently) sampled in an ensemble of MD runs, and (2) identifying novel states from which simulations can be initiated to sample rare events (e.g., sampling folding events). With the recent success of deep learning and artificial intelligence techniques in analyzing large datasets, we posit that these techniques can also be used to adaptively guide MD simulations to model such complex biological phenomena. Leveraging our recently developed unsupervised deep learning technique to cluster protein folding trajectories into partially folded intermediates, we build an iterative workflow that enables our generative model to be coupled with all-atom MD simulations to fold small protein systems on emerging high performance computing platforms. We demonstrate our approach in folding Fs-peptide and the $\beta\beta\alpha$ (BBA) fold, FSD-EY. Our adaptive workflow enables us to achieve an overall root-mean squared deviation (RMSD) to the native state of 1.6$~\AA$ and 4.4~$\AA$ respectively for Fs-peptide and FSD-EY. We also highlight some emerging challenges in the context of designing scalable workflows when data intensive deep learning techniques are coupled to compute intensive MD simulations. |
1809.03078 | Peter Jarvis | Peter D Jarvis and Jeremy G Sumner | Systematics and symmetry in molecular phylogenetic modelling:
perspectives from physics | 51 pages, LaTeX, 3 figures. Minor clarifications added and typos
corrected | null | 10.1088/1751-8121/ab305b | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this review is to present and analyze the probabilistic models of
mathematical phylogenetics which have been intensively used in recent years in
biology as the cornerstone of attempts to infer and reconstruct the ancestral
relationships between species. We outline the development of theoretical
phylogenetics, from the earliest studies based on morphological characters,
through to the use of molecular data in a wide variety of forms. We bring the
lens of mathematical physics to bear on the formulation of theoretical models,
focussing on the applicability of many methods from the toolkit of that
tradition -- techniques of groups and representations to guide model
specification and to exploit the multilinear setting of the models in the
presence of underlying symmetries; extensions to coalgebraic properties of the
generators associated to rate matrices underlying the models, in relation to
the graphical structures (trees and networks) which form the search space for
inferring evolutionary trees. Aspects presented include relating model classes
to relevant matrix Lie algebras, as well as manipulations with group characters
to enumerate various natural polynomial invariants, for identifying robust,
low-parameter quantities for use in inference. Above all, we wish to emphasize
the many features of multipartite entanglement which are shared between
descriptions of quantum states on the physics side, and the multi-way tensor
probability arrays arising in phylogenetics. In some instances, well-known
objects such as the Cayley hyperdeterminant (the `tangle') can be directly
imported into the formalism -- for models with binary character traits, and
triplets of taxa. In other cases new objects appear, such as the remarkable
quintic `squangle' invariants for quartet tree discrimination and DNA data,
with their own unique interpretation in the phylogenetic modeling context.
| [
{
"created": "Mon, 10 Sep 2018 01:42:38 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Sep 2018 14:42:31 GMT",
"version": "v2"
}
] | 2020-01-08 | [
[
"Jarvis",
"Peter D",
""
],
[
"Sumner",
"Jeremy G",
""
]
] | The aim of this review is to present and analyze the probabilistic models of mathematical phylogenetics which have been intensively used in recent years in biology as the cornerstone of attempts to infer and reconstruct the ancestral relationships between species. We outline the development of theoretical phylogenetics, from the earliest studies based on morphological characters, through to the use of molecular data in a wide variety of forms. We bring the lens of mathematical physics to bear on the formulation of theoretical models, focussing on the applicability of many methods from the toolkit of that tradition -- techniques of groups and representations to guide model specification and to exploit the multilinear setting of the models in the presence of underlying symmetries; extensions to coalgebraic properties of the generators associated to rate matrices underlying the models, in relation to the graphical structures (trees and networks) which form the search space for inferring evolutionary trees. Aspects presented include relating model classes to relevant matrix Lie algebras, as well as manipulations with group characters to enumerate various natural polynomial invariants, for identifying robust, low-parameter quantities for use in inference. Above all, we wish to emphasize the many features of multipartite entanglement which are shared between descriptions of quantum states on the physics side, and the multi-way tensor probability arrays arising in phylogenetics. In some instances, well-known objects such as the Cayley hyperdeterminant (the `tangle') can be directly imported into the formalism -- for models with binary character traits, and triplets of taxa. In other cases new objects appear, such as the remarkable quintic `squangle' invariants for quartet tree discrimination and DNA data, with their own unique interpretation in the phylogenetic modeling context. |
1608.07440 | Cinzia Di Giusto | Franck Delaplace (IBISC), Cinzia Di Giusto, Jean-Louis Giavitto
(Repmus), Hanna Klaudel (IBISC) | Activity Networks with Delays: An application to toxicity analysis | null | null | null | null | q-bio.QM cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ANDy, Activity Networks with Delays, is a discrete-time framework aimed at
the qualitative modelling of time-dependent activities. The modular and concise
syntax makes ANDy suitable for an easy and natural modelling of time-dependent
biological systems (i.e., regulatory pathways). Activities involve entities
playing the role of activators, inhibitors or products of biochemical network
operation. Activities may have given duration, i.e., the time required to
obtain results. An entity may represent an object (e.g., an agent, a
biochemical species or a family thereof) with a local attribute, a state
denoting its level (e.g., concentration, strength). Entities' levels may change
as a result of an activity or may decay gradually as time passes by. The
semantics of ANDy is formally given via high-level Petri nets, thereby ensuring
some modularity. As main results, we show that ANDy systems have finite-state
representations even for potentially infinite processes and that the framework
adapts well to
the modelling of toxic behaviours. As an illustration, we present a
classification of toxicity properties and give some hints on how they can be
verified with existing tools on ANDy systems. A small case study on blood
glucose regulation is provided to exemplify the ANDy framework and the toxicity
properties.
| [
{
"created": "Fri, 26 Aug 2016 12:41:43 GMT",
"version": "v1"
}
] | 2016-08-29 | [
[
"Delaplace",
"Franck",
"",
"IBISC"
],
[
"Di Giusto",
"Cinzia",
"",
        ""
],
[
"Giavitto",
"Jean-Louis",
"",
"Repmus"
],
[
"Klaudel",
"Hanna",
"",
"IBISC"
]
] | ANDy, Activity Networks with Delays, is a discrete-time framework aimed at the qualitative modelling of time-dependent activities. The modular and concise syntax makes ANDy suitable for an easy and natural modelling of time-dependent biological systems (i.e., regulatory pathways). Activities involve entities playing the role of activators, inhibitors or products of biochemical network operation. Activities may have given duration, i.e., the time required to obtain results. An entity may represent an object (e.g., an agent, a biochemical species or a family thereof) with a local attribute, a state denoting its level (e.g., concentration, strength). Entities' levels may change as a result of an activity or may decay gradually as time passes by. The semantics of ANDy is formally given via high-level Petri nets, thereby ensuring some modularity. As main results, we show that ANDy systems have finite-state representations even for potentially infinite processes and that the framework adapts well to the modelling of toxic behaviours. As an illustration, we present a classification of toxicity properties and give some hints on how they can be verified with existing tools on ANDy systems. A small case study on blood glucose regulation is provided to exemplify the ANDy framework and the toxicity properties. |
2210.11638 | Gabriel Piva | G. G. Piva, C. Anteneodo | Influence of density-dependent diffusion on pattern formation in a
bounded habitat | 9 pages, 9 figures | null | null | null | q-bio.PE cond-mat.stat-mech nlin.PS | http://creativecommons.org/licenses/by/4.0/ | Considering a nonlocal version of the Fisher-KPP equation, we explore the
impact of heterogeneous diffusion on pattern formation within a bounded region
(refuge). Under homogeneous diffusion, it is established that nonlocality can
lead to spontaneous pattern formation under certain parameter conditions;
otherwise, when the homogeneous state is stable, spatial perturbations such as
the existence of a refuge (high quality region within a hostile environment)
can induce patterns. To understand how density-dependent diffusivity influences
these forms of pattern formation, we examine how diffusivity reacts to both
rarefaction and overcrowding. Additionally, for comparison, we investigate the
scenario where diffusivity levels vary spatially, inside and outside the
refuge. We find that state-dependent diffusivity affects the shape and
stability of patterns, potentially triggering either explosive growth or
fragmentation of the population distribution, depending on how diffusion
responds to density changes.
| [
{
"created": "Thu, 20 Oct 2022 23:49:08 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jul 2024 15:24:07 GMT",
"version": "v2"
}
] | 2024-07-04 | [
[
"Piva",
"G. G.",
""
],
[
"Anteneodo",
"C.",
""
]
] | Considering a nonlocal version of the Fisher-KPP equation, we explore the impact of heterogeneous diffusion on pattern formation within a bounded region (refuge). Under homogeneous diffusion, it is established that nonlocality can lead to spontaneous pattern formation under certain parameter conditions; otherwise, when the homogeneous state is stable, spatial perturbations such as the existence of a refuge (high quality region within a hostile environment) can induce patterns. To understand how density-dependent diffusivity influences these forms of pattern formation, we examine how diffusivity reacts to both rarefaction and overcrowding. Additionally, for comparison, we investigate the scenario where diffusivity levels vary spatially, inside and outside the refuge. We find that state-dependent diffusivity affects the shape and stability of patterns, potentially triggering either explosive growth or fragmentation of the population distribution, depending on how diffusion responds to density changes. |
1411.4242 | Najmeh Sadat Mirian | Najmeh Sadat Mirian | Colored Correlated Noises in Growth Model of Tumor | It is not complete and I did not find time to finish it | null | null | null | q-bio.CB cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic resonance induced by external factors is considered to investigate
the complex dynamics of a tumor. The surrounding environment and the treatment
effects on tumor growth are considered as additive and multiplicative noises in
the growth model. The adaptability of the tumor to treatment is represented by
the correlation of these two noises. The Fokker-Planck equation is derived to
study the probability distribution function and the mean number of tumor cells
under different conditions. The mean number of tumor cells can be controlled by
the correlation and intensity of the noises.
| [
{
"created": "Sun, 16 Nov 2014 10:52:47 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Nov 2014 14:18:59 GMT",
"version": "v2"
},
{
"created": "Sat, 15 Jul 2017 10:41:17 GMT",
"version": "v3"
}
] | 2017-07-18 | [
[
"Mirian",
"Najmeh Sadat",
""
]
] | Stochastic resonance induced by external factors is considered to investigate the complex dynamics of a tumor. The surrounding environment and the treatment effects on tumor growth are considered as additive and multiplicative noises in the growth model. The adaptability of the tumor to treatment is represented by the correlation of these two noises. The Fokker-Planck equation is derived to study the probability distribution function and the mean number of tumor cells under different conditions. The mean number of tumor cells can be controlled by the correlation and intensity of the noises. |
1305.1267 | Liao Chen | Liao Y. Chen | Healthy sweet inhibitor of Plasmodium falciparum aquaglyceroporin | 24 pages, 8 figures | Biophysical Chemistry 198, 14-21 (2015) | 10.1016/j.bpc.2015.01.004 | null | q-bio.SC physics.bio-ph q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Plasmodium falciparum aquaglyceroporin (PfAQP) is a multifunctional channel
protein in the plasma membrane of the malarial parasite that causes the most
severe form of malaria infecting more than a million people a year. Finding a
novel way to inhibit PfAQP, I conducted 3+ microseconds of in silico experiments
of an atomistic model of the PfAQP-membrane system and computed the
chemical-potential profiles of six permeants (erythritol, water, glycerol,
urea, ammonia, and ammonium) that can be efficiently transported across P.
falciparum's plasma membrane through PfAQP's conducting pore. The profiles show
that, with all the existing in vitro data being supportive, erythritol, a
permeant of PfAQP itself having a deep ditch in its permeation passageway,
strongly inhibits PfAQP's functions of transporting water, glycerol, urea,
ammonia, and ammonium (The IC50 is in the range of high nanomolars). This
suggests the possibility that erythritol, a sweetener generally considered
safe, may be the drug needed to kill the malarial parasite in vivo without
causing serious side effects.
| [
{
"created": "Mon, 6 May 2013 18:14:30 GMT",
"version": "v1"
},
{
"created": "Fri, 17 May 2013 21:35:35 GMT",
"version": "v2"
}
] | 2015-02-17 | [
[
"Chen",
"Liao Y.",
""
]
] | Plasmodium falciparum aquaglyceroporin (PfAQP) is a multifunctional channel protein in the plasma membrane of the malarial parasite that causes the most severe form of malaria infecting more than a million people a year. Finding a novel way to inhibit PfAQP, I conducted 3+ microseconds of in silico experiments of an atomistic model of the PfAQP-membrane system and computed the chemical-potential profiles of six permeants (erythritol, water, glycerol, urea, ammonia, and ammonium) that can be efficiently transported across P. falciparum's plasma membrane through PfAQP's conducting pore. The profiles show that, with all the existing in vitro data being supportive, erythritol, a permeant of PfAQP itself having a deep ditch in its permeation passageway, strongly inhibits PfAQP's functions of transporting water, glycerol, urea, ammonia, and ammonium (The IC50 is in the range of high nanomolars). This suggests the possibility that erythritol, a sweetener generally considered safe, may be the drug needed to kill the malarial parasite in vivo without causing serious side effects. |
1307.7840 | Aaron Darling | Jo\~ao Paulo Pereira Zanetti, Priscila Biller, and Jo\~ao Meidanis | On the Matrix Median Problem | Peer-reviewed and presented as part of the 13th Workshop on
Algorithms in Bioinformatics (WABI2013) | null | null | null | q-bio.QM cs.CE cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Genome Median Problem is an important problem in phylogenetic
reconstruction under rearrangement models. It can be stated as follows: given
three genomes, find a fourth that minimizes the sum of the pairwise
rearrangement distances between it and the three input genomes. Recently,
Feijao and Meidanis extended the algebraic theory for genome rearrangement to
allow for linear chromosomes, thus yielding a new rearrangement model (the
algebraic model), very close to the celebrated DCJ model. In this paper, we
study the genome median problem under the algebraic model, whose complexity is
currently open, proposing a more general form of the problem, the matrix median
problem. It is known that, for any metric distance, at least one of the corners
is a 4/3-approximation of the median. Our results allow us to compute up to
three additional matrix median candidates, all of them with approximation
ratios at least as good as the best corner, when the input matrices come from
genomes. From the application point of view, it is usually more interesting to
locate medians farther from the corners. We also show a fourth median candidate
that gives better results in cases we tried. However, we do not have proven
bounds for this fourth candidate yet.
| [
{
"created": "Tue, 30 Jul 2013 06:52:13 GMT",
"version": "v1"
}
] | 2013-08-02 | [
[
"Zanetti",
"João Paulo Pereira",
""
],
[
"Biller",
"Priscila",
""
],
[
"Meidanis",
"João",
""
]
] | The Genome Median Problem is an important problem in phylogenetic reconstruction under rearrangement models. It can be stated as follows: given three genomes, find a fourth that minimizes the sum of the pairwise rearrangement distances between it and the three input genomes. Recently, Feijao and Meidanis extended the algebraic theory for genome rearrangement to allow for linear chromosomes, thus yielding a new rearrangement model (the algebraic model), very close to the celebrated DCJ model. In this paper, we study the genome median problem under the algebraic model, whose complexity is currently open, proposing a more general form of the problem, the matrix median problem. It is known that, for any metric distance, at least one of the corners is a 4/3-approximation of the median. Our results allow us to compute up to three additional matrix median candidates, all of them with approximation ratios at least as good as the best corner, when the input matrices come from genomes. From the application point of view, it is usually more interesting to locate medians farther from the corners. We also show a fourth median candidate that gives better results in cases we tried. However, we do not have proven bounds for this fourth candidate yet. |
q-bio/0406020 | Igor Volkov | Igor Volkov, Jayanth R. Banavar and Amos Maritan | Organization of Ecosystems in the Vicinity of a Novel Phase Transition | 4 pages, 2 figures | Phys. Rev. Lett. 92, 218703 (2004) | 10.1103/PhysRevLett.92.218703 | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | null | It is shown that an ecosystem in equilibrium is generally organized in a
state which is poised in the vicinity of a novel phase transition.
| [
{
"created": "Wed, 9 Jun 2004 17:05:21 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Volkov",
"Igor",
""
],
[
"Banavar",
"Jayanth R.",
""
],
[
"Maritan",
"Amos",
""
]
] | It is shown that an ecosystem in equilibrium is generally organized in a state which is poised in the vicinity of a novel phase transition. |
2110.02935 | Breno Ferraz de Oliveira | P.P. Avelino, B.F. de Oliveira and R.S. Trintin | Lotka-Volterra versus May-Leonard formulations of the spatial stochastic
Rock-Paper-Scissors model: the missing link | 6 pages, 4 figures | null | 10.1103/PhysRevE.105.024309 | null | q-bio.PE cond-mat.stat-mech physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | The Rock-Paper-Scissors (RPS) model successfully reproduces some of the main
features of simple cyclic predator-prey systems with interspecific competition
observed in nature. Still, lattice-based simulations of the spatial stochastic
RPS model are known to give rise to significantly different results, depending
on whether the three state Lotka-Volterra or the four state May-Leonard
formulation is employed. This is true independently of the values of the model
parameters and of the use of either a von Neumann or a Moore neighborhood. With
the objective of reducing the impact of the use of a discrete lattice, in this
paper we introduce a simple modification to the standard spatial stochastic RPS
model in which the range of the search of the nearest neighbor may be extended
up to a maximum euclidean radius $R$. We show that, with this adjustment, the
Lotka-Volterra and May-Leonard formulations can be designed to produce similar
results, both in terms of dynamical properties and spatial features, by means
of an appropriate parameter choice. In particular, we show that this modified
spatial stochastic RPS model naturally leads to the emergence of spiral
patterns in both its three and four state formulations.
| [
{
"created": "Wed, 6 Oct 2021 17:30:28 GMT",
"version": "v1"
}
] | 2022-03-14 | [
[
"Avelino",
"P. P.",
""
],
[
"de Oliveira",
"B. F.",
""
],
[
"Trintin",
"R. S.",
""
]
] | The Rock-Paper-Scissors (RPS) model successfully reproduces some of the main features of simple cyclic predator-prey systems with interspecific competition observed in nature. Still, lattice-based simulations of the spatial stochastic RPS model are known to give rise to significantly different results, depending on whether the three state Lotka-Volterra or the four state May-Leonard formulation is employed. This is true independently of the values of the model parameters and of the use of either a von Neumann or a Moore neighborhood. With the objective of reducing the impact of the use of a discrete lattice, in this paper we introduce a simple modification to the standard spatial stochastic RPS model in which the range of the search of the nearest neighbor may be extended up to a maximum euclidean radius $R$. We show that, with this adjustment, the Lotka-Volterra and May-Leonard formulations can be designed to produce similar results, both in terms of dynamical properties and spatial features, by means of an appropriate parameter choice. In particular, we show that this modified spatial stochastic RPS model naturally leads to the emergence of spiral patterns in both its three and four state formulations. |
2205.15019 | Tudor Achim | Namrata Anand, Tudor Achim | Protein Structure and Sequence Generation with Equivariant Denoising
Diffusion Probabilistic Models | null | null | null | null | q-bio.QM cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Proteins are macromolecules that mediate a significant fraction of the
cellular processes that underlie life. An important task in bioengineering is
designing proteins with specific 3D structures and chemical properties which
enable targeted functions. To this end, we introduce a generative model of both
protein structure and sequence that can operate at significantly larger scales
than previous molecular generative modeling approaches. The model is learned
entirely from experimental data and conditions its generation on a compact
specification of protein topology to produce a full-atom backbone configuration
as well as sequence and side-chain predictions. We demonstrate the quality of
the model via qualitative and quantitative analysis of its samples. Videos of
sampling trajectories are available at https://nanand2.github.io/proteins .
| [
{
"created": "Thu, 26 May 2022 16:10:09 GMT",
"version": "v1"
}
] | 2022-05-31 | [
[
"Anand",
"Namrata",
""
],
[
"Achim",
"Tudor",
""
]
] | Proteins are macromolecules that mediate a significant fraction of the cellular processes that underlie life. An important task in bioengineering is designing proteins with specific 3D structures and chemical properties which enable targeted functions. To this end, we introduce a generative model of both protein structure and sequence that can operate at significantly larger scales than previous molecular generative modeling approaches. The model is learned entirely from experimental data and conditions its generation on a compact specification of protein topology to produce a full-atom backbone configuration as well as sequence and side-chain predictions. We demonstrate the quality of the model via qualitative and quantitative analysis of its samples. Videos of sampling trajectories are available at https://nanand2.github.io/proteins . |
1901.01024 | Peter Taylor | Yujiang Wang, Gabrielle Marie Schroeder, Nishant Sinha, Peter Neal
Taylor | Personalised network modelling in epilepsy | 18 pages, 1 figure, book chapter | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Epilepsy is a disorder characterised by spontaneous, recurrent seizures. Both
local and network abnormalities have been associated with epilepsy, and the
exact processes generating seizures are thought to be heterogeneous and
patient-specific. Due to the heterogeneity, treatments such as surgery and
medication are not always effective in achieving full seizure control and
choosing the best treatment for the individual patient can be challenging.
Predictive models constrained by the patient's own data therefore offer the
potential to assist in clinical decision making. In this chapter, we describe
how personalised patient-derived networks from structural or functional
connectivity can be incorporated into predictive models. We focus specifically
on dynamical systems models which are composed of differential equations
capable of simulating brain activity over time. Here we review recent studies
which have used these models, constrained by patient data, to make personalised
patient-specific predictions about seizure features (such as propagation
patterns) or treatment outcomes (such as the success of surgical resection).
Finally, we suggest future research directions for patient-specific network
models in epilepsy, including their application to integrate information from
multiple modalities, to predict long-term disease evolution, and to account for
within-subject variability for treatment.
| [
{
"created": "Fri, 4 Jan 2019 09:04:55 GMT",
"version": "v1"
}
] | 2019-01-07 | [
[
"Wang",
"Yujiang",
""
],
[
"Schroeder",
"Gabrielle Marie",
""
],
[
"Sinha",
"Nishant",
""
],
[
"Taylor",
"Peter Neal",
""
]
] | Epilepsy is a disorder characterised by spontaneous, recurrent seizures. Both local and network abnormalities have been associated with epilepsy, and the exact processes generating seizures are thought to be heterogeneous and patient-specific. Due to the heterogeneity, treatments such as surgery and medication are not always effective in achieving full seizure control and choosing the best treatment for the individual patient can be challenging. Predictive models constrained by the patient's own data therefore offer the potential to assist in clinical decision making. In this chapter, we describe how personalised patient-derived networks from structural or functional connectivity can be incorporated into predictive models. We focus specifically on dynamical systems models which are composed of differential equations capable of simulating brain activity over time. Here we review recent studies which have used these models, constrained by patient data, to make personalised patient-specific predictions about seizure features (such as propagation patterns) or treatment outcomes (such as the success of surgical resection). Finally, we suggest future research directions for patient-specific network models in epilepsy, including their application to integrate information from multiple modalities, to predict long-term disease evolution, and to account for within-subject variability for treatment. |
2101.00405 | Michele Garetto | Michele Garetto and Emilio Leonardi and Giovanni Luca Torrisi | A time-modulated Hawkes process to model the spread of COVID-19 and the
impact of countermeasures | 13 colored figures | Annual Reviews in Control, 2021 | 10.1016/j.arcontrol.2021.02.002 | null | q-bio.PE math.PR physics.soc-ph stat.AP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Motivated by the recent outbreak of coronavirus (COVID-19), we propose a
stochastic model of epidemic temporal growth and mitigation based on a
time-modulated Hawkes process. The model is sufficiently rich to incorporate
specific characteristics of the novel coronavirus, to capture the impact of
undetected, asymptomatic and super-diffusive individuals, and especially to
take into account time-varying counter-measures and detection efforts. Yet, it
is simple enough to allow scalable and efficient computation of the temporal
evolution of the epidemic, and exploration of what-if scenarios. Compared to
traditional compartmental models, our approach allows a more faithful
description of virus specific features, such as distributions for the time
spent in stages, which is crucial when the time-scale of control (e.g.,
mobility restrictions) is comparable to the lifetime of a single infection. We
apply the model to the first and second wave of COVID-19 in Italy, shedding
light into several effects related to mobility restrictions introduced by the
government, and to the effectiveness of contact tracing and mass testing
performed by the national health service.
| [
{
"created": "Sat, 2 Jan 2021 08:53:32 GMT",
"version": "v1"
}
] | 2021-03-18 | [
[
"Garetto",
"Michele",
""
],
[
"Leonardi",
"Emilio",
""
],
[
"Torrisi",
"Giovanni Luca",
""
]
] | Motivated by the recent outbreak of coronavirus (COVID-19), we propose a stochastic model of epidemic temporal growth and mitigation based on a time-modulated Hawkes process. The model is sufficiently rich to incorporate specific characteristics of the novel coronavirus, to capture the impact of undetected, asymptomatic and super-diffusive individuals, and especially to take into account time-varying counter-measures and detection efforts. Yet, it is simple enough to allow scalable and efficient computation of the temporal evolution of the epidemic, and exploration of what-if scenarios. Compared to traditional compartmental models, our approach allows a more faithful description of virus specific features, such as distributions for the time spent in stages, which is crucial when the time-scale of control (e.g., mobility restrictions) is comparable to the lifetime of a single infection. We apply the model to the first and second wave of COVID-19 in Italy, shedding light into several effects related to mobility restrictions introduced by the government, and to the effectiveness of contact tracing and mass testing performed by the national health service. |
2103.01014 | Maria Farahi | Maria Farahi, Alicia Casals, Omid Sarrafzadeh, Yasaman Zamani, Hooran
Ahmadi, Naeimeh Behbood, Hessam Habibian | Beat-to-Beat Fetal Heart Rate Analysis Using Portable Medical Device and
Wavelet Transformation Technique | 12 pages, 12 figures | Heliyon 8(12), E12655, 2022 | 10.1016/j.heliyon.2022.e12655 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Beat-to-beat tele-fetal monitoring and comparison with clinical data are
studied with a wavelet transformation approach. Tele-fetal monitoring is a
major step toward a wearable medical device that allows a pregnant woman to
obtain prenatal care at home. We apply a wavelet transformation algorithm
for fetal cardiac monitoring using a portable fetal Doppler medical device.
To choose an appropriate mother wavelet, 85 different mother wavelets are
investigated. The efficiency of the proposed method is evaluated using two
data sets, one public and one clinical. Using publicly available data from
PhysioBank and simultaneous clinical measurements, we show that comparing the
fetal heart rate obtained by the algorithm against the baselines yields a
promising accuracy beyond 95%. Finally, we conclude that the proposed
algorithm would be a robust technique for any similar tele-fetal monitoring
approach.
| [
{
"created": "Mon, 1 Mar 2021 14:02:24 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Mar 2021 10:43:19 GMT",
"version": "v2"
},
{
"created": "Thu, 4 Nov 2021 16:38:31 GMT",
"version": "v3"
},
{
"created": "Mon, 27 Jun 2022 11:49:49 GMT",
"version": "v4"
}
] | 2023-01-24 | [
[
"Farahi",
"Maria",
""
],
[
"Casals",
"Alicia",
""
],
[
"Sarrafzadeh",
"Omid",
""
],
[
"Zamani",
"Yasaman",
""
],
[
"Ahmadi",
"Hooran",
""
],
[
"Behbood",
"Naeimeh",
""
],
[
"Habibian",
"Hessam",
""
]
] | Beat-to-beat tele-fetal monitoring and comparison with clinical data are studied with a wavelet transformation approach. Tele-fetal monitoring is a major step toward a wearable medical device that allows a pregnant woman to obtain prenatal care at home. We apply a wavelet transformation algorithm for fetal cardiac monitoring using a portable fetal Doppler medical device. To choose an appropriate mother wavelet, 85 different mother wavelets are investigated. The efficiency of the proposed method is evaluated using two data sets, one public and one clinical. Using publicly available data from PhysioBank and simultaneous clinical measurements, we show that comparing the fetal heart rate obtained by the algorithm against the baselines yields a promising accuracy beyond 95%. Finally, we conclude that the proposed algorithm would be a robust technique for any similar tele-fetal monitoring approach. |
1210.4679 | Vitor Hugo Patricio Louzada | Vitor H. P. Louzada, Fabr\'icio M. Lopes, Ronaldo F. Hashimoto | A Monte Carlo Approach to Measure the Robustness of Boolean Networks | on 1st International Workshop on Robustness and Stability of
Biological Systems and Computational Solutions (WRSBS) | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emergence of robustness in biological networks is a paramount feature of
evolving organisms, but a study of this property in vivo, for any level of
representation such as Genetic, Metabolic, or Neuronal Networks, is a very hard
challenge. In the case of Genetic Networks, mathematical models have been used
in this context to provide insights on their robustness, but even in relatively
simple formulations, such as Boolean Networks (BN), it might not be feasible to
compute some measures for large system sizes. We describe in this work a Monte
Carlo approach to calculate the size of the largest basin of attraction of a
BN, which is intrinsically associated with its robustness, that can be used
regardless of the network size. We show the stability of our method through
finite-size analysis and validate it with a full search on small networks.
| [
{
"created": "Wed, 17 Oct 2012 09:17:06 GMT",
"version": "v1"
}
] | 2012-10-18 | [
[
"Louzada",
"Vitor H. P.",
""
],
[
"Lopes",
"Fabrício M.",
""
],
[
"Hashimoto",
"Ronaldo F.",
""
]
] | Emergence of robustness in biological networks is a paramount feature of evolving organisms, but a study of this property in vivo, for any level of representation such as Genetic, Metabolic, or Neuronal Networks, is a very hard challenge. In the case of Genetic Networks, mathematical models have been used in this context to provide insights on their robustness, but even in relatively simple formulations, such as Boolean Networks (BN), it might not be feasible to compute some measures for large system sizes. We describe in this work a Monte Carlo approach to calculate the size of the largest basin of attraction of a BN, which is intrinsically associated with its robustness, that can be used regardless of the network size. We show the stability of our method through finite-size analysis and validate it with a full search on small networks. |
1810.03044 | Casey Bennett | Casey C. Bennett | Artificial Intelligence for Diabetes Case Management: The Intersection
of Physical and Mental Health | arXiv admin note: This version has been removed by arXiv
administrators due to copyright infringement | Informatics in Medicine Unlocked, 2019 | 10.1016/j.imu.2019.100191 | null | q-bio.QM cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diabetes is a major public health problem in the United States, affecting
roughly 30 million people. Diabetes complications, along with the mental health
comorbidities that often co-occur with them, are major drivers of high
healthcare costs, poor outcomes, and reduced treatment adherence in diabetes.
Here, we evaluate in a large state-wide population whether we can use
artificial intelligence (AI) techniques to identify clusters of patient
trajectories within the broader diabetes population in order to create
cost-effective, narrowly-focused case management intervention strategies to
reduce development of complications. This approach combined data from: 1)
claims, 2) case management notes, and 3) social determinants of health from
~300,000 real patients between 2014 and 2016. We categorized complications as
five types: Cardiovascular, Neuropathy, Ophthalmic, Renal, and Other. Modeling
was performed combining a variety of machine learning algorithms, including
supervised classification, unsupervised clustering, natural language processing
of unstructured care notes, and feature engineering. The results showed that we
can predict development of diabetes complications roughly 83.5% of the time
using claims data or social determinants of health data. They also showed we
can reveal meaningful clusters in the patient population related to
complications and mental health that can be used to build a cost-effective
screening program, reducing the number of patients to be screened by 85%. This study
outlines creation of an AI framework to develop protocols to better address
mental health comorbidities that lead to complications development in the
diabetes population. Future work is described that outlines potential lines of
research and the need for better addressing the 'people side' of the equation.
| [
{
"created": "Sat, 6 Oct 2018 19:59:56 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Mar 2019 19:12:44 GMT",
"version": "v2"
},
{
"created": "Fri, 10 May 2019 18:59:06 GMT",
"version": "v3"
}
] | 2019-09-19 | [
[
"Bennett",
"Casey C.",
""
]
] | Diabetes is a major public health problem in the United States, affecting roughly 30 million people. Diabetes complications, along with the mental health comorbidities that often co-occur with them, are major drivers of high healthcare costs, poor outcomes, and reduced treatment adherence in diabetes. Here, we evaluate in a large state-wide population whether we can use artificial intelligence (AI) techniques to identify clusters of patient trajectories within the broader diabetes population in order to create cost-effective, narrowly-focused case management intervention strategies to reduce development of complications. This approach combined data from: 1) claims, 2) case management notes, and 3) social determinants of health from ~300,000 real patients between 2014 and 2016. We categorized complications as five types: Cardiovascular, Neuropathy, Ophthalmic, Renal, and Other. Modeling was performed combining a variety of machine learning algorithms, including supervised classification, unsupervised clustering, natural language processing of unstructured care notes, and feature engineering. The results showed that we can predict development of diabetes complications roughly 83.5% of the time using claims data or social determinants of health data. They also showed we can reveal meaningful clusters in the patient population related to complications and mental health that can be used to build a cost-effective screening program, reducing the number of patients to be screened by 85%. This study outlines creation of an AI framework to develop protocols to better address mental health comorbidities that lead to complications development in the diabetes population. Future work is described that outlines potential lines of research and the need for better addressing the 'people side' of the equation. |
1806.04122 | Marc Howard | Marc W. Howard, Andre Luzardo, and Zoran Tiganj | Evidence accumulation in a Laplace domain decision space | Revised for CBB | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evidence accumulation models of simple decision-making have long assumed that
the brain estimates a scalar decision variable corresponding to the
log-likelihood ratio of the two alternatives. Typical neural implementations of
this algorithmic cognitive model assume that large numbers of neurons are each
noisy exemplars of the scalar decision variable. Here we propose a neural
implementation of the diffusion model in which many neurons construct and
maintain the Laplace transform of the distance to each of the decision bounds.
As in classic findings from brain regions including LIP, the firing rate of
neurons coding for the Laplace transform of net accumulated evidence grows to a
bound during random dot motion tasks. However, rather than noisy exemplars of a
single mean value, this approach makes the novel prediction that firing rates
grow to the bound exponentially; across neurons there should be a distribution
of different rates. A second set of neurons records an approximate inversion of
the Laplace transform; these neurons directly estimate net accumulated
evidence. In analogy to time cells and place cells observed in the hippocampus
and other brain regions, the neurons in this second set have receptive fields
along a "decision axis." This finding is consistent with recent findings from
rodent recordings. This theoretical approach places simple evidence
accumulation models in the same mathematical language as recent proposals for
representing time and space in cognitive models for memory.
| [
{
"created": "Mon, 11 Jun 2018 17:43:56 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Oct 2018 03:11:12 GMT",
"version": "v2"
}
] | 2018-10-24 | [
[
"Howard",
"Marc W.",
""
],
[
"Luzardo",
"Andre",
""
],
[
"Tiganj",
"Zoran",
""
]
] | Evidence accumulation models of simple decision-making have long assumed that the brain estimates a scalar decision variable corresponding to the log-likelihood ratio of the two alternatives. Typical neural implementations of this algorithmic cognitive model assume that large numbers of neurons are each noisy exemplars of the scalar decision variable. Here we propose a neural implementation of the diffusion model in which many neurons construct and maintain the Laplace transform of the distance to each of the decision bounds. As in classic findings from brain regions including LIP, the firing rate of neurons coding for the Laplace transform of net accumulated evidence grows to a bound during random dot motion tasks. However, rather than noisy exemplars of a single mean value, this approach makes the novel prediction that firing rates grow to the bound exponentially; across neurons there should be a distribution of different rates. A second set of neurons records an approximate inversion of the Laplace transform; these neurons directly estimate net accumulated evidence. In analogy to time cells and place cells observed in the hippocampus and other brain regions, the neurons in this second set have receptive fields along a "decision axis." This finding is consistent with recent findings from rodent recordings. This theoretical approach places simple evidence accumulation models in the same mathematical language as recent proposals for representing time and space in cognitive models for memory. |
2302.03137 | Samuel Kim | Soojin Lee, Ingu Sean Lee, Samuel Kim | Predicting Development of Chronic Obstructive Pulmonary Disease and its
Risk Factor Analysis | submitted to EMBC 2023 | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Chronic Obstructive Pulmonary Disease (COPD) is an irreversible airway
obstruction with a high societal burden. Although smoking is known to be the
biggest risk factor, additional components need to be considered. In this
study, we aim to identify COPD risk factors by applying machine learning models
that integrate sociodemographic, clinical, and genetic data to predict COPD
development.
| [
{
"created": "Mon, 6 Feb 2023 21:50:34 GMT",
"version": "v1"
}
] | 2023-02-08 | [
[
"Lee",
"Soojin",
""
],
[
"Lee",
"Ingu Sean",
""
],
[
"Kim",
"Samuel",
""
]
] | Chronic Obstructive Pulmonary Disease (COPD) is an irreversible airway obstruction with a high societal burden. Although smoking is known to be the biggest risk factor, additional components need to be considered. In this study, we aim to identify COPD risk factors by applying machine learning models that integrate sociodemographic, clinical, and genetic data to predict COPD development. |
1811.11258 | Ilya Timofeyev | Sergey S. Sarkisov and Ilya Timofeyev and Robert Azencott | Fitness Estimation for Genetic Evolution of Bacterial Populations | null | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we develop and test algorithmic techniques to estimate
genotype fitnesses by analysis of observed daily frequency data monitoring the
long-term evolution of bacterial populations. In particular, we develop a
non-linear least squares approach to estimate selective advantages of emerging
new mutant strains in locked-box stochastic models describing bacterial genetic
evolution similar to the celebrated Lenski experiment on Escherichia coli. Our
algorithm first analyses emergence of new mutant strains for each individual
trajectory. For each trajectory our analysis is progressive in time, and
successively focuses on the first mutation event before analyzing the second
mutation event. The basic principle applied here is to minimize (for each
trajectory) the mean squared errors of prediction w(t) - W(t) where the
observed white cell frequencies w(t) are predicted by W(t), which is computed
as the conditional expectation of w(t) given the available information at time
(t-1). The pooling of all selective advantages estimates across all
trajectories provides histograms on which we perform a precise peak analysis to
compute final estimates of selective advantages. We validate our approach using
ensembles of simulated trajectories.
| [
{
"created": "Tue, 27 Nov 2018 21:03:18 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Oct 2020 16:45:00 GMT",
"version": "v2"
}
] | 2020-10-05 | [
[
"Sarkisov",
"Sergey S.",
""
],
[
"Timofeyev",
"Ilya",
""
],
[
"Azencott",
"Robert",
""
]
] | In this paper we develop and test algorithmic techniques to estimate genotype fitnesses by analysis of observed daily frequency data monitoring the long-term evolution of bacterial populations. In particular, we develop a non-linear least squares approach to estimate selective advantages of emerging new mutant strains in locked-box stochastic models describing bacterial genetic evolution similar to the celebrated Lenski experiment on Escherichia coli. Our algorithm first analyses emergence of new mutant strains for each individual trajectory. For each trajectory our analysis is progressive in time, and successively focuses on the first mutation event before analyzing the second mutation event. The basic principle applied here is to minimize (for each trajectory) the mean squared errors of prediction w(t) - W(t) where the observed white cell frequencies w(t) are predicted by W(t), which is computed as the conditional expectation of w(t) given the available information at time (t-1). The pooling of all selective advantages estimates across all trajectories provides histograms on which we perform a precise peak analysis to compute final estimates of selective advantages. We validate our approach using ensembles of simulated trajectories. |
2211.04468 | Aryan Pedawi | Aryan Pedawi, Pawel Gniewek, Chaoyi Chang, Brandon M. Anderson, Henry
van den Bedem | An efficient graph generative model for navigating ultra-large
combinatorial synthesis libraries | 36th Conference on Neural Information Processing Systems (NeurIPS
2022) | null | null | null | q-bio.QM cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Virtual, make-on-demand chemical libraries have transformed early-stage drug
discovery by unlocking vast, synthetically accessible regions of chemical
space. Recent years have witnessed rapid growth in these libraries from
millions to trillions of compounds, hiding undiscovered, potent hits for a
variety of therapeutic targets. However, they are quickly approaching a size
beyond that which permits explicit enumeration, presenting new challenges for
virtual screening. To overcome these challenges, we propose the Combinatorial
Synthesis Library Variational Auto-Encoder (CSLVAE). The proposed generative
model represents such libraries as a differentiable, hierarchically-organized
database. Given a compound from the library, the molecular encoder constructs a
query for retrieval, which is utilized by the molecular decoder to reconstruct
the compound by first decoding its chemical reaction and subsequently decoding
its reactants. Our design minimizes autoregression in the decoder, facilitating
the generation of large, valid molecular graphs. Our method performs fast and
parallel batch inference for ultra-large synthesis libraries, enabling a number
of important applications in early-stage drug discovery. Compounds proposed by
our method are guaranteed to be in the library, and thus synthetically and
cost-effectively accessible. Importantly, CSLVAE can encode out-of-library
compounds and search for in-library analogues. In experiments, we demonstrate
the capabilities of the proposed method in the navigation of massive
combinatorial synthesis libraries.
| [
{
"created": "Wed, 19 Oct 2022 15:43:13 GMT",
"version": "v1"
}
] | 2022-11-10 | [
[
"Pedawi",
"Aryan",
""
],
[
"Gniewek",
"Pawel",
""
],
[
"Chang",
"Chaoyi",
""
],
[
"Anderson",
"Brandon M.",
""
],
[
"Bedem",
"Henry van den",
""
]
] | Virtual, make-on-demand chemical libraries have transformed early-stage drug discovery by unlocking vast, synthetically accessible regions of chemical space. Recent years have witnessed rapid growth in these libraries from millions to trillions of compounds, hiding undiscovered, potent hits for a variety of therapeutic targets. However, they are quickly approaching a size beyond that which permits explicit enumeration, presenting new challenges for virtual screening. To overcome these challenges, we propose the Combinatorial Synthesis Library Variational Auto-Encoder (CSLVAE). The proposed generative model represents such libraries as a differentiable, hierarchically-organized database. Given a compound from the library, the molecular encoder constructs a query for retrieval, which is utilized by the molecular decoder to reconstruct the compound by first decoding its chemical reaction and subsequently decoding its reactants. Our design minimizes autoregression in the decoder, facilitating the generation of large, valid molecular graphs. Our method performs fast and parallel batch inference for ultra-large synthesis libraries, enabling a number of important applications in early-stage drug discovery. Compounds proposed by our method are guaranteed to be in the library, and thus synthetically and cost-effectively accessible. Importantly, CSLVAE can encode out-of-library compounds and search for in-library analogues. In experiments, we demonstrate the capabilities of the proposed method in the navigation of massive combinatorial synthesis libraries. |
0902.3654 | Helmut Kroger | Reza Zomorrodi, Helmut Kroger, Igor Timofeev | Modeling thalamocortical cell: impact of Ca2+ channel distribution and
cell geometry on firing pattern | null | Frontiers Comput. Neurosci, Dec. 2008, Vol.2, Article 5 | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The influence of calcium channel distribution and geometry of the
thalamocortical cell upon its tonic firing and the low threshold spike (LTS)
generation was studied in a 3-compartment model, which represents soma,
proximal and distal dendrites, as well as in a multi-compartment model using the
morphology of a real reconstructed neuron. Using a uniform distribution of
Ca2+ channels, we determined the minimal number of low threshold
voltage-activated calcium channels and their permeability required for the
onset of LTS in response to a hyperpolarizing current pulse. In the
3-compartment model, we found that the channel distribution influences the
firing pattern only in the range of 3% below the threshold value of total
T-channel density. In the multi-compartmental model, the LTS could be generated
by only 64% of unequally distributed T-channels compared to the minimal number
of equally distributed T-channels. For a given channel density and injected
current, the tonic firing frequency was found to be inversely proportional to
the size of the cell. However, when the Ca2+ channel density was elevated in
soma or proximal dendrites, then the amplitude of LTS response and burst spike
frequencies were determined by the ratio of total to threshold number of
T-channels in the cell for a specific geometry.
| [
{
"created": "Fri, 20 Feb 2009 20:43:01 GMT",
"version": "v1"
}
] | 2009-02-23 | [
[
"Zomorrodi",
"Reza",
""
],
[
"Kroger",
"Helmut",
""
],
[
"Timofeev",
"Igor",
""
]
] | The influence of calcium channel distribution and geometry of the thalamocortical cell upon its tonic firing and the low threshold spike (LTS) generation was studied in a 3-compartment model, which represents soma, proximal and distal dendrites, as well as in a multi-compartment model using the morphology of a real reconstructed neuron. Using a uniform distribution of Ca2+ channels, we determined the minimal number of low threshold voltage-activated calcium channels and their permeability required for the onset of LTS in response to a hyperpolarizing current pulse. In the 3-compartment model, we found that the channel distribution influences the firing pattern only in the range of 3% below the threshold value of total T-channel density. In the multi-compartmental model, the LTS could be generated by only 64% of unequally distributed T-channels compared to the minimal number of equally distributed T-channels. For a given channel density and injected current, the tonic firing frequency was found to be inversely proportional to the size of the cell. However, when the Ca2+ channel density was elevated in soma or proximal dendrites, then the amplitude of LTS response and burst spike frequencies were determined by the ratio of total to threshold number of T-channels in the cell for a specific geometry. |
2404.11143 | Navve Wasserman | Navve Wasserman, Roman Beliy, Roy Urbach, and Michal Irani | Functional Brain-to-Brain Transformation with No Shared Data | 15 pages, 7 figures, 1 table. In review | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Combining Functional MRI (fMRI) data across different subjects and datasets
is crucial for many neuroscience tasks. Relying solely on shared anatomy for
brain-to-brain mapping is inadequate. Existing functional transformation
methods thus depend on shared stimuli across subjects and fMRI datasets, which
are often unavailable. In this paper, we propose an approach for computing
functional brain-to-brain transformations without any shared data, a feat not
previously achieved in functional transformations. This presents exciting
research prospects for merging and enriching diverse datasets, even when they
involve distinct stimuli that were collected using different fMRI machines of
varying resolutions (e.g., 3-Tesla and 7-Tesla). Our approach combines
brain-to-brain transformation with image-to-fMRI encoders, thus enabling us to
learn functional transformations on stimuli to which subjects were never
exposed. Furthermore, we demonstrate the applicability of our method for
improving image-to-fMRI encoding of subjects scanned on older low-resolution 3T
fMRI datasets, by using a new high-resolution 7T fMRI dataset (scanned on
different subjects and different stimuli).
| [
{
"created": "Wed, 17 Apr 2024 07:39:57 GMT",
"version": "v1"
}
] | 2024-04-18 | [
[
"Wasserman",
"Navve",
""
],
[
"Beliy",
"Roman",
""
],
[
"Urbach",
"Roy",
""
],
[
"Irani",
"Michal",
""
]
] | Combining Functional MRI (fMRI) data across different subjects and datasets is crucial for many neuroscience tasks. Relying solely on shared anatomy for brain-to-brain mapping is inadequate. Existing functional transformation methods thus depend on shared stimuli across subjects and fMRI datasets, which are often unavailable. In this paper, we propose an approach for computing functional brain-to-brain transformations without any shared data, a feat not previously achieved in functional transformations. This presents exciting research prospects for merging and enriching diverse datasets, even when they involve distinct stimuli that were collected using different fMRI machines of varying resolutions (e.g., 3-Tesla and 7-Tesla). Our approach combines brain-to-brain transformation with image-to-fMRI encoders, thus enabling us to learn functional transformations on stimuli to which subjects were never exposed. Furthermore, we demonstrate the applicability of our method for improving image-to-fMRI encoding of subjects scanned on older low-resolution 3T fMRI datasets, by using a new high-resolution 7T fMRI dataset (scanned on different subjects and different stimuli). |
0903.3719 | Zhou Tianshou | Jiajun Zhang, Zhanjiang Yuan, Tianshou Zhou | Cis-Regulatory Modules Drive Dynamic Patterns of a Multicellular System | 4 pages, 3 figures | null | null | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How intracellular and extracellular signals are integrated by transcription
factors is essential for understanding complex cellular patterns at the
population level. In this Letter, by using a synthetic genetic oscillator
coupled to a quorum-sensing apparatus, we propose an experimentally feasible
cis-regulatory module (CRM) which performs four possible logic operations
(ANDN, ORN, NOR and NAND) of input signals. We show both numerically and
theoretically that these different CRMs drive fundamentally different dynamic
patterns, such as synchronization, clustering and splay state.
| [
{
"created": "Sun, 22 Mar 2009 12:57:30 GMT",
"version": "v1"
}
] | 2009-03-24 | [
[
"Zhang",
"Jiajun",
""
],
[
"Yuan",
"Zhanjiang",
""
],
[
"Zhou",
"Tianshou",
""
]
] | How intracellular and extracellular signals are integrated by transcription factors is essential for understanding complex cellular patterns at the population level. In this Letter, by using a synthetic genetic oscillator coupled to a quorum-sensing apparatus, we propose an experimentally feasible cis-regulatory module (CRM) which performs four possible logic operations (ANDN, ORN, NOR and NAND) of input signals. We show both numerically and theoretically that these different CRMs drive fundamentally different dynamic patterns, such as synchronization, clustering and splay state. |
1908.08608 | Xiang Ji | Xiang Ji and Jeffrey L. Thorne | A phylogenetic approach disentangles interlocus gene conversion tract
length and initiation rate | 5 figures, 2 tables | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Interlocus gene conversion (IGC) homogenizes paralogs. Little is known
regarding the mutation events that cause IGC and even less is known about the
IGC mutations that experience fixation. To disentangle the rates of fixed IGC
mutations from the tract lengths of these fixed mutations, we employ a
composite likelihood procedure. We characterize the procedure with simulations.
We apply the procedure to duplicated primate introns and to protein-coding
paralogs from both yeast and primates. Our estimates from protein-coding data
concerning the mean length of fixed IGC tracts were unexpectedly low and are
associated with high degrees of uncertainty. In contrast, our estimates from
the primate intron data had lengths in the general range expected from IGC
mutation studies. While it is challenging to separate the rate at which fixed
IGC mutations initiate from the average number of nucleotide positions that
these IGC events affect, all of our analyses indicate that IGC is responsible
for a substantial proportion of evolutionary change in duplicated regions. Our
results suggest that IGC should be considered whenever the evolution of
multigene families is examined.
| [
{
"created": "Thu, 22 Aug 2019 21:55:11 GMT",
"version": "v1"
}
] | 2019-08-26 | [
[
"Ji",
"Xiang",
""
],
[
"Thorne",
"Jeffrey L.",
""
]
] | Interlocus gene conversion (IGC) homogenizes paralogs. Little is known regarding the mutation events that cause IGC and even less is known about the IGC mutations that experience fixation. To disentangle the rates of fixed IGC mutations from the tract lengths of these fixed mutations, we employ a composite likelihood procedure. We characterize the procedure with simulations. We apply the procedure to duplicated primate introns and to protein-coding paralogs from both yeast and primates. Our estimates from protein-coding data concerning the mean length of fixed IGC tracts were unexpectedly low and are associated with high degrees of uncertainty. In contrast, our estimates from the primate intron data had lengths in the general range expected from IGC mutation studies. While it is challenging to separate the rate at which fixed IGC mutations initiate from the average number of nucleotide positions that these IGC events affect, all of our analyses indicate that IGC is responsible for a substantial proportion of evolutionary change in duplicated regions. Our results suggest that IGC should be considered whenever the evolution of multigene families is examined. |
2007.14065 | Essam Rashed | Sachiko Kodera, Essam A. Rashed, Akimasa Hirata | Correlation between COVID-19 morbidity and mortality rates in Japan and
local population density, temperature and absolute humidity | International Journal of Environmental Research and Public Health,
2020 | null | 10.3390/ijerph17155477 | null | q-bio.PE cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study analyzed the morbidity and mortality rates of the COVID-19
pandemic in different prefectures of Japan. Under the constraint that daily
maximum confirmed deaths and daily maximum cases should exceed 4 and 10,
respectively, 14 prefectures were included, and cofactors affecting the
morbidity and mortality rates were evaluated. In particular, the number of
confirmed deaths was assessed excluding the cases of nosocomial infections and
nursing home patients. A mild correlation was observed between morbidity rate
and population density (R2=0.394). In addition, the percentage of the elderly
per population was also found to be non-negligible. Among weather parameters,
the maximum temperature and absolute humidity averaged over the duration were
found to be in modest correlation with the morbidity and mortality rates,
excluding the cases of nosocomial infections. The lower morbidity and mortality
are observed for higher temperature and absolute humidity. Multivariate
analysis considering these factors showed that determination coefficients for
the spread, decay, and combined stages were 0.708, 0.785, and 0.615,
respectively. These findings could be useful for intervention planning during
future pandemics, including a potential second COVID-19 outbreak.
| [
{
"created": "Tue, 28 Jul 2020 08:41:43 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Jul 2020 01:27:32 GMT",
"version": "v2"
}
] | 2020-08-04 | [
[
"Kodera",
"Sachiko",
""
],
[
"Rashed",
"Essam A.",
""
],
[
"Hirata",
"Akimasa",
""
]
] | This study analyzed the morbidity and mortality rates of the COVID-19 pandemic in different prefectures of Japan. Under the constraint that daily maximum confirmed deaths and daily maximum cases should exceed 4 and 10, respectively, 14 prefectures were included, and cofactors affecting the morbidity and mortality rates were evaluated. In particular, the number of confirmed deaths was assessed excluding the cases of nosocomial infections and nursing home patients. A mild correlation was observed between morbidity rate and population density (R2=0.394). In addition, the percentage of the elderly per population was also found to be non-negligible. Among weather parameters, the maximum temperature and absolute humidity averaged over the duration were found to be in modest correlation with the morbidity and mortality rates, excluding the cases of nosocomial infections. The lower morbidity and mortality are observed for higher temperature and absolute humidity. Multivariate analysis considering these factors showed that determination coefficients for the spread, decay, and combined stages were 0.708, 0.785, and 0.615, respectively. These findings could be useful for intervention planning during future pandemics, including a potential second COVID-19 outbreak. |
1606.08370 | Paul Smolen | Paul Smolen, Yili Zhang and John H. Byrne | The right time to learn: mechanisms and optimization of spaced learning | 34 pages, 5 figures | Nature Reviews Neuroscience 2016 Feb; 17(2):77-88 | 10.1038/nrn.2015.18 | null | q-bio.NC q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For many types of learning, spaced training that involves repeated long
inter-trial intervals (ITIs) leads to more robust memory formation than does
massed training that involves short or no intervals. Several cognitive theories
have been proposed to explain this superiority, but only recently has data
begun to delineate the underlying cellular and molecular mechanisms of spaced
training. We review these theories and data here. Computational models of the
implicated signaling cascades have predicted that spaced training with
irregular ITIs can enhance learning. This strategy of using models to predict
optimal spaced training protocols, combined with pharmacotherapy, suggests
novel ways to rescue impaired synaptic plasticity and learning.
| [
{
"created": "Mon, 27 Jun 2016 17:16:25 GMT",
"version": "v1"
}
] | 2016-06-28 | [
[
"Smolen",
"Paul",
""
],
[
"Zhang",
"Yili",
""
],
[
"Byrne",
"John H.",
""
]
] | For many types of learning, spaced training that involves repeated long inter-trial intervals (ITIs) leads to more robust memory formation than does massed training that involves short or no intervals. Several cognitive theories have been proposed to explain this superiority, but only recently has data begun to delineate the underlying cellular and molecular mechanisms of spaced training. We review these theories and data here. Computational models of the implicated signaling cascades have predicted that spaced training with irregular ITIs can enhance learning. This strategy of using models to predict optimal spaced training protocols, combined with pharmacotherapy, suggests novel ways to rescue impaired synaptic plasticity and learning. |
2004.12767 | Archana Devi | Kavita Jain and Archana Devi | Evolutionary dynamics and eigenspectrum of confluent Heun equation | null | J. Phys. A: Math. Theor. 53 (2020) 395602 | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a biological population evolving under the joint action of
selection, mutation and random genetic drift. The evolutionary dynamics are
described by a one-dimensional Fokker-Planck equation whose eigenfunctions obey
a confluent Heun equation. These eigenfunctions are expanded in an infinite
series of orthogonal Jacobi polynomials and the expansion coefficients are
found to obey a three-term recursion equation. Using scaling ideas, we obtain
an expression for the expansion coefficients and an analytical estimate of the
number of terms required in the series for an accurate determination of the
eigenfunction. The eigenvalue spectrum is studied using a perturbation theory
for weak selection and numerically for strong selection. In the latter case, we
find that the eigenvalue for the first excited state exhibits a sharp
transition: for mutation rate below one, the eigenvalue increases linearly with
increasing mutation rate and then remains a constant; higher eigenvalues are
found to display a more complex behavior.
| [
{
"created": "Mon, 27 Apr 2020 13:16:41 GMT",
"version": "v1"
}
] | 2022-03-22 | [
[
"Jain",
"Kavita",
""
],
[
"Devi",
"Archana",
""
]
] | We consider a biological population evolving under the joint action of selection, mutation and random genetic drift. The evolutionary dynamics are described by a one-dimensional Fokker-Planck equation whose eigenfunctions obey a confluent Heun equation. These eigenfunctions are expanded in an infinite series of orthogonal Jacobi polynomials and the expansion coefficients are found to obey a three-term recursion equation. Using scaling ideas, we obtain an expression for the expansion coefficients and an analytical estimate of the number of terms required in the series for an accurate determination of the eigenfunction. The eigenvalue spectrum is studied using a perturbation theory for weak selection and numerically for strong selection. In the latter case, we find that the eigenvalue for the first excited state exhibits a sharp transition: for mutation rate below one, the eigenvalue increases linearly with increasing mutation rate and then remains a constant; higher eigenvalues are found to display a more complex behavior. |
1104.1234 | Chih Lee | Chih Lee, Chun-Hsi Huang | Negative Example Aided Transcription Factor Binding Site Search | 14 pages, 16 figures | null | null | null | q-bio.GN stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational approaches to transcription factor binding site identification
have been actively researched for the past decade.
Negative examples have long been utilized in de novo motif discovery and have
been shown useful in transcription factor binding site search as well.
However, understanding of the roles of negative examples in binding site
search is still very limited.
We propose the 2-centroid and optimal discriminating vector methods, taking
into account negative examples. Cross-validation results on E. coli
transcription factors show that the proposed methods benefit from negative
examples, outperforming the centroid and position-specific scoring matrix
methods. We further show that our proposed methods perform better than a
state-of-the-art method. We characterize the proposed methods in the context of
the other compared methods and show that, coupled with motif subtype
identification, the proposed methods can be effectively applied to a wide range
of transcription factors. Finally, we argue that the proposed methods are
well-suited for eukaryotic transcription factors as well.
Software tools are available at: http://biogrid.engr.uconn.edu/tfbs_search/.
| [
{
"created": "Thu, 7 Apr 2011 02:31:47 GMT",
"version": "v1"
}
] | 2015-03-19 | [
[
"Lee",
"Chih",
""
],
[
"Huang",
"Chun-Hsi",
""
]
] | Computational approaches to transcription factor binding site identification have been actively researched for the past decade. Negative examples have long been utilized in de novo motif discovery and have been shown useful in transcription factor binding site search as well. However, understanding of the roles of negative examples in binding site search is still very limited. We propose the 2-centroid and optimal discriminating vector methods, taking into account negative examples. Cross-validation results on E. coli transcription factors show that the proposed methods benefit from negative examples, outperforming the centroid and position-specific scoring matrix methods. We further show that our proposed methods perform better than a state-of-the-art method. We characterize the proposed methods in the context of the other compared methods and show that, coupled with motif subtype identification, the proposed methods can be effectively applied to a wide range of transcription factors. Finally, we argue that the proposed methods are well-suited for eukaryotic transcription factors as well. Software tools are available at: http://biogrid.engr.uconn.edu/tfbs_search/. |
2401.02789 | Marek Mutwil | Hilbert Yuen In Lam, Xing Er Ong, Marek Mutwil | Large Language Models in Plant Biology | null | null | null | null | q-bio.GN cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large Language Models (LLMs), such as ChatGPT, have taken the world by storm
and have passed certain forms of the Turing test. However, LLMs are not limited
to human language and can also analyze sequential data, such as DNA, protein, and gene
expression. The resulting foundation models can be repurposed to identify the
complex patterns within the data, resulting in powerful, multi-purpose
prediction tools able to explain cellular systems. This review outlines the
different types of LLMs and showcases their recent uses in biology. Since LLMs
have not yet been embraced by the plant community, we also cover how these
models can be deployed for the plant kingdom.
| [
{
"created": "Fri, 5 Jan 2024 12:59:20 GMT",
"version": "v1"
}
] | 2024-01-08 | [
[
"Lam",
"Hilbert Yuen In",
""
],
[
"Ong",
"Xing Er",
""
],
[
"Mutwil",
"Marek",
""
]
] | Large Language Models (LLMs), such as ChatGPT, have taken the world by storm and have passed certain forms of the Turing test. However, LLMs are not limited to human language and can also analyze sequential data, such as DNA, protein, and gene expression. The resulting foundation models can be repurposed to identify the complex patterns within the data, resulting in powerful, multi-purpose prediction tools able to explain cellular systems. This review outlines the different types of LLMs and showcases their recent uses in biology. Since LLMs have not yet been embraced by the plant community, we also cover how these models can be deployed for the plant kingdom. |
1911.07388 | Armin Najarpour Foroushani | Armin Najarpour Foroushani, Sujaya Neupane, Pablo De Heredia Pastor,
Christopher C. Pack, and Mohamad Sawan | Spatial Resolution of Local Field Potential Signals in Macaque V4 | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A main challenge for the development of cortical visual prostheses is to
spatially localize individual spots of light, called phosphenes, by assigning
appropriate stimulating parameters to implanted electrodes. Imitating the
natural responses to phosphene-like stimuli at different positions can help in
designing a systematic procedure to determine these parameters. The key
characteristic of such a system is the ability to discriminate between
responses to different positions in the visual field. While most previous
prosthetic devices have targeted the primary visual cortex, the extrastriate
cortex has the advantage of covering a large part of the visual field with a
smaller amount of cortical tissue, providing the possibility of a more compact
implant. Here, we studied how well ensembles of Multiunit activity (MUA) and
Local Field Potential (LFP) responses from extrastriate cortical visual area
V4 of a behaving macaque monkey can discriminate between two-dimensional
spatial positions. We found that despite the large receptive field sizes in V4,
the combined responses from multiple sites, whether MUA or LFP, have the
capability for fine and coarse discrimination of positions. We identified a
selection procedure that could significantly increase the discrimination
performance while reducing the required number of electrodes. Analysis of noise
correlation in MUA and LFP responses showed that noise correlations in LFP
responses carry more information about the spatial positions. Overall, these
findings suggest that spatial positions could be localized with patterned
stimulation in extrastriate area V4.
| [
{
"created": "Mon, 18 Nov 2019 01:03:52 GMT",
"version": "v1"
}
] | 2019-11-19 | [
[
"Foroushani",
"Armin Najarpour",
""
],
[
"Neupane",
"Sujaya",
""
],
[
"Pastor",
"Pablo De Heredia",
""
],
[
"Pack",
"Christopher C.",
""
],
[
"Sawan",
"Mohamad",
""
]
] | A main challenge for the development of cortical visual prostheses is to spatially localize individual spots of light, called phosphenes, by assigning appropriate stimulating parameters to implanted electrodes. Imitating the natural responses to phosphene-like stimuli at different positions can help in designing a systematic procedure to determine these parameters. The key characteristic of such a system is the ability to discriminate between responses to different positions in the visual field. While most previous prosthetic devices have targeted the primary visual cortex, the extrastriate cortex has the advantage of covering a large part of the visual field with a smaller amount of cortical tissue, providing the possibility of a more compact implant. Here, we studied how well ensembles of Multiunit activity (MUA) and Local Field Potential (LFP) responses from extrastriate cortical visual area V4 of a behaving macaque monkey can discriminate between two-dimensional spatial positions. We found that despite the large receptive field sizes in V4, the combined responses from multiple sites, whether MUA or LFP, have the capability for fine and coarse discrimination of positions. We identified a selection procedure that could significantly increase the discrimination performance while reducing the required number of electrodes. Analysis of noise correlation in MUA and LFP responses showed that noise correlations in LFP responses carry more information about the spatial positions. Overall, these findings suggest that spatial positions could be localized with patterned stimulation in extrastriate area V4. |
1612.01104 | Partha Dutta | Yogita Sharma and Partha Sharathi Dutta | Regime shifts driven by dynamic correlations in gene expression noise | 14 pages, 14 figures | Phys. Rev. E 96, 022409 (2017) | 10.1103/PhysRevE.96.022409 | null | q-bio.MN nlin.AO q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gene expression is a noisy process that leads to regime shift between
alternative steady states among individual living cells, inducing phenotypic
variability. The effects of white noise on the regime shift in bistable systems
have been well characterized; however, little is known about such effects of
colored noise (noise with non-zero correlation time). Here, we show that noise
correlation time, by considering a genetic circuit of autoactivation, can have
a significant effect on the regime shift in gene expression. We demonstrate this
theoretically, using stochastic potential, stationary probability density
function and first-passage time based on the Fokker-Planck description, where
the Ornstein-Uhlenbeck process is used to model colored noise. We find that
an increase in noise correlation time in the degradation rate can induce a regime
shift from low to high protein concentration state and enhance the bistable
regime, while an increase in noise correlation time in the basal rate retains the
bimodal distribution. We then show how cross-correlated colored noises in basal
and degradation rates can induce regime shifts from low to high protein
concentration state, but reduce the bistable regime. In addition, we show that
early warning indicators can also be used to predict shifts between distinct
phenotypic states in gene expression. Predictions that a cell is about to shift
to a harmful phenotype could improve early therapeutic intervention in complex
human diseases.
| [
{
"created": "Sun, 4 Dec 2016 11:42:44 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Dec 2016 17:12:19 GMT",
"version": "v2"
}
] | 2017-08-23 | [
[
"Sharma",
"Yogita",
""
],
[
"Dutta",
"Partha Sharathi",
""
]
] | Gene expression is a noisy process that leads to regime shift between alternative steady states among individual living cells, inducing phenotypic variability. The effects of white noise on the regime shift in bistable systems have been well characterized; however, little is known about such effects of colored noise (noise with non-zero correlation time). Here, we show that noise correlation time, by considering a genetic circuit of autoactivation, can have a significant effect on the regime shift in gene expression. We demonstrate this theoretically, using stochastic potential, stationary probability density function and first-passage time based on the Fokker-Planck description, where the Ornstein-Uhlenbeck process is used to model colored noise. We find that an increase in noise correlation time in the degradation rate can induce a regime shift from low to high protein concentration state and enhance the bistable regime, while an increase in noise correlation time in the basal rate retains the bimodal distribution. We then show how cross-correlated colored noises in basal and degradation rates can induce regime shifts from low to high protein concentration state, but reduce the bistable regime. In addition, we show that early warning indicators can also be used to predict shifts between distinct phenotypic states in gene expression. Predictions that a cell is about to shift to a harmful phenotype could improve early therapeutic intervention in complex human diseases. |
0812.0160 | Razvan Radulescu M.D. | Razvan Tudor Radulescu | Tumor suppressor and anti-inflammatory protein: an expanded view on
insulin-degrading enzyme (IDE) | 5 pages, 2 figures | null | null | null | q-bio.BM q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In 1994, I conjectured that insulin-degrading enzyme (IDE) acts as an
inhibitor of malignant transformation by degrading insulin and thus preventing
this major growth-stimulatory hormone from binding and thereby inactivating the
retinoblastoma tumor suppressor protein (RB). Ten years later, I discovered
that a carboxyterminal RB amino acid sequence resembles the catalytic center of
IDE. This structural homology raised the possibility that insulin degradation
is a basic mechanism for tumor suppression shared by RB and IDE. Subsequently,
a first immunohistochemical study on the differential expression of human IDE
in normal tissues, primary tumors and their corresponding lymph node metastases
further corroborated the initial conjecture on IDE being an antineoplastic
molecule. In this report, it is shown that IDE harbors ankyrin repeat-like
amino acid sequences through which it might bind and, as a result, antagonize
the pro-inflammatory factor NF-kappaB as well as cyclin-dependent kinases
(CDKs). As equally revealed here, IDE also contains 2 RXL cyclin-binding motifs
which could contribute to its presumed inhibition of CDKs. These new findings
suggest that IDE is potentially able to suppress both inflammation and
oncogenesis by several mechanisms that ultimately ensure RB function.
| [
{
"created": "Sun, 30 Nov 2008 18:30:48 GMT",
"version": "v1"
}
] | 2008-12-02 | [
[
"Radulescu",
"Razvan Tudor",
""
]
] | In 1994, I conjectured that insulin-degrading enzyme (IDE) acts as an inhibitor of malignant transformation by degrading insulin and thus preventing this major growth-stimulatory hormone from binding and thereby inactivating the retinoblastoma tumor suppressor protein (RB). Ten years later, I discovered that a carboxyterminal RB amino acid sequence resembles the catalytic center of IDE. This structural homology raised the possibility that insulin degradation is a basic mechanism for tumor suppression shared by RB and IDE. Subsequently, a first immunohistochemical study on the differential expression of human IDE in normal tissues, primary tumors and their corresponding lymph node metastases further corroborated the initial conjecture on IDE being an antineoplastic molecule. In this report, it is shown that IDE harbors ankyrin repeat-like amino acid sequences through which it might bind and, as a result, antagonize the pro-inflammatory factor NF-kappaB as well as cyclin-dependent kinases (CDKs). As equally revealed here, IDE also contains 2 RXL cyclin-binding motifs which could contribute to its presumed inhibition of CDKs. These new findings suggest that IDE is potentially able to suppress both inflammation and oncogenesis by several mechanisms that ultimately ensure RB function. |
1809.04352 | Guo-Wei Wei | Rundong Zhao, Menglun Wang, Yiying Tong and Guo-Wei Wei | Divide-and-Conquer Strategy for Large-Scale Eulerian Solvent Excluded
Surface | 24 pages, 11 figures | Communications in Information and Systems, 2018 | null | null | q-bio.QM physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Surface generation and visualization are some of the most
important tasks in biomolecular modeling and computation. Eulerian solvent
excluded surface (ESES) software provides analytical solvent excluded surface
(SES) in the Cartesian grid, which is necessary for simulating many
biomolecular electrostatic and ion channel models. However, large biomolecules
and/or fine grid resolutions give rise to excessively large memory requirements
in ESES construction. We introduce an out-of-core and parallel algorithm to
improve the ESES software.
Results: The present approach drastically improves the spatial and temporal
efficiency of ESES. The memory footprint and time complexity are analyzed and
empirically verified through extensive tests with a large collection of
biomolecule examples. Our results show that our algorithm can successfully
reduce memory footprint through a straightforward divide-and-conquer strategy
to perform the calculation of arbitrarily large proteins on a typical commodity
personal computer. On multi-core computers or clusters, our algorithm can
reduce the execution time by parallelizing most of the calculation as disjoint
subproblems. Various comparisons with the state-of-the-art Cartesian grid based
SES calculation were done to validate the present method and show the improved
efficiency. This approach makes ESES a robust software for the construction of
analytical solvent excluded surfaces.
Availability and implementation: http://weilab.math.msu.edu/ESES.
| [
{
"created": "Wed, 12 Sep 2018 10:35:31 GMT",
"version": "v1"
}
] | 2018-09-13 | [
[
"Zhao",
"Rundong",
""
],
[
"Wang",
"Menglun",
""
],
[
"Tong",
"Yiying",
""
],
[
"Wei",
"Guo-Wei",
""
]
] | Motivation: Surface generation and visualization are some of the most important tasks in biomolecular modeling and computation. Eulerian solvent excluded surface (ESES) software provides analytical solvent excluded surface (SES) in the Cartesian grid, which is necessary for simulating many biomolecular electrostatic and ion channel models. However, large biomolecules and/or fine grid resolutions give rise to excessively large memory requirements in ESES construction. We introduce an out-of-core and parallel algorithm to improve the ESES software. Results: The present approach drastically improves the spatial and temporal efficiency of ESES. The memory footprint and time complexity are analyzed and empirically verified through extensive tests with a large collection of biomolecule examples. Our results show that our algorithm can successfully reduce memory footprint through a straightforward divide-and-conquer strategy to perform the calculation of arbitrarily large proteins on a typical commodity personal computer. On multi-core computers or clusters, our algorithm can reduce the execution time by parallelizing most of the calculation as disjoint subproblems. Various comparisons with the state-of-the-art Cartesian grid based SES calculation were done to validate the present method and show the improved efficiency. This approach makes ESES a robust software for the construction of analytical solvent excluded surfaces. Availability and implementation: http://weilab.math.msu.edu/ESES. |
2305.00223 | Steven (Zvi) Lapp | Steven Zvi Lapp, Eli David, Nathan S. Netanyahu | PathRTM: Real-time prediction of KI-67 and tumor-infiltrated lymphocytes | 12 pages, 11 figures | null | null | null | q-bio.QM cs.CV cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce PathRTM, a novel deep neural network detector
based on RTMDet, for automated KI-67 proliferation and tumor-infiltrated
lymphocyte estimation. KI-67 proliferation and tumor-infiltrated lymphocyte
estimation play a crucial role in cancer diagnosis and treatment. PathRTM is an
extension of the PathoNet work, which uses single-pixel keypoints within
each cell. We demonstrate that PathRTM, with higher-level supervision in the
form of bounding box labels generated automatically from the keypoints using
NuClick, can significantly improve KI-67 proliferation and tumor-infiltrated
lymphocyte estimation. Experiments on our custom dataset show that PathRTM
achieves state-of-the-art performance in KI-67 immunopositive, immunonegative,
and lymphocyte detection, with an average precision (AP) of 41.3%. Our results
suggest that PathRTM is a promising approach for accurate KI-67 proliferation
and tumor-infiltrated lymphocyte estimation, offering annotation efficiency,
accurate predictive capabilities, and improved runtime. The method also enables
estimation of cell sizes of interest, which was previously unavailable, through
the bounding box predictions.
| [
{
"created": "Sun, 23 Apr 2023 08:17:26 GMT",
"version": "v1"
}
] | 2023-05-03 | [
[
"Lapp",
"Steven Zvi",
""
],
[
"David",
"Eli",
""
],
[
"Netanyahu",
"Nathan S.",
""
]
] | In this paper, we introduce PathRTM, a novel deep neural network detector based on RTMDet, for automated KI-67 proliferation and tumor-infiltrated lymphocyte estimation. KI-67 proliferation and tumor-infiltrated lymphocyte estimation play a crucial role in cancer diagnosis and treatment. PathRTM is an extension of the PathoNet work, which uses single-pixel keypoints within each cell. We demonstrate that PathRTM, with higher-level supervision in the form of bounding box labels generated automatically from the keypoints using NuClick, can significantly improve KI-67 proliferation and tumor-infiltrated lymphocyte estimation. Experiments on our custom dataset show that PathRTM achieves state-of-the-art performance in KI-67 immunopositive, immunonegative, and lymphocyte detection, with an average precision (AP) of 41.3%. Our results suggest that PathRTM is a promising approach for accurate KI-67 proliferation and tumor-infiltrated lymphocyte estimation, offering annotation efficiency, accurate predictive capabilities, and improved runtime. The method also enables estimation of cell sizes of interest, which was previously unavailable, through the bounding box predictions. |
2003.10514 | Yujiang Wang | Yujiang Wang, Tobias Ludwig, Bethany Little, Joe H Necus, Gavin
Winston, Sjoerd B Vos, Jane de Tisi, John S Duncan, Peter N Taylor, Bruno
Mota | Independent components of human brain morphology | null | null | null | null | q-bio.NC physics.app-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantification of brain morphology has become an important cornerstone in
understanding brain structure. Measures of cortical morphology such as
thickness and surface area are frequently used to compare groups of subjects or
characterise longitudinal changes. However, such measures are often treated as
independent from each other.
A recently described scaling law, derived from a statistical physics model of
cortical folding, demonstrates that there is a tight covariance between three
commonly used cortical morphology measures: cortical thickness, total surface
area, and exposed surface area.
We show that assuming the independence of cortical morphology measures can
hide features and potentially lead to misinterpretations. Using the scaling
law, we account for the covariance between cortical morphology measures and
derive novel independent measures of cortical morphology. By applying these new
measures, we show that new information can be gained; in our example we show
that distinct morphological alterations underlie healthy ageing compared to
temporal lobe epilepsy, even on the coarse level of a whole hemisphere.
We thus provide a conceptual framework for characterising cortical morphology
in a statistically valid and interpretable manner, based on theoretical
reasoning about the shape of the cortex.
| [
{
"created": "Mon, 23 Mar 2020 19:53:48 GMT",
"version": "v1"
}
] | 2020-03-25 | [
[
"Wang",
"Yujiang",
""
],
[
"Ludwig",
"Tobias",
""
],
[
"Little",
"Bethany",
""
],
[
"Necus",
"Joe H",
""
],
[
"Winston",
"Gavin",
""
],
[
"Vos",
"Sjoerd B",
""
],
[
"de Tisi",
"Jane",
""
],
[
"Duncan",
"John S",
""
],
[
"Taylor",
"Peter N",
""
],
[
"Mota",
"Bruno",
""
]
] | Quantification of brain morphology has become an important cornerstone in understanding brain structure. Measures of cortical morphology such as thickness and surface area are frequently used to compare groups of subjects or characterise longitudinal changes. However, such measures are often treated as independent from each other. A recently described scaling law, derived from a statistical physics model of cortical folding, demonstrates that there is a tight covariance between three commonly used cortical morphology measures: cortical thickness, total surface area, and exposed surface area. We show that assuming the independence of cortical morphology measures can hide features and potentially lead to misinterpretations. Using the scaling law, we account for the covariance between cortical morphology measures and derive novel independent measures of cortical morphology. By applying these new measures, we show that new information can be gained; in our example we show that distinct morphological alterations underlie healthy ageing compared to temporal lobe epilepsy, even on the coarse level of a whole hemisphere. We thus provide a conceptual framework for characterising cortical morphology in a statistically valid and interpretable manner, based on theoretical reasoning about the shape of the cortex. |
2401.03571 | Liming Cai | Sixiang Zhang, Aaron J. Yang, and Liming Cai | {\alpha}-HMM: A Graphical Model for RNA Folding | 14 pages, 5 figures, 1 table | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | RNA secondary structure is modeled with the novel arbitrary-order hidden
Markov model ({\alpha}-HMM). The {\alpha}-HMM extends the traditional HMM
with the capability to model stochastic events that may be influenced by
historically distant ones, making it suitable to account for long-range
canonical base pairings between nucleotides, which constitute the RNA secondary
structure. Unlike previous heavy-weight extensions of the HMM, the {\alpha}-HMM
has the flexibility to apply restrictions on how one event may influence
another in stochastic processes, enabling efficient prediction of RNA secondary
structure including pseudoknots.
| [
{
"created": "Sun, 7 Jan 2024 19:43:30 GMT",
"version": "v1"
}
] | 2024-01-09 | [
[
"Zhang",
"Sixiang",
""
],
[
"Yang",
"Aaron J.",
""
],
[
"Cai",
"Liming",
""
]
] | RNA secondary structure is modeled with the novel arbitrary-order hidden Markov model ({\alpha}-HMM). The {\alpha}-HMM extends the traditional HMM with the capability to model stochastic events that may be influenced by historically distant ones, making it suitable to account for long-range canonical base pairings between nucleotides, which constitute the RNA secondary structure. Unlike previous heavy-weight extensions of the HMM, the {\alpha}-HMM has the flexibility to apply restrictions on how one event may influence another in stochastic processes, enabling efficient prediction of RNA secondary structure including pseudoknots. |
1704.02846 | Sael Lee | Jaya Thomas and Lee Sael | Multi-Kernel LS-SVM Based Bio-Clinical Data Integration: Applications to
Ovarian Cancer | 27 pages, 7 figures, extends the work presented in 6th International
Conference on Emerging Databases, accepted for publication in the IJDBM | null | null | null | q-bio.GN q-bio.QM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical research makes it possible to acquire diverse types of data from the
same individual for a particular cancer. Recent studies show that utilizing such
diverse data results in more accurate predictions. The major challenge faced is
how to utilize such diverse data sets in an effective way. In this paper, we
introduce a multiple kernel based pipeline for integrative analysis of
high-throughput molecular data (somatic mutation, copy number alteration, DNA
methylation and mRNA) and clinical data. We apply the pipeline on Ovarian
cancer data from TCGA. After a combined kernel has been generated as the
weighted sum of individual kernels, it is used to stratify patients and predict
clinical outcomes. We examine the survival time, vital status, and neoplasm
cancer status of each subtype to verify how well they cluster. We have also
examined the power of molecular and clinical data in predicting dichotomized
overall survival data and in classifying the tumor grade for the cancer samples.
It was observed that the integration of various data types yields higher
log-rank statistic values. We were also able to predict clinical status with
higher accuracy as compared to using individual data types.
| [
{
"created": "Mon, 10 Apr 2017 13:15:36 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Oct 2017 05:17:06 GMT",
"version": "v2"
}
] | 2017-10-11 | [
[
"Thomas",
"Jaya",
""
],
[
"Sael",
"Lee",
""
]
] | Medical research makes it possible to acquire diverse types of data from the same individual for a particular cancer. Recent studies show that utilizing such diverse data results in more accurate predictions. The major challenge faced is how to utilize such diverse data sets in an effective way. In this paper, we introduce a multiple kernel based pipeline for integrative analysis of high-throughput molecular data (somatic mutation, copy number alteration, DNA methylation and mRNA) and clinical data. We apply the pipeline on Ovarian cancer data from TCGA. After a combined kernel has been generated as the weighted sum of individual kernels, it is used to stratify patients and predict clinical outcomes. We examine the survival time, vital status, and neoplasm cancer status of each subtype to verify how well they cluster. We have also examined the power of molecular and clinical data in predicting dichotomized overall survival data and in classifying the tumor grade for the cancer samples. It was observed that the integration of various data types yields higher log-rank statistic values. We were also able to predict clinical status with higher accuracy as compared to using individual data types. |
1801.00030 | Hossein Babashah | Ehsan Maleki, Hossein Babashah, Somayyeh Koohi, Zahra Kavehvash | High Speed All-optical extended DV-Curve-based DNA sequence alignment
utilizing wavelength and polarization modulation | null | null | null | null | q-bio.QM q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel optical processing approach for exploring genome
sequences built upon optical correlator for global alignment and extended
DV-curve method for local alignment. To overcome the problem of traditional
DV-curve method for presenting an accurate and simplified output, we propose
HAWPOD, built upon the DV-curve method, to analyze genome sequences in five steps:
DNA coding, alignment, noise cancellation, simplification, and modification.
Moreover, all-optical implementation of the HAWPOD method is developed, while
its accuracy is validated through numerical simulations in LUMERICAL FDTD. The
results show that the proposed method is much faster than its electrical
counterparts, such as Basic Local Alignment Search Tools.
| [
{
"created": "Wed, 27 Dec 2017 01:19:25 GMT",
"version": "v1"
}
] | 2018-01-03 | [
[
"Maleki",
"Ehsan",
""
],
[
"Babashah",
"Hossein",
""
],
[
"Koohi",
"Somayyeh",
""
],
[
"Kavehvash",
"Zahra",
""
]
] | This paper presents a novel optical processing approach for exploring genome sequences built upon optical correlator for global alignment and extended DV-curve method for local alignment. To overcome the problem of traditional DV-curve method for presenting an accurate and simplified output, we propose HAWPOD, built upon the DV-curve method, to analyze genome sequences in five steps: DNA coding, alignment, noise cancellation, simplification, and modification. Moreover, all-optical implementation of the HAWPOD method is developed, while its accuracy is validated through numerical simulations in LUMERICAL FDTD. The results show that the proposed method is much faster than its electrical counterparts, such as Basic Local Alignment Search Tools. |
0705.1389 | Yurie Okabe | Yurie Okabe, Yuu Yagi, and Masaki Sasai | Effects of the DNA state fluctuation on single-cell dynamics of
self-regulating gene | 18 pages, 5 figures | null | 10.1063/1.2768353 | null | q-bio.MN q-bio.QM | null | A dynamical mean-field theory is developed to analyze stochastic single-cell
dynamics of gene expression. By explicitly taking account of nonequilibrium and
nonadiabatic features of the DNA state fluctuation, two-time correlation
functions and response functions of single-cell dynamics are derived. The
method is applied to a self-regulating gene to predict a rich variety of
dynamical phenomena such as anomalous increase of relaxation time and
oscillatory decay of correlations. Effective "temperature" defined as the ratio
of the correlation to the response in the protein number is small when the DNA
state change is frequent, while it grows large when the DNA state change is
infrequent, indicating the strong enhancement of noise in the latter case.
| [
{
"created": "Thu, 10 May 2007 05:50:16 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Okabe",
"Yurie",
""
],
[
"Yagi",
"Yuu",
""
],
[
"Sasai",
"Masaki",
""
]
] | A dynamical mean-field theory is developed to analyze stochastic single-cell dynamics of gene expression. By explicitly taking account of nonequilibrium and nonadiabatic features of the DNA state fluctuation, two-time correlation functions and response functions of single-cell dynamics are derived. The method is applied to a self-regulating gene to predict a rich variety of dynamical phenomena such as anomalous increase of relaxation time and oscillatory decay of correlations. Effective "temperature" defined as the ratio of the correlation to the response in the protein number is small when the DNA state change is frequent, while it grows large when the DNA state change is infrequent, indicating the strong enhancement of noise in the latter case. |
0704.3005 | Yasser Roudi | Yasser Roudi, Peter E. Latham | A balanced memory network | Accepted for publications in PLoS Comp. Biol | null | 10.1371/journal.pcbi.0030141 | null | q-bio.NC cond-mat.dis-nn | null | A fundamental problem in neuroscience is understanding how working memory --
the ability to store information at intermediate timescales, like 10s of
seconds -- is implemented in realistic neuronal networks. The most likely
candidate mechanism is the attractor network, and a great deal of effort has
gone toward investigating it theoretically. Yet, despite almost a quarter
century of intense work, attractor networks are not fully understood. In
particular, there are still two unanswered questions. First, how is it that
attractor networks exhibit irregular firing, as is observed experimentally
during working memory tasks? And second, how many memories can be stored under
biologically realistic conditions? Here we answer both questions by studying an
attractor neural network in which inhibition and excitation balance each other.
Using mean field analysis, we derive a three-variable description of attractor
networks. From this description it follows that irregular firing can exist only
if the number of neurons involved in a memory is large. The same mean field
analysis also shows that the number of memories that can be stored in a network
scales with the number of excitatory connections, a result that has been
suggested for simple models but never shown for realistic ones. Both of these
predictions are verified using simulations with large networks of spiking
neurons.
| [
{
"created": "Mon, 23 Apr 2007 13:45:38 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Roudi",
"Yasser",
""
],
[
"Latham",
"Peter E.",
""
]
] | A fundamental problem in neuroscience is understanding how working memory -- the ability to store information at intermediate timescales, like 10s of seconds -- is implemented in realistic neuronal networks. The most likely candidate mechanism is the attractor network, and a great deal of effort has gone toward investigating it theoretically. Yet, despite almost a quarter century of intense work, attractor networks are not fully understood. In particular, there are still two unanswered questions. First, how is it that attractor networks exhibit irregular firing, as is observed experimentally during working memory tasks? And second, how many memories can be stored under biologically realistic conditions? Here we answer both questions by studying an attractor neural network in which inhibition and excitation balance each other. Using mean field analysis, we derive a three-variable description of attractor networks. From this description it follows that irregular firing can exist only if the number of neurons involved in a memory is large. The same mean field analysis also shows that the number of memories that can be stored in a network scales with the number of excitatory connections, a result that has been suggested for simple models but never shown for realistic ones. Both of these predictions are verified using simulations with large networks of spiking neurons. |
1503.07610 | Yanping Liu | Yanping Liu, Erik D. Reichle, Ren Huang | Eye-Movement Control During the Reading of Chinese: An Analysis Using
the Landolt-C Paradigm | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Participants in an eye-movement experiment performed a modified version of
the Landolt-C paradigm (Williams & Pollatsek, 2007) in which they searched for
target squares embedded in linear arrays of spatially contiguous "words" (i.e.,
short sequences of squares having missing segments of variable size and
orientation). Although the distributions of single- and first-of-multiple
fixation locations replicated previous patterns suggesting saccade targeting
(e.g., Yan, Kliegl, Richter, Nuthmann, & Shu, 2010), the distribution of all
forward fixation locations was uniform, suggesting the absence of specific
saccade targets. Furthermore, properties of the "words" (e.g., gap size) also
influenced fixation durations and forward saccade length, suggesting that
on-going processing affects decisions about when and where (i.e., how far) to
move the eyes. The theoretical implications of these results for existing and
future accounts of eye-movement control are discussed.
| [
{
"created": "Thu, 26 Mar 2015 03:44:43 GMT",
"version": "v1"
}
] | 2015-03-27 | [
[
"Liu",
"Yanping",
""
],
[
"Reichle",
"Erik D.",
""
],
[
"Huang",
"Ren",
""
]
] | Participants in an eye-movement experiment performed a modified version of the Landolt-C paradigm (Williams & Pollatsek, 2007) in which they searched for target squares embedded in linear arrays of spatially contiguous "words" (i.e., short sequences of squares having missing segments of variable size and orientation). Although the distributions of single- and first-of-multiple fixation locations replicated previous patterns suggesting saccade targeting (e.g., Yan, Kliegl, Richter, Nuthmann, & Shu, 2010), the distribution of all forward fixation locations was uniform, suggesting the absence of specific saccade targets. Furthermore, properties of the "words" (e.g., gap size) also influenced fixation durations and forward saccade length, suggesting that on-going processing affects decisions about when and where (i.e., how far) to move the eyes. The theoretical implications of these results for existing and future accounts of eye-movement control are discussed. |
2405.14225 | Zhiyuan Liu | Zhiyuan Liu, Yaorui Shi, An Zhang, Sihang Li, Enzhi Zhang, Xiang Wang,
Kenji Kawaguchi, Tat-Seng Chua | ReactXT: Understanding Molecular "Reaction-ship" via
Reaction-Contextualized Molecule-Text Pretraining | ACL 2024 Findings, 9 pages | null | null | null | q-bio.QM cs.CL cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecule-text modeling, which aims to facilitate molecule-relevant tasks with
a textual interface and textual knowledge, is an emerging research direction.
Beyond single molecules, studying reaction-text modeling holds promise for
helping the synthesis of new materials and drugs. However, previous works
mostly neglect reaction-text modeling: they primarily focus on modeling
individual molecule-text pairs or learning chemical reactions without texts in
context. Additionally, one key task of reaction-text modeling -- experimental
procedure prediction -- is less explored due to the absence of an open-source
dataset. The task is to predict step-by-step actions of conducting chemical
experiments and is crucial to automating chemical synthesis. To resolve the
challenges above, we propose a new pretraining method, ReactXT, for
reaction-text modeling, and a new dataset, OpenExp, for experimental procedure
prediction. Specifically, ReactXT features three types of input contexts to
incrementally pretrain LMs. Each of the three input contexts corresponds to a
pretraining task to improve the text-based understanding of either reactions or
single molecules. ReactXT demonstrates consistent improvements in experimental
procedure prediction and molecule captioning and offers competitive results in
retrosynthesis. Our code is available at https://github.com/syr-cn/ReactXT.
| [
{
"created": "Thu, 23 May 2024 06:55:59 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Liu",
"Zhiyuan",
""
],
[
"Shi",
"Yaorui",
""
],
[
"Zhang",
"An",
""
],
[
"Li",
"Sihang",
""
],
[
"Zhang",
"Enzhi",
""
],
[
"Wang",
"Xiang",
""
],
[
"Kawaguchi",
"Kenji",
""
],
[
"Chua",
"Tat-Seng",
""
]
] | Molecule-text modeling, which aims to facilitate molecule-relevant tasks with a textual interface and textual knowledge, is an emerging research direction. Beyond single molecules, studying reaction-text modeling holds promise for helping the synthesis of new materials and drugs. However, previous works mostly neglect reaction-text modeling: they primarily focus on modeling individual molecule-text pairs or learning chemical reactions without texts in context. Additionally, one key task of reaction-text modeling -- experimental procedure prediction -- is less explored due to the absence of an open-source dataset. The task is to predict step-by-step actions of conducting chemical experiments and is crucial to automating chemical synthesis. To resolve the challenges above, we propose a new pretraining method, ReactXT, for reaction-text modeling, and a new dataset, OpenExp, for experimental procedure prediction. Specifically, ReactXT features three types of input contexts to incrementally pretrain LMs. Each of the three input contexts corresponds to a pretraining task to improve the text-based understanding of either reactions or single molecules. ReactXT demonstrates consistent improvements in experimental procedure prediction and molecule captioning and offers competitive results in retrosynthesis. Our code is available at https://github.com/syr-cn/ReactXT. |
1702.05129 | Edgardo Brigatti | E. Brigatti, M. V. Vieira, M. Kajin, P. J. A. L. Almeida, M. A. de
Menezes, and R. Cerqueira | Detecting and modelling delayed density-dependence in abundance time
series of a small mammal (Didelphis aurita) | 8 pages, 5 figures | Sci. Rep. 6, 19553 (2016) | 10.1038/srep19553 | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the population size time series of a Neotropical small mammal with
the intent of detecting and modelling population regulation processes generated
by density-dependent factors and their possible delayed effects. The
application of analysis tools based on principles of statistical generality is
nowadays a common practice for describing these phenomena, but, in general,
they are more capable of generating clear diagnosis rather than granting
valuable modelling. For this reason, in our approach, we detect the principal
temporal structures on the bases of different correlation measures, and from
these results we build an ad-hoc minimalist autoregressive model that
incorporates the main drivers of the dynamics. Surprisingly our model is
capable of reproducing very well the time patterns of the empirical series and,
for the first time, clearly outlines the importance of the time of attaining
sexual maturity as a central temporal scale for the dynamics of this species.
In fact, an important advantage of this analysis scheme is that all the model
parameters are directly biologically interpretable and potentially measurable,
allowing a consistency check between model outputs and independent
measurements.
| [
{
"created": "Thu, 16 Feb 2017 19:46:01 GMT",
"version": "v1"
}
] | 2017-02-20 | [
[
"Brigatti",
"E.",
""
],
[
"Vieira",
"M. V.",
""
],
[
"Kajin",
"M.",
""
],
[
"Almeida",
"P. J. A. L.",
""
],
[
"de Menezes",
"M. A.",
""
],
[
"Cerqueira",
"R.",
""
]
] | We study the population size time series of a Neotropical small mammal with the intent of detecting and modelling population regulation processes generated by density-dependent factors and their possible delayed effects. The application of analysis tools based on principles of statistical generality are nowadays a common practice for describing these phenomena, but, in general, they are more capable of generating clear diagnosis rather than granting valuable modelling. For this reason, in our approach, we detect the principal temporal structures on the bases of different correlation measures, and from these results we build an ad-hoc minimalist autoregressive model that incorporates the main drivers of the dynamics. Surprisingly our model is capable of reproducing very well the time patterns of the empirical series and, for the first time, clearly outlines the importance of the time of attaining sexual maturity as a central temporal scale for the dynamics of this species. In fact, an important advantage of this analysis scheme is that all the model parameters are directly biologically interpretable and potentially measurable, allowing a consistency check between model outputs and independent measurements. |
2007.13813 | Tawan Carvalho | Tawan T. A. Carvalho, Antonio J. Fontenele, Mauricio Girardi-Schappo,
Thais Feliciano, Leandro A. A. Aguiar, Thais P. L. Silva, Nivaldo A. P. de
Vasconcelos, Pedro V. Carelli, and Mauro Copelli | Subsampled directed-percolation models explain scaling relations
experimentally observed in the brain | 15 pages, 9 figures, submitted to Frontiers Neural Circuits | Front. Neural Circuits 14, 83 (2021) | 10.3389/fncir.2020.576727 | null | q-bio.NC cond-mat.dis-nn cond-mat.stat-mech nlin.AO physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Recent experimental results on spike avalanches measured in the
urethane-anesthetized rat cortex have revealed scaling relations that indicate
a phase transition at a specific level of cortical firing rate variability. The
scaling relations point to critical exponents whose values differ from those of
a branching process, which has been the canonical model employed to understand
brain criticality. This suggested that a different model, with a different
phase transition, might be required to explain the data. Here we show that this
is not necessarily the case. By employing two different models belonging to the
same universality class as the branching process (mean-field directed
percolation) and treating the simulation data exactly like experimental data,
we reproduce most of the experimental results. We find that subsampling the
model and adjusting the time bin used to define avalanches (as done with
experimental data) are sufficient ingredients to change the apparent exponents
of the critical point. Moreover, experimental data is only reproduced within a
very narrow range in parameter space around the phase transition.
| [
{
"created": "Mon, 27 Jul 2020 18:59:46 GMT",
"version": "v1"
}
] | 2021-01-18 | [
[
"Carvalho",
"Tawan T. A.",
""
],
[
"Fontenele",
"Antonio J.",
""
],
[
"Girardi-Schappo",
"Mauricio",
""
],
[
"Feliciano",
"Thais",
""
],
[
"Aguiar",
"Leandro A. A.",
""
],
[
"Silva",
"Thais P. L.",
""
],
[
"de Vasconcelos",
"Nivaldo A. P.",
""
],
[
"Carelli",
"Pedro V.",
""
],
[
"Copelli",
"Mauro",
""
]
] | Recent experimental results on spike avalanches measured in the urethane-anesthetized rat cortex have revealed scaling relations that indicate a phase transition at a specific level of cortical firing rate variability. The scaling relations point to critical exponents whose values differ from those of a branching process, which has been the canonical model employed to understand brain criticality. This suggested that a different model, with a different phase transition, might be required to explain the data. Here we show that this is not necessarily the case. By employing two different models belonging to the same universality class as the branching process (mean-field directed percolation) and treating the simulation data exactly like experimental data, we reproduce most of the experimental results. We find that subsampling the model and adjusting the time bin used to define avalanches (as done with experimental data) are sufficient ingredients to change the apparent exponents of the critical point. Moreover, experimental data is only reproduced within a very narrow range in parameter space around the phase transition. |
2108.06684 | Nadav Brandes | Nadav Brandes, Omer Weissbrod, Michal Linial | Open Problems in Human Trait Genetics | null | null | null | null | q-bio.PE q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Genetic studies of human traits have revolutionized our understanding of the
variation between individuals, and opened the door for numerous breakthroughs
in biology, medicine and other scientific fields. And yet, the ultimate promise
of this area of research is still not fully realized. In this review, we
highlight the major open problems that need to be solved to improve our
understanding of the genetic variation underlying human traits, and by
discussing these challenges provide a primer to the field. Our focus is on
concrete analytical problems, both conceptual and technical in nature. We cover
general issues in genetic studies such as population structure, epistasis and
gene-environment interactions, data-related issues such as ethnic diversity and
rare genetic variants, and specific challenges related to heritability
estimates, genetic association studies and polygenic risk scores. We emphasize
the interconnectedness of these open problems and suggest promising avenues to
address them.
| [
{
"created": "Sun, 15 Aug 2021 07:56:49 GMT",
"version": "v1"
}
] | 2021-08-29 | [
[
"Brandes",
"Nadav",
""
],
[
"Weissbrod",
"Omer",
""
],
[
"Linial",
"Michal",
""
]
] | Genetic studies of human traits have revolutionized our understanding of the variation between individuals, and opened the door for numerous breakthroughs in biology, medicine and other scientific fields. And yet, the ultimate promise of this area of research is still not fully realized. In this review, we highlight the major open problems that need to be solved to improve our understanding of the genetic variation underlying human traits, and by discussing these challenges provide a primer to the field. Our focus is on concrete analytical problems, both conceptual and technical in nature. We cover general issues in genetic studies such as population structure, epistasis and gene-environment interactions, data-related issues such as ethnic diversity and rare genetic variants, and specific challenges related to heritability estimates, genetic association studies and polygenic risk scores. We emphasize the interconnectedness of these open problems and suggest promising avenues to address them. |
0901.2867 | Bernhard Mehlig | A. Eriksson, B. Mahjani, and B. Mehlig | Sequential Markov coalescent algorithms for population models with
demographic structure | 10 pages, 7 figures | Theor. Pop. Biol. 76(2), 84 (2009) | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyse sequential Markov coalescent algorithms for populations with
demographic structure: for a bottleneck model, a population-divergence model,
and for a two-island model with migration. The sequential Markov coalescent
method is an approximation to the coalescent suggested by McVean and Cardin,
and Marjoram and Wall. Within this algorithm we compute, for two individuals
randomly sampled from the population, the correlation between times to the most
recent common ancestor and the linkage probability corresponding to two
different loci with recombination rate R between them. We find that the
sequential Markov coalescent method approximates the coalescent well in general
in models with demographic structure. An exception is the case where
individuals are sampled from populations separated by reduced gene flow. In
this situation, the gene-history correlations may be significantly
underestimated. We explain why this is the case.
| [
{
"created": "Mon, 19 Jan 2009 15:26:29 GMT",
"version": "v1"
}
] | 2012-06-13 | [
[
"Eriksson",
"A.",
""
],
[
"Mahjani",
"B.",
""
],
[
"Mehlig",
"B.",
""
]
] | We analyse sequential Markov coalescent algorithms for populations with demographic structure: for a bottleneck model, a population-divergence model, and for a two-island model with migration. The sequential Markov coalescent method is an approximation to the coalescent suggested by McVean and Cardin, and Marjoram and Wall. Within this algorithm we compute, for two individuals randomly sampled from the population, the correlation between times to the most recent common ancestor and the linkage probability corresponding to two different loci with recombination rate R between them. We find that the sequential Markov coalescent method approximates the coalescent well in general in models with demographic structure. An exception is the case where individuals are sampled from populations separated by reduced gene flow. In this situation, the gene-history correlations may be significantly underestimated. We explain why this is the case. |
1702.00101 | Danielle Bassett | Elisabeth A. Karuza, Ari E. Kahn, Sharon L. Thompson-Schill, and
Danielle S. Bassett | Process reveals structure: How a network is traversed mediates
expectations about its architecture | 22 pages, 2 figures, 1 table, plus supplement | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network science has emerged as a powerful tool through which we can study the
higher-order architectural properties of the world around us. How human
learners exploit this information remains an essential question. Here, we focus
on the temporal constraints that govern such a process. Participants viewed a
continuous sequence of images generated by three distinct walks on a modular
network. Walks varied along two critical dimensions: their predictability and
the density with which they sampled from communities of images. Learners
exposed to walks that richly sampled from each community exhibited a sharp
increase in processing time upon entry into a new community. This effect was
eliminated in a highly regular walk that sampled exhaustively from images in
short, successive cycles (i.e., that increasingly minimized uncertainty about
the nature of upcoming stimuli). These results demonstrate that temporal
organization plays an essential role in how robustly knowledge of network
architecture is acquired.
| [
{
"created": "Wed, 1 Feb 2017 01:33:43 GMT",
"version": "v1"
}
] | 2017-02-02 | [
[
"Karuza",
"Elisabeth A.",
""
],
[
"Kahn",
"Ari E.",
""
],
[
"Thompson-Schill",
"Sharon L.",
""
],
[
"Bassett",
"Danielle S.",
""
]
] | Network science has emerged as a powerful tool through which we can study the higher-order architectural properties of the world around us. How human learners exploit this information remains an essential question. Here, we focus on the temporal constraints that govern such a process. Participants viewed a continuous sequence of images generated by three distinct walks on a modular network. Walks varied along two critical dimensions: their predictability and the density with which they sampled from communities of images. Learners exposed to walks that richly sampled from each community exhibited a sharp increase in processing time upon entry into a new community. This effect was eliminated in a highly regular walk that sampled exhaustively from images in short, successive cycles (i.e., that increasingly minimized uncertainty about the nature of upcoming stimuli). These results demonstrate that temporal organization plays an essential role in how robustly knowledge of network architecture is acquired. |
2112.04013 | Usman Mahmood | Usman Mahmood, Zening Fu, Vince Calhoun, Sergey Plis | A deep learning model for data-driven discovery of functional
connectivity | Accepted at Algorithms 2021, 14(3), 75 | Algorithms 2021, 14(3), 75 | 10.3390/a14030075 | null | q-bio.NC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Functional connectivity (FC) studies have demonstrated the overarching value
of studying the brain and its disorders through the undirected weighted graph
of fMRI correlation matrix. Most of the work with the FC, however, depends on
the way the connectivity is computed, and further depends on the manual
post-hoc analysis of the FC matrices. In this work we propose a deep learning
architecture BrainGNN that learns the connectivity structure as part of
learning to classify subjects. It simultaneously applies a graphical neural
network to this learned graph and learns to select a sparse subset of brain
regions important to the prediction task. We demonstrate the model's
state-of-the-art classification performance on a schizophrenia fMRI dataset and
demonstrate how introspection leads to disorder relevant findings. The graphs
learned by the model exhibit strong class discrimination and the sparse subset
of relevant regions are consistent with the schizophrenia literature.
| [
{
"created": "Tue, 7 Dec 2021 21:57:32 GMT",
"version": "v1"
}
] | 2021-12-09 | [
[
"Mahmood",
"Usman",
""
],
[
"Fu",
"Zening",
""
],
[
"Calhoun",
"Vince",
""
],
[
"Plis",
"Sergey",
""
]
] | Functional connectivity (FC) studies have demonstrated the overarching value of studying the brain and its disorders through the undirected weighted graph of fMRI correlation matrix. Most of the work with the FC, however, depends on the way the connectivity is computed, and further depends on the manual post-hoc analysis of the FC matrices. In this work we propose a deep learning architecture BrainGNN that learns the connectivity structure as part of learning to classify subjects. It simultaneously applies a graphical neural network to this learned graph and learns to select a sparse subset of brain regions important to the prediction task. We demonstrate the model's state-of-the-art classification performance on a schizophrenia fMRI dataset and demonstrate how introspection leads to disorder relevant findings. The graphs learned by the model exhibit strong class discrimination and the sparse subset of relevant regions are consistent with the schizophrenia literature. |