id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2401.00037 | Liang Huang | Ning Dai, Wei Yu Tang, Tianshuo Zhou, David H. Mathews, Liang Huang | Messenger RNA Design via Expected Partition Function and Continuous
Optimization | null | null | null | null | q-bio.BM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The tasks of designing RNAs are discrete optimization problems, and several
versions of these problems are NP-hard. As an alternative to commonly used
local search methods, we formulate these problems as continuous optimization
and develop a general framework for this optimization based on a generalization
of classical partition function which we call "expected partition function".
The basic idea is to start with a distribution over all possible candidate
sequences, and extend the objective function from a sequence to a distribution.
We then use gradient descent-based optimization methods to improve the extended
objective function, and the distribution will gradually shrink towards a
one-hot sequence (i.e., a single sequence). As a case study, we consider the
important problem of mRNA design with wide applications in vaccines and
therapeutics. While the recent work of LinearDesign can efficiently optimize
mRNAs for minimum free energy (MFE), optimizing for ensemble free energy is
much harder and likely intractable. Our approach can consistently improve over
the LinearDesign solution in terms of ensemble free energy, with bigger
improvements on longer sequences.
| [
{
"created": "Fri, 29 Dec 2023 18:37:38 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Mar 2024 18:01:10 GMT",
"version": "v2"
}
] | 2024-03-04 | [
[
"Dai",
"Ning",
""
],
[
"Tang",
"Wei Yu",
""
],
[
"Zhou",
"Tianshuo",
""
],
[
"Mathews",
"David H.",
""
],
[
"Huang",
"Liang",
""
]
] | The tasks of designing RNAs are discrete optimization problems, and several versions of these problems are NP-hard. As an alternative to commonly used local search methods, we formulate these problems as continuous optimization and develop a general framework for this optimization based on a generalization of classical partition function which we call "expected partition function". The basic idea is to start with a distribution over all possible candidate sequences, and extend the objective function from a sequence to a distribution. We then use gradient descent-based optimization methods to improve the extended objective function, and the distribution will gradually shrink towards a one-hot sequence (i.e., a single sequence). As a case study, we consider the important problem of mRNA design with wide applications in vaccines and therapeutics. While the recent work of LinearDesign can efficiently optimize mRNAs for minimum free energy (MFE), optimizing for ensemble free energy is much harder and likely intractable. Our approach can consistently improve over the LinearDesign solution in terms of ensemble free energy, with bigger improvements on longer sequences. |
q-bio/0610057 | Eben Kenah | Eben Kenah, James M. Robins | Second look at the spread of epidemics on networks | 29 pages, 5 figures | Physical Review E 76: 036113, September 2007 | 10.1103/PhysRevE.76.036113 | null | q-bio.QM cond-mat.stat-mech math.PR | null | In an important paper, M.E.J. Newman claimed that a general network-based
stochastic Susceptible-Infectious-Removed (SIR) epidemic model is isomorphic to
a bond percolation model, where the bonds are the edges of the contact network
and the bond occupation probability is equal to the marginal probability of
transmission from an infected node to a susceptible neighbor. In this paper, we
show that this isomorphism is incorrect and define a semi-directed random
network we call the epidemic percolation network that is exactly isomorphic to
the SIR epidemic model in any finite population. In the limit of a large
population, (i) the distribution of (self-limited) outbreak sizes is identical
to the size distribution of (small) out-components, (ii) the epidemic threshold
corresponds to the phase transition where a giant strongly-connected component
appears, (iii) the probability of a large epidemic is equal to the probability
that an initial infection occurs in the giant in-component, and (iv) the
relative final size of an epidemic is equal to the proportion of the network
contained in the giant out-component. For the SIR model considered by Newman,
we show that the epidemic percolation network predicts the same mean outbreak
size below the epidemic threshold, the same epidemic threshold, and the same
final size of an epidemic as the bond percolation model. However, the bond
percolation model fails to predict the correct outbreak size distribution and
probability of an epidemic when there is a nondegenerate infectious period
distribution. We confirm our findings by comparing predictions from percolation
networks and bond percolation models to the results of simulations. In an
appendix, we show that an isomorphism to an epidemic percolation network can be
defined for any time-homogeneous stochastic SIR model.
| [
{
"created": "Mon, 30 Oct 2006 16:57:04 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Feb 2007 04:51:45 GMT",
"version": "v2"
},
{
"created": "Thu, 29 Mar 2007 16:25:27 GMT",
"version": "v3"
},
{
"created": "Fri, 20 Jul 2007 20:38:13 GMT",
"version": "v4"
},
{
"cr... | 2023-10-24 | [
[
"Kenah",
"Eben",
""
],
[
"Robins",
"James M.",
""
]
] | In an important paper, M.E.J. Newman claimed that a general network-based stochastic Susceptible-Infectious-Removed (SIR) epidemic model is isomorphic to a bond percolation model, where the bonds are the edges of the contact network and the bond occupation probability is equal to the marginal probability of transmission from an infected node to a susceptible neighbor. In this paper, we show that this isomorphism is incorrect and define a semi-directed random network we call the epidemic percolation network that is exactly isomorphic to the SIR epidemic model in any finite population. In the limit of a large population, (i) the distribution of (self-limited) outbreak sizes is identical to the size distribution of (small) out-components, (ii) the epidemic threshold corresponds to the phase transition where a giant strongly-connected component appears, (iii) the probability of a large epidemic is equal to the probability that an initial infection occurs in the giant in-component, and (iv) the relative final size of an epidemic is equal to the proportion of the network contained in the giant out-component. For the SIR model considered by Newman, we show that the epidemic percolation network predicts the same mean outbreak size below the epidemic threshold, the same epidemic threshold, and the same final size of an epidemic as the bond percolation model. However, the bond percolation model fails to predict the correct outbreak size distribution and probability of an epidemic when there is a nondegenerate infectious period distribution. We confirm our findings by comparing predictions from percolation networks and bond percolation models to the results of simulations. In an appendix, we show that an isomorphism to an epidemic percolation network can be defined for any time-homogeneous stochastic SIR model. |
1801.01146 | David Baum | David A. Baum | The origin and early evolution of life in chemical complexity space | 23 pages, 7 figures, planned submission for publication | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Life can be viewed as a localized chemical system that sits on, or in the
basin of attraction of, a metastable dynamical attractor state that remains out
of equilibrium with the environment. Such a view of life allows that new living
states can arise through chance changes in local chemical concentration
(=mutations) that move points in space into the basin of attraction of a life
state - the attractor being an autocatalytic sets whose essential (=keystone)
species are produced at a higher rate than they are lost to the environment by
diffusion, such that growth in expected. This conception of life yields several
new insights and conjectures. (1) This framework suggests that the first new
life states to arise are likely at interfaces where the rate of diffusion of
keystone species is tied to a low-diffusion regime, while precursors and waste
products diffuse at a higher rate. (2) There are reasons to expect that once
the first life state arises, most likely on a mineral surface, additional
mutations will generate derived life states with which the original state will
compete. (3) I propose that in the resulting adaptive process there is a
general tendency for higher complexity life states (i.e., ones that are further
from being at equilibrium with the environment) to dominate a given mineral
surface. (4) The framework suggests a simple and predictable path by which
cells evolve and provides pointers on why such cells are likely to acquire
particulate inheritance. Overall, the dynamical systems theoretical framework
developed provides an integrated view of the origin and early evolution of life
and supports novel empirical approaches.
| [
{
"created": "Wed, 3 Jan 2018 19:19:47 GMT",
"version": "v1"
}
] | 2018-01-08 | [
[
"Baum",
"David A.",
""
]
] | Life can be viewed as a localized chemical system that sits on, or in the basin of attraction of, a metastable dynamical attractor state that remains out of equilibrium with the environment. Such a view of life allows that new living states can arise through chance changes in local chemical concentration (=mutations) that move points in space into the basin of attraction of a life state - the attractor being an autocatalytic sets whose essential (=keystone) species are produced at a higher rate than they are lost to the environment by diffusion, such that growth in expected. This conception of life yields several new insights and conjectures. (1) This framework suggests that the first new life states to arise are likely at interfaces where the rate of diffusion of keystone species is tied to a low-diffusion regime, while precursors and waste products diffuse at a higher rate. (2) There are reasons to expect that once the first life state arises, most likely on a mineral surface, additional mutations will generate derived life states with which the original state will compete. (3) I propose that in the resulting adaptive process there is a general tendency for higher complexity life states (i.e., ones that are further from being at equilibrium with the environment) to dominate a given mineral surface. (4) The framework suggests a simple and predictable path by which cells evolve and provides pointers on why such cells are likely to acquire particulate inheritance. Overall, the dynamical systems theoretical framework developed provides an integrated view of the origin and early evolution of life and supports novel empirical approaches. |
1210.2147 | Dal Young Kim Prof. | Seok-Hee Kim and Dal-Young Kim | Differences in the Brain Waves of 3D and 2.5D Motion Picture Viewers | 10 pages, 1 figure, and 2 tables | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We measured brain waves of viewers watching the 2D, 2.5D, and 3D motion
pictures, comparing them with one another. The relative intensity of
{\alpha}-frequency band of 2.5D-viewer was lower than that of 2D-viewer, while
that of 3D-viewer remained with similar intensity. This result implies visual
neuro-processing of the 2.5D-viewer differs from that of the 3D-viewer.
| [
{
"created": "Mon, 8 Oct 2012 05:20:22 GMT",
"version": "v1"
}
] | 2012-10-09 | [
[
"Kim",
"Seok-Hee",
""
],
[
"Kim",
"Dal-Young",
""
]
] | We measured brain waves of viewers watching the 2D, 2.5D, and 3D motion pictures, comparing them with one another. The relative intensity of {\alpha}-frequency band of 2.5D-viewer was lower than that of 2D-viewer, while that of 3D-viewer remained with similar intensity. This result implies visual neuro-processing of the 2.5D-viewer differs from that of the 3D-viewer. |
1706.01757 | Max Losch | H. Steven Scholte, Max M. Losch, Kandan Ramakrishnan, Edward H.F. de
Haan, Sander M. Bohte | Visual pathways from the perspective of cost functions and multi-task
deep neural networks | 16 pages, 5 figures | null | 10.1016/j.cortex.2017.09.019 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision research has been shaped by the seminal insight that we can understand
the higher-tier visual cortex from the perspective of multiple functional
pathways with different goals. In this paper, we try to give a computational
account of the functional organization of this system by reasoning from the
perspective of multi-task deep neural networks. Machine learning has shown that
tasks become easier to solve when they are decomposed into subtasks with their
own cost function. We hypothesize that the visual system optimizes multiple
cost functions of unrelated tasks and this causes the emergence of a ventral
pathway dedicated to vision for perception, and a dorsal pathway dedicated to
vision for action. To evaluate the functional organization in multi-task deep
neural networks, we propose a method that measures the contribution of a unit
towards each task, applying it to two networks that have been trained on either
two related or two unrelated tasks, using an identical stimulus set. Results
show that the network trained on the unrelated tasks shows a decreasing degree
of feature representation sharing towards higher-tier layers while the network
trained on related tasks uniformly shows high degree of sharing. We conjecture
that the method we propose can be used to analyze the anatomical and functional
organization of the visual system and beyond. We predict that the degree to
which tasks are related is a good descriptor of the degree to which they share
downstream cortical-units.
| [
{
"created": "Tue, 6 Jun 2017 13:36:31 GMT",
"version": "v1"
},
{
"created": "Sat, 16 Sep 2017 10:37:24 GMT",
"version": "v2"
}
] | 2017-10-16 | [
[
"Scholte",
"H. Steven",
""
],
[
"Losch",
"Max M.",
""
],
[
"Ramakrishnan",
"Kandan",
""
],
[
"de Haan",
"Edward H. F.",
""
],
[
"Bohte",
"Sander M.",
""
]
] | Vision research has been shaped by the seminal insight that we can understand the higher-tier visual cortex from the perspective of multiple functional pathways with different goals. In this paper, we try to give a computational account of the functional organization of this system by reasoning from the perspective of multi-task deep neural networks. Machine learning has shown that tasks become easier to solve when they are decomposed into subtasks with their own cost function. We hypothesize that the visual system optimizes multiple cost functions of unrelated tasks and this causes the emergence of a ventral pathway dedicated to vision for perception, and a dorsal pathway dedicated to vision for action. To evaluate the functional organization in multi-task deep neural networks, we propose a method that measures the contribution of a unit towards each task, applying it to two networks that have been trained on either two related or two unrelated tasks, using an identical stimulus set. Results show that the network trained on the unrelated tasks shows a decreasing degree of feature representation sharing towards higher-tier layers while the network trained on related tasks uniformly shows high degree of sharing. We conjecture that the method we propose can be used to analyze the anatomical and functional organization of the visual system and beyond. We predict that the degree to which tasks are related is a good descriptor of the degree to which they share downstream cortical-units. |
1811.01793 | William Waites | William Waites, Goksel Misirli, Matteo Cavaliere, Vincent Danos, Anil
Wipat | Compiling Combinatorial Genetic Circuits with Semantic Inference | null | null | 10.1021/acssynbio.8b00201 | null | q-bio.QM cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A central strategy of synthetic biology is to understand the basic processes
of living creatures through engineering organisms using the same building
blocks. Biological machines described in terms of parts can be studied by
computer simulation in any of several languages or robotically assembled in
vitro. In this paper we present a language, the Genetic Circuit Description
Language (GCDL) and a compiler, the Genetic Circuit Compiler (GCC). This
language describes genetic circuits at a level of granularity appropriate both
for automated assembly in the laboratory and deriving simulation code. The GCDL
follows Semantic Web practice and the compiler makes novel use of the logical
inference facilities that are therefore available. We present the GCDL and
compiler structure as a study of a tool for generating $\kappa$-language
simulations from semantic descriptions of genetic circuits.
| [
{
"created": "Mon, 5 Nov 2018 15:31:17 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Nov 2018 11:40:13 GMT",
"version": "v2"
}
] | 2020-06-24 | [
[
"Waites",
"William",
""
],
[
"Misirli",
"Goksel",
""
],
[
"Cavaliere",
"Matteo",
""
],
[
"Danos",
"Vincent",
""
],
[
"Wipat",
"Anil",
""
]
] | A central strategy of synthetic biology is to understand the basic processes of living creatures through engineering organisms using the same building blocks. Biological machines described in terms of parts can be studied by computer simulation in any of several languages or robotically assembled in vitro. In this paper we present a language, the Genetic Circuit Description Language (GCDL) and a compiler, the Genetic Circuit Compiler (GCC). This language describes genetic circuits at a level of granularity appropriate both for automated assembly in the laboratory and deriving simulation code. The GCDL follows Semantic Web practice and the compiler makes novel use of the logical inference facilities that are therefore available. We present the GCDL and compiler structure as a study of a tool for generating $\kappa$-language simulations from semantic descriptions of genetic circuits. |
1401.1790 | Padmini Rangamani | Padmini Rangamani, Kranthi Kiran Mandadapu and George Oster | Protein-induced membrane curvature changes membrane tension | 15 pages, 5 figures | null | 10.1016/j.bpj.2014.06.010 | null | q-bio.CB cond-mat.soft cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adsorption of proteins onto membranes can alter the local membrane curvature.
This phenomenon has been observed in biological processes such as endocytosis,
tubulation and vesiculation. However, it is not clear how the local surface
properties of the membrane, such as membrane tension, change in response to
protein adsorption. In this paper, we show that the classical elastic model of
lipid membranes cannot account for simultaneous changes in shape and membrane
tension due to protein adsorption in a local region, and a viscous-elastic
formulation is necessary to fully describe the system. Therefore, we develop a
viscous-elastic model for inhomogeneous membranes of the Helfrich type. Using
the new viscous-elastic model, we find that the lipids flow to accommodate
changes in membrane curvature during protein adsorption. We show that, at the
end of protein adsorption process, the system sustains a residual local tension
to balance the difference between the actual mean curvature and the imposed
spontaneous curvatures. This change in membrane tension can have a functional
impact in many biological phenomena where proteins interact with membranes.
| [
{
"created": "Wed, 8 Jan 2014 19:49:38 GMT",
"version": "v1"
}
] | 2016-04-28 | [
[
"Rangamani",
"Padmini",
""
],
[
"Mandadapu",
"Kranthi Kiran",
""
],
[
"Oster",
"George",
""
]
] | Adsorption of proteins onto membranes can alter the local membrane curvature. This phenomenon has been observed in biological processes such as endocytosis, tubulation and vesiculation. However, it is not clear how the local surface properties of the membrane, such as membrane tension, change in response to protein adsorption. In this paper, we show that the classical elastic model of lipid membranes cannot account for simultaneous changes in shape and membrane tension due to protein adsorption in a local region, and a viscous-elastic formulation is necessary to fully describe the system. Therefore, we develop a viscous-elastic model for inhomogeneous membranes of the Helfrich type. Using the new viscous-elastic model, we find that the lipids flow to accommodate changes in membrane curvature during protein adsorption. We show that, at the end of protein adsorption process, the system sustains a residual local tension to balance the difference between the actual mean curvature and the imposed spontaneous curvatures. This change in membrane tension can have a functional impact in many biological phenomena where proteins interact with membranes. |
1509.06511 | Giovanni Bussi | Petr Stadlbauer, Petra K\"uhrov\'a, Pavel Ban\'a\v{s}, Jaroslav
Ko\v{c}a, Giovanni Bussi, Luk\'a\v{s} Trant\'irek, Michal Otyepka, Ji\v{r}\'i
\v{S}poner | Hairpins Participating in Folding of Human Telomeric Sequence
Quadruplexes Studied by Standard and T-REMD Simulations | This article has been accepted for publication in Nucleic Acids
Research Published by Oxford University Press | Nucleic Acids Res. 43, 9626 (2015) | 10.1093/nar/gkv994 | null | q-bio.BM physics.bio-ph physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DNA G-hairpins are potential key structures participating in folding of human
telomeric guanine quadruplexes (GQ). We examined their properties by standard
MD simulations starting from the folded state and long T-REMD starting from the
unfolded state, accumulating ~130 {\mu}s of atomistic simulations. Antiparallel
G-hairpins should spontaneously form in all stages of the folding to support
lateral and diagonal loops, with sub-{\mu}s scale rearrangements between them.
We found no clear predisposition for direct folding into specific GQ topologies
with specific syn/anti patterns. Our key prediction stemming from the T-REMD is
that an ideal unfolded ensemble of the full GQ sequence populates all 4096
syn/anti combinations of its four G-stretches. The simulations can propose
idealized folding pathways but we explain that such few-state pathways may be
misleading. In the context of the available experimental data, the simulations
strongly suggest that the GQ folding could be best understood by the kinetic
partitioning mechanism with a set of deep competing minima on the folding
landscape, with only a small fraction of molecules directly folding to the
native fold. The landscape should further include nonspecific collapse
processes where the molecules move via diffusion and consecutive random rare
transitions, which could, e.g., structure the propeller loops.
| [
{
"created": "Tue, 22 Sep 2015 08:54:38 GMT",
"version": "v1"
}
] | 2016-01-06 | [
[
"Stadlbauer",
"Petr",
""
],
[
"Kührová",
"Petra",
""
],
[
"Banáš",
"Pavel",
""
],
[
"Koča",
"Jaroslav",
""
],
[
"Bussi",
"Giovanni",
""
],
[
"Trantírek",
"Lukáš",
""
],
[
"Otyepka",
"Michal",
""
],
[
... | DNA G-hairpins are potential key structures participating in folding of human telomeric guanine quadruplexes (GQ). We examined their properties by standard MD simulations starting from the folded state and long T-REMD starting from the unfolded state, accumulating ~130 {\mu}s of atomistic simulations. Antiparallel G-hairpins should spontaneously form in all stages of the folding to support lateral and diagonal loops, with sub-{\mu}s scale rearrangements between them. We found no clear predisposition for direct folding into specific GQ topologies with specific syn/anti patterns. Our key prediction stemming from the T-REMD is that an ideal unfolded ensemble of the full GQ sequence populates all 4096 syn/anti combinations of its four G-stretches. The simulations can propose idealized folding pathways but we explain that such few-state pathways may be misleading. In the context of the available experimental data, the simulations strongly suggest that the GQ folding could be best understood by the kinetic partitioning mechanism with a set of deep competing minima on the folding landscape, with only a small fraction of molecules directly folding to the native fold. The landscape should further include nonspecific collapse processes where the molecules move via diffusion and consecutive random rare transitions, which could, e.g., structure the propeller loops. |
1703.07584 | Mariya Ptashnyk | Henry R. Allen and Mariya Ptashnyk | Mathematical Modelling and Analysis of the Brassinosteroid and
Gibberellin Signalling Pathways and their Interactions | Journal of Theoretical Biology 2017 | null | null | null | q-bio.SC math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The plant hormones brassinosteroid (BR) and gibberellin (GA) have important
roles in a wide range of processes involved in plant growth and development. In
this paper we derive and analyse new mathematical models for the BR signalling
pathway and for the crosstalk between the BR and GA signalling pathways. To
analyse the effects of spatial heterogeneity of the signalling processes, along
with spatially-homogeneous ODE models we derive coupled PDE-ODE systems
modelling the temporal and spatial dynamics of molecules involved in the
signalling pathways. The values of the parameters in the model for the BR
signalling pathway are determined using experimental data on the gene
expression of BR biosynthetic enzymes. The stability of steady state solutions
of our mathematical model, shown for a wide range of parameters, can be related
to the BR homeostasis. Our results for the crosstalk model suggest that the
interaction between transcription factors BZR and DELLA exerts more influence
on the dynamics of the signalling pathways than BZR-mediated biosynthesis of
GA, suggesting that the interaction between transcription factors may
constitute the principal mechanism of the crosstalk between the BR and GA
signalling pathways. In general, perturbations in the GA signalling pathway
have larger effects on the dynamics of components of the BR signalling pathway
than perturbations in the BR signalling pathway on the dynamics of the GA
pathway. The perturbation in the crosstalk mechanism also has a larger effect
on the dynamics of the BR pathway than of the GA pathway. Large changes in the
dynamics of the GA signalling processes can be observed only in the case where
there are disturbances in both the BR signalling pathway and the crosstalk
mechanism.
| [
{
"created": "Wed, 22 Mar 2017 09:54:36 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Aug 2017 16:57:29 GMT",
"version": "v2"
}
] | 2017-08-22 | [
[
"Allen",
"Henry R.",
""
],
[
"Ptashnyk",
"Mariya",
""
]
] | The plant hormones brassinosteroid (BR) and gibberellin (GA) have important roles in a wide range of processes involved in plant growth and development. In this paper we derive and analyse new mathematical models for the BR signalling pathway and for the crosstalk between the BR and GA signalling pathways. To analyse the effects of spatial heterogeneity of the signalling processes, along with spatially-homogeneous ODE models we derive coupled PDE-ODE systems modelling the temporal and spatial dynamics of molecules involved in the signalling pathways. The values of the parameters in the model for the BR signalling pathway are determined using experimental data on the gene expression of BR biosynthetic enzymes. The stability of steady state solutions of our mathematical model, shown for a wide range of parameters, can be related to the BR homeostasis. Our results for the crosstalk model suggest that the interaction between transcription factors BZR and DELLA exerts more influence on the dynamics of the signalling pathways than BZR-mediated biosynthesis of GA, suggesting that the interaction between transcription factors may constitute the principal mechanism of the crosstalk between the BR and GA signalling pathways. In general, perturbations in the GA signalling pathway have larger effects on the dynamics of components of the BR signalling pathway than perturbations in the BR signalling pathway on the dynamics of the GA pathway. The perturbation in the crosstalk mechanism also has a larger effect on the dynamics of the BR pathway than of the GA pathway. Large changes in the dynamics of the GA signalling processes can be observed only in the case where there are disturbances in both the BR signalling pathway and the crosstalk mechanism. |
1303.6382 | Jin Yang | Jin Yang, John W. Clark Jr., Robert M. Bryan, Claudia S. Robertson | The Myogenic Response in Isolated Rat Cerebrovascular Arteries: Smooth
Muscle Cell Model | 20 pages, 14 figures; updates in parameter values; minor changes in
text | Medical Engineering & Physics, 25(8):691-709, 2003 | 10.1016/S1350-4533(03)00100-0 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous models of the cerebrovascular smooth muscle cell have not addressed
the interaction between the electrical, chemical and mechanical components of
cell function during the development of active tension. These models are
primarily electrical, biochemical or mechanical in their orientation, and do
not permit a full exploration of how the smooth muscle responds to electrical
or mechanical forcing. To address this issue, we have developed a new model
that consists of two major components: electrochemical and chemomechanical
subsystems of the cell. Included in the electrochemical model are models of the
electrophysiological behavior of the cell membrane, fluid compartments, Ca2+
release and uptake by the sarcoplasmic reticulum, and cytosolic Ca2+ buffering,
particularly by calmodulin. With this subsystem model, we can study the
mechanics of the production of intracellular Ca2+ transient in response to
membrane voltage clamp pulses. The chemomechanical model includes models of:
(a) the chemical kinetics of myosin phosphorylation, and the formation of
phosphorylated myosin cross-bridges with actin, as well as, attached latch-type
cross-bridges; and (b) a model of force generation and mechanical coupling to
the contractile filaments and their attachments to protein structures and the
skeletal framework of the cell. The two subsystem models are tested
independently and compared with data. Likewise, the complete (combined) cell
model responses to voltage pulse stimulation under isometric and isotonic
conditions are calculated and compared with measured single cell length-force
and force-velocity data obtained from literature. This integrated cell model
provides biophysically-based explanations of electrical, chemical and
mechanical phenomena in cerebrovascular smooth muscle, and has considerable
utility as an adjunct to laboratory research and experimental design.
| [
{
"created": "Tue, 26 Mar 2013 04:36:52 GMT",
"version": "v1"
}
] | 2013-03-27 | [
[
"Yang",
"Jin",
""
],
[
"Clark",
"John W.",
"Jr."
],
[
"Bryan",
"Robert M.",
""
],
[
"Robertson",
"Claudia S.",
""
]
] | Previous models of the cerebrovascular smooth muscle cell have not addressed the interaction between the electrical, chemical and mechanical components of cell function during the development of active tension. These models are primarily electrical, biochemical or mechanical in their orientation, and do not permit a full exploration of how the smooth muscle responds to electrical or mechanical forcing. To address this issue, we have developed a new model that consists of two major components: electrochemical and chemomechanical subsystems of the cell. Included in the electrochemical model are models of the electrophysiological behavior of the cell membrane, fluid compartments, Ca2+ release and uptake by the sarcoplasmic reticulum, and cytosolic Ca2+ buffering, particularly by calmodulin. With this subsystem model, we can study the mechanics of the production of intracellular Ca2+ transient in response to membrane voltage clamp pulses. The chemomechanical model includes models of: (a) the chemical kinetics of myosin phosphorylation, and the formation of phosphorylated myosin cross-bridges with actin, as well as, attached latch-type cross-bridges; and (b) a model of force generation and mechanical coupling to the contractile filaments and their attachments to protein structures and the skeletal framework of the cell. The two subsystem models are tested independently and compared with data. Likewise, the complete (combined) cell model responses to voltage pulse stimulation under isometric and isotonic conditions are calculated and compared with measured single cell length-force and force-velocity data obtained from literature. This integrated cell model provides biophysically-based explanations of electrical, chemical and mechanical phenomena in cerebrovascular smooth muscle, and has considerable utility as an adjunct to laboratory research and experimental design. |
1103.1612 | Mar\'ia Vela | M. Vela-P\'erez, M. A. Fontelos and J. J. L. Vel\'azquez | Ant foraging and minimal paths in simple graphs | 39 pages, 13 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ants are known to be able to find paths of minimal length between the nest
and food sources. The deposit of pheromones while they search for food and
their chemotactical response to them has been proposed as a crucial element in
the mechanism for finding minimal paths. We investigate both individual and
collective behavior of ants in some simple networks representing basic mazes.
The character of the graphs considered is such that it allows a fully rigorous
mathematical treatment via analysis of some markovian processes in terms of
which the evolution can be represented. Our analytical and computational
results show that in order for the ants to follow shortest paths between nest
and food, it is necessary to superimpose to the ants' random walk the
chemotactic reinforcement. It is also needed a certain degree of persistence so
that ants tend to move preferably without changing their direction much. It is
also important the number of ants, since we will show that the speed for
finding minimal paths increases very fast with it.
| [
{
"created": "Tue, 8 Mar 2011 19:46:05 GMT",
"version": "v1"
}
] | 2011-03-09 | [
[
"Vela-Pérez",
"M.",
""
],
[
"Fontelos",
"M. A.",
""
],
[
"Velázquez",
"J. J. L.",
""
]
] | Ants are known to be able to find paths of minimal length between the nest and food sources. The deposit of pheromones while they search for food and their chemotactical response to them has been proposed as a crucial element in the mechanism for finding minimal paths. We investigate both individual and collective behavior of ants in some simple networks representing basic mazes. The character of the graphs considered is such that it allows a fully rigorous mathematical treatment via analysis of some markovian processes in terms of which the evolution can be represented. Our analytical and computational results show that in order for the ants to follow shortest paths between nest and food, it is necessary to superimpose to the ants' random walk the chemotactic reinforcement. It is also needed a certain degree of persistence so that ants tend to move preferably without changing their direction much. It is also important the number of ants, since we will show that the speed for finding minimal paths increases very fast with it. |
1405.4835 | Alexander S. Serov | A.S. Serov, C. Salafia, P. Brownbill, D.S. Grebenkov and M. Filoche | Optimal villi density for maximal oxygen uptake in the human placenta | null | J. Theor. Biol. 364 (2015), 383-96 | 10.1016/j.jtbi.2014.09.022 | null | q-bio.QM q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a stream-tube model of oxygen exchange inside a human placenta
functional unit (a placentone). The effect of villi density on oxygen transfer
efficiency is assessed by numerically solving the diffusion-convection equation
in a 2D+1D geometry for a wide range of villi densities. For each set of
physiological parameters, we observe the existence of an optimal villi density
providing a maximal oxygen uptake as a trade-off between the incoming oxygen
flow and the absorbing villus surface. The predicted optimal villi density
$0.47\pm0.06$ is compatible to previous experimental measurements. Several
other ways to experimentally validate the model are also proposed. The proposed
stream-tube model can serve as a basis for analyzing the efficiency of human
placentas, detecting possible pathologies and diagnosing placental health risks
for newborns by using routine histology sections collected after birth.
| [
{
"created": "Wed, 23 Apr 2014 11:59:13 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Oct 2014 16:04:31 GMT",
"version": "v2"
},
{
"created": "Thu, 30 Oct 2014 18:42:50 GMT",
"version": "v3"
},
{
"created": "Thu, 1 Jan 2015 20:20:01 GMT",
"version": "v4"
}
] | 2015-01-05 | [
[
"Serov",
"A. S.",
""
],
[
"Salafia",
"C.",
""
],
[
"Brownbill",
"P.",
""
],
[
"Grebenkov",
"D. S.",
""
],
[
"Filoche",
"M.",
""
]
] | We present a stream-tube model of oxygen exchange inside a human placenta functional unit (a placentone). The effect of villi density on oxygen transfer efficiency is assessed by numerically solving the diffusion-convection equation in a 2D+1D geometry for a wide range of villi densities. For each set of physiological parameters, we observe the existence of an optimal villi density providing a maximal oxygen uptake as a trade-off between the incoming oxygen flow and the absorbing villus surface. The predicted optimal villi density $0.47\pm0.06$ is compatible to previous experimental measurements. Several other ways to experimentally validate the model are also proposed. The proposed stream-tube model can serve as a basis for analyzing the efficiency of human placentas, detecting possible pathologies and diagnosing placental health risks for newborns by using routine histology sections collected after birth. |
1605.09328 | William Ott | Alan Veliz-Cuba, Chinmaya Gupta, Matthew R. Bennett, Kre\v{s}imir
Josi\'c, William Ott | Effects of cell cycle noise on excitable gene circuits | 15 pages, 8 figures | null | 10.1088/1478-3975/13/6/066007 | null | q-bio.MN physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We assess the impact of cell cycle noise on gene circuit dynamics. For
bistable genetic switches and excitable circuits, we find that transitions
between metastable states most likely occur just after cell division and that
this concentration effect intensifies in the presence of transcriptional delay.
We explain this concentration effect with a 3-states stochastic model. For
genetic oscillators, we quantify the temporal correlations between daughter
cells induced by cell division. Temporal correlations must be captured properly
in order to accurately quantify noise sources within gene networks.
| [
{
"created": "Mon, 30 May 2016 17:06:41 GMT",
"version": "v1"
}
] | 2016-12-21 | [
[
"Veliz-Cuba",
"Alan",
""
],
[
"Gupta",
"Chinmaya",
""
],
[
"Bennett",
"Matthew R.",
""
],
[
"Josić",
"Krešimir",
""
],
[
"Ott",
"William",
""
]
] | We assess the impact of cell cycle noise on gene circuit dynamics. For bistable genetic switches and excitable circuits, we find that transitions between metastable states most likely occur just after cell division and that this concentration effect intensifies in the presence of transcriptional delay. We explain this concentration effect with a 3-states stochastic model. For genetic oscillators, we quantify the temporal correlations between daughter cells induced by cell division. Temporal correlations must be captured properly in order to accurately quantify noise sources within gene networks. |
2211.13704 | Christopher Banks | Christopher J. Banks, Ewan Colman, Anthony Wood, Thomas Doherty,
Rowland R. Kao | Modelling plausible scenarios for the Omicron SARS-CoV-2 variant from
early-stage surveillance | null | null | null | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | In this paper we used an adapted version of an existing simulation model of
SARS-CoV-2 transmission in Scotland to investigate the rise of the Omicron
variant of concern, in order to evaluate plausible scenarios for transmission
advantage and vaccine immune escape relative to the Delta variant. We also
explored possible outcomes of different levels of imposed non-pharmaceutical
intervention. The initial results of these scenarios were used to inform the
Scottish Government in the early outbreak stages of the Omicron variant.
We use an explicitly spatial agent-based simulation model combined with
spatially fine-grained COVID-19 observation data from Public Health Scotland.
Using the model with parameters fit over the Delta variant epidemic, some
initial assumptions about Omicron transmission advantage and vaccine escape,
and a simple growth rate fitting procedure, we were able to capture the initial
outbreak dynamics for Omicron. We also find the modelled dynamics hold up to
retrospective scrutiny.
We found that the modelled imposition of extra non-pharmaceutical
interventions planned by the Scottish Government at the time would likely have
little effect in light of the transmission advantage held by the Omicron
variant and the fact that the planned interventions would have occurred too
late in the outbreak's trajectory. Finally, we found that any assumptions made
about the projected distribution of vaccines in the model population had little
bearing on the outcome, in terms of outbreak size and timing, rather that the
detailed landscape of immunity prior to the outbreak was of far greater
importance.
| [
{
"created": "Thu, 24 Nov 2022 16:41:53 GMT",
"version": "v1"
}
] | 2022-11-28 | [
[
"Banks",
"Christopher J.",
""
],
[
"Colman",
"Ewan",
""
],
[
"Wood",
"Anthony",
""
],
[
"Doherty",
"Thomas",
""
],
[
"Kao",
"Rowland R.",
""
]
] | In this paper we used an adapted version of an existing simulation model of SARS-CoV-2 transmission in Scotland to investigate the rise of the Omicron variant of concern, in order to evaluate plausible scenarios for transmission advantage and vaccine immune escape relative to the Delta variant. We also explored possible outcomes of different levels of imposed non-pharmaceutical intervention. The initial results of these scenarios were used to inform the Scottish Government in the early outbreak stages of the Omicron variant. We use an explicitly spatial agent-based simulation model combined with spatially fine-grained COVID-19 observation data from Public Health Scotland. Using the model with parameters fit over the Delta variant epidemic, some initial assumptions about Omicron transmission advantage and vaccine escape, and a simple growth rate fitting procedure, we were able to capture the initial outbreak dynamics for Omicron. We also find the modelled dynamics hold up to retrospective scrutiny. We found that the modelled imposition of extra non-pharmaceutical interventions planned by the Scottish Government at the time would likely have little effect in light of the transmission advantage held by the Omicron variant and the fact that the planned interventions would have occurred too late in the outbreak's trajectory. Finally, we found that any assumptions made about the projected distribution of vaccines in the model population had little bearing on the outcome, in terms of outbreak size and timing, rather that the detailed landscape of immunity prior to the outbreak was of far greater importance. |
1705.08031 | Alireza Alemi | Sophie Den\`eve, Alireza Alemi, Ralph Bourdoukan | The brain as an efficient and robust adaptive learner | In press in Neuron journal | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding how the brain learns to compute functions reliably, efficiently
and robustly with noisy spiking activity is a fundamental challenge in
neuroscience. Most sensory and motor tasks can be described as dynamical
systems and could presumably be learned by adjusting connection weights in a
recurrent biological neural network. However, this is greatly complicated by
the credit assignment problem for learning in recurrent network, e.g. the
contribution of each connection to the global output error cannot be determined
based only on locally accessible quantities to the synapse. Combining tools
from adaptive control theory and efficient coding theories, we propose that
neural circuits can indeed learn complex dynamic tasks with local synaptic
plasticity rules as long as they associate two experimentally established
neural mechanisms. First, they should receive top-down feedbacks driving both
their activity and their synaptic plasticity. Second, inhibitory interneurons
should maintain a tight balance between excitation and inhibition in the
circuit. The resulting networks could learn arbitrary dynamical systems and
produce irregular spike trains as variable as those observed experimentally.
Yet, this variability in single neurons may hide an extremely efficient and
robust computation at the population level.
| [
{
"created": "Mon, 22 May 2017 22:36:10 GMT",
"version": "v1"
}
] | 2017-05-24 | [
[
"Denève",
"Sophie",
""
],
[
"Alemi",
"Alireza",
""
],
[
"Bourdoukan",
"Ralph",
""
]
] | Understanding how the brain learns to compute functions reliably, efficiently and robustly with noisy spiking activity is a fundamental challenge in neuroscience. Most sensory and motor tasks can be described as dynamical systems and could presumably be learned by adjusting connection weights in a recurrent biological neural network. However, this is greatly complicated by the credit assignment problem for learning in recurrent network, e.g. the contribution of each connection to the global output error cannot be determined based only on locally accessible quantities to the synapse. Combining tools from adaptive control theory and efficient coding theories, we propose that neural circuits can indeed learn complex dynamic tasks with local synaptic plasticity rules as long as they associate two experimentally established neural mechanisms. First, they should receive top-down feedbacks driving both their activity and their synaptic plasticity. Second, inhibitory interneurons should maintain a tight balance between excitation and inhibition in the circuit. The resulting networks could learn arbitrary dynamical systems and produce irregular spike trains as variable as those observed experimentally. Yet, this variability in single neurons may hide an extremely efficient and robust computation at the population level. |
1409.2071 | Christian Mayr | Christian Mayr | Untersuchungen zur Modellierung und Schaltungsrealisierung von
synaptischer Plastizitaet | Habilitation thesis, in German, with English preface | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This manuscript deals with the analysis and VLSI implementation of adaptive
information processing derived from biological measurements. Specifically,
models for short term plasticity, long term plasticity and metaplasticity are
derived from biological measurements and implemented in CMOS circuits.
| [
{
"created": "Sun, 7 Sep 2014 01:05:43 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Dec 2014 10:50:14 GMT",
"version": "v2"
}
] | 2014-12-11 | [
[
"Mayr",
"Christian",
""
]
] | This manuscript deals with the analysis and VLSI implementation of adaptive information processing derived from biological measurements. Specifically, models for short term plasticity, long term plasticity and metaplasticity are derived from biological measurements and implemented in CMOS circuits. |
1604.03199 | William Gray Roncal | William Gray Roncal, Eva L Dyer, Doga G\"ursoy, Konrad Kording,
Narayanan Kasthuri | From sample to knowledge: Towards an integrated approach for
neuroscience discovery | first two authors contributed equally. 8 pages, 2 figures. v2: added
acknowledgments | null | null | null | q-bio.QM cs.SY q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Imaging methods used in modern neuroscience experiments are quickly producing
large amounts of data capable of providing increasing amounts of knowledge
about neuroanatomy and function. A great deal of information in these datasets
is relatively unexplored and untapped. One of the bottlenecks in knowledge
extraction is that often there is no feedback loop between the knowledge
produced (e.g., graph, density estimate, or other statistic) and the earlier
stages of the pipeline (e.g., acquisition). We thus advocate for the
development of sample-to-knowledge discovery pipelines that one can use to
optimize acquisition and processing steps with a particular end goal (i.e.,
piece of knowledge) in mind. We therefore propose that optimization takes place
not just within each processing stage but also between adjacent (and
non-adjacent) steps of the pipeline. Furthermore, we explore the existing
categories of knowledge representation and models to motivate the types of
experiments and analysis needed to achieve the ultimate goal. To illustrate
this approach, we provide an experimental paradigm to answer questions about
large-scale synaptic distributions through a multimodal approach combining
X-ray microtomography and electron microscopy.
| [
{
"created": "Tue, 12 Apr 2016 01:41:48 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Jan 2017 19:30:41 GMT",
"version": "v2"
}
] | 2017-01-25 | [
[
"Roncal",
"William Gray",
""
],
[
"Dyer",
"Eva L",
""
],
[
"Gürsoy",
"Doga",
""
],
[
"Kording",
"Konrad",
""
],
[
"Kasthuri",
"Narayanan",
""
]
] | Imaging methods used in modern neuroscience experiments are quickly producing large amounts of data capable of providing increasing amounts of knowledge about neuroanatomy and function. A great deal of information in these datasets is relatively unexplored and untapped. One of the bottlenecks in knowledge extraction is that often there is no feedback loop between the knowledge produced (e.g., graph, density estimate, or other statistic) and the earlier stages of the pipeline (e.g., acquisition). We thus advocate for the development of sample-to-knowledge discovery pipelines that one can use to optimize acquisition and processing steps with a particular end goal (i.e., piece of knowledge) in mind. We therefore propose that optimization takes place not just within each processing stage but also between adjacent (and non-adjacent) steps of the pipeline. Furthermore, we explore the existing categories of knowledge representation and models to motivate the types of experiments and analysis needed to achieve the ultimate goal. To illustrate this approach, we provide an experimental paradigm to answer questions about large-scale synaptic distributions through a multimodal approach combining X-ray microtomography and electron microscopy. |
1412.1746 | Momiao Xiong | Lerong Li, Momiao Xiong | Dynamic Model for RNA-seq Data Analysis | null | null | null | null | q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The newly developed deep-sequencing technologies make it possible to acquire
both quantitative and qualitative information regarding transcript biology. By
measuring messenger RNA levels for all genes in a sample, RNA-seq provides an
attractive option to characterize the global changes in transcription. RNA-seq
is becoming the widely used platform for gene expression profiling. However,
real transcription signals in the RNA-seq data are confounded with measurement
and sequencing errors, and other random biological/technical variation. How to
appropriately take the variability due to errors and sequencing technology
variation into account is essential issue in the RNA-seq data analysis. To
extract biologically useful transcription process from the RNA-seq data, we
propose to use the second ODE for modeling the RNA-seq data. We use
differential principal analysis to develop statistical methods for estimation
of location-varying coefficients of the ODE. We validate the accuracy of the
ODE model to fit the RNA-seq data by prediction analysis and 5 fold cross
validation. We find the accuracy of the second ODE model to predict the gene
expression level across the gene is very high and the second ODE model to fit
the RNA-seq data very well. To further evaluate the performance of the ODE
model for RNA-seq data analysis, we used the location-varying coefficients of
the second ODE as features to classify the normal and tumor cells. We
demonstrate that even using the ODE model for single gene we can achieve high
classification accuracy. We also conduct response analysis to investigate how
the transcription process respond to the perturbation of the external signals
and identify dozens of genes that are related to cancer.
| [
{
"created": "Thu, 4 Dec 2014 17:58:00 GMT",
"version": "v1"
}
] | 2014-12-05 | [
[
"Li",
"Lerong",
""
],
[
"Xiong",
"Momiao",
""
]
] | The newly developed deep-sequencing technologies make it possible to acquire both quantitative and qualitative information regarding transcript biology. By measuring messenger RNA levels for all genes in a sample, RNA-seq provides an attractive option to characterize the global changes in transcription. RNA-seq is becoming the widely used platform for gene expression profiling. However, real transcription signals in the RNA-seq data are confounded with measurement and sequencing errors, and other random biological/technical variation. How to appropriately take the variability due to errors and sequencing technology variation into account is essential issue in the RNA-seq data analysis. To extract biologically useful transcription process from the RNA-seq data, we propose to use the second ODE for modeling the RNA-seq data. We use differential principal analysis to develop statistical methods for estimation of location-varying coefficients of the ODE. We validate the accuracy of the ODE model to fit the RNA-seq data by prediction analysis and 5 fold cross validation. We find the accuracy of the second ODE model to predict the gene expression level across the gene is very high and the second ODE model to fit the RNA-seq data very well. To further evaluate the performance of the ODE model for RNA-seq data analysis, we used the location-varying coefficients of the second ODE as features to classify the normal and tumor cells. We demonstrate that even using the ODE model for single gene we can achieve high classification accuracy. We also conduct response analysis to investigate how the transcription process respond to the perturbation of the external signals and identify dozens of genes that are related to cancer. |
1907.00689 | Soaad Hossain Mr | Soaad Hossain | Application and Computation of Probabilistic Neural Plasticity | 10 pages, submitted to Frontiers in Human Neuroscience | null | null | null | q-bio.NC cs.CE cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The discovery of neural plasticity has proved that throughout the life of a
human being, the brain reorganizes itself through forming new neural
connections. The formation of new neural connections are achieved through the
brain's effort to adapt to new environments or to changes in the existing
environment. Despite the realization of neural plasticity, there is a lack of
understanding the probability of neural plasticity occurring given some event.
Using ordinary differential equations, neural firing equations and spike-train
statistics, we show how an additive short-term memory (STM) equation can be
formulated to approach the computation of neural plasticity. We then show how
the additive STM equation can be used for probabilistic inference in computable
neural plasticity, and the computation of probabilistic neural plasticity. We
will also provide a brief introduction to the theory of probabilistic neural
plasticity and conclude with showing how it can be applied to multiple
disciplines such as behavioural science, machine learning, artificial
intelligence and psychiatry.
| [
{
"created": "Sat, 25 May 2019 07:03:56 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Aug 2020 01:23:53 GMT",
"version": "v2"
}
] | 2020-08-10 | [
[
"Hossain",
"Soaad",
""
]
] | The discovery of neural plasticity has proved that throughout the life of a human being, the brain reorganizes itself through forming new neural connections. The formation of new neural connections are achieved through the brain's effort to adapt to new environments or to changes in the existing environment. Despite the realization of neural plasticity, there is a lack of understanding the probability of neural plasticity occurring given some event. Using ordinary differential equations, neural firing equations and spike-train statistics, we show how an additive short-term memory (STM) equation can be formulated to approach the computation of neural plasticity. We then show how the additive STM equation can be used for probabilistic inference in computable neural plasticity, and the computation of probabilistic neural plasticity. We will also provide a brief introduction to the theory of probabilistic neural plasticity and conclude with showing how it can be applied to multiple disciplines such as behavioural science, machine learning, artificial intelligence and psychiatry. |
0810.5381 | Nikesh Dattani | Nikesh S. Dattani | Simulating neurobiological localization of acoustic signals based on
temporal and volumetric differentiations | 26 pages, 18 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The localization of sound sources by the human brain is computationally
simulated from a neurobiological perspective. The simulation includes the
neural representation of temporal differences in acoustic signals between the
ipsilateral and contralateral ears for constant sound intensities (angular
localization), and of volumetric differences in acoustic signals for constant
azimuthal angles (radial localization). The transmission of the original
acoustic signal from the environment, through each significant stage of
intermediate neurons, to the primary auditory cortex, is also simulated. The
errors that human brains make in attempting to localize sounds in
evolutionarily uncommon environments (such as when one ear is in water and one
ear is in air) are then mathematically predicted. A basic overview of the
physiology behind sound localization in the brain is also provided.
| [
{
"created": "Wed, 29 Oct 2008 23:10:38 GMT",
"version": "v1"
}
] | 2008-10-31 | [
[
"Dattani",
"Nikesh S.",
""
]
] | The localization of sound sources by the human brain is computationally simulated from a neurobiological perspective. The simulation includes the neural representation of temporal differences in acoustic signals between the ipsilateral and contralateral ears for constant sound intensities (angular localization), and of volumetric differences in acoustic signals for constant azimuthal angles (radial localization). The transmission of the original acoustic signal from the environment, through each significant stage of intermediate neurons, to the primary auditory cortex, is also simulated. The errors that human brains make in attempting to localize sounds in evolutionarily uncommon environments (such as when one ear is in water and one ear is in air) are then mathematically predicted. A basic overview of the physiology behind sound localization in the brain is also provided. |
0712.4385 | William Bialek | Gasper Tkacik and William Bialek | Cell biology: Networks, regulation, pathways | null | null | null | null | q-bio.MN | null | This review was written for the Encyclopedia of Complexity and System Science
(Springer-Verlag, Berlin, 2008), and is intended as a guide to the growing
literature which approaches the phenomena of cell biology from a more
theoretical point of view. We begin with the building blocks of cellular
networks, and proceed toward the different classes of models being explored,
finally discussing the "design principles" which have been suggested for these
systems. Although largely a dispassionate review, we do draw attention to areas
where there seems to be general consensus on ideas that have not been tested
very thoroughly and, more optimistically, to areas where we feel promising
ideas deserve to be more fully explored.
| [
{
"created": "Fri, 28 Dec 2007 18:45:14 GMT",
"version": "v1"
}
] | 2007-12-31 | [
[
"Tkacik",
"Gasper",
""
],
[
"Bialek",
"William",
""
]
] | This review was written for the Encyclopedia of Complexity and System Science (Springer-Verlag, Berlin, 2008), and is intended as a guide to the growing literature which approaches the phenomena of cell biology from a more theoretical point of view. We begin with the building blocks of cellular networks, and proceed toward the different classes of models being explored, finally discussing the "design principles" which have been suggested for these systems. Although largely a dispassionate review, we do draw attention to areas where there seems to be general consensus on ideas that have not been tested very thoroughly and, more optimistically, to areas where we feel promising ideas deserve to be more fully explored. |
2106.15159 | Shyaman Jayasundara | Shyaman Jayasundara, Sandali Lokuge, Puwasuru Ihalagedara and
Damayanthi Herath | Machine learning for plant microRNA prediction: A systematic review | null | null | null | null | q-bio.GN cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MicroRNAs (miRNAs) are endogenous small non-coding RNAs that play an
important role in post-transcriptional gene regulation. However, the
experimental determination of miRNA sequence and structure is both expensive
and time-consuming. Therefore, computational and machine learning-based
approaches have been adopted to predict novel microRNAs. With the involvement
of data science and machine learning in biology, multiple research studies have
been conducted to find microRNAs with different computational methods and
different miRNA features. Multiple approaches are discussed in detail
considering the learning algorithm/s used, features considered, dataset/s used
and the criteria used in evaluations. This systematic review focuses on the
machine learning methods developed for miRNA identification in plants. This
will help researchers to gain a detailed idea about past studies and identify
novel paths that solve drawbacks occurred in past studies. Our findings
highlight the need for plant-specific computational methods for miRNA
identification.
| [
{
"created": "Tue, 29 Jun 2021 08:22:57 GMT",
"version": "v1"
}
] | 2021-06-30 | [
[
"Jayasundara",
"Shyaman",
""
],
[
"Lokuge",
"Sandali",
""
],
[
"Ihalagedara",
"Puwasuru",
""
],
[
"Herath",
"Damayanthi",
""
]
] | MicroRNAs (miRNAs) are endogenous small non-coding RNAs that play an important role in post-transcriptional gene regulation. However, the experimental determination of miRNA sequence and structure is both expensive and time-consuming. Therefore, computational and machine learning-based approaches have been adopted to predict novel microRNAs. With the involvement of data science and machine learning in biology, multiple research studies have been conducted to find microRNAs with different computational methods and different miRNA features. Multiple approaches are discussed in detail considering the learning algorithm/s used, features considered, dataset/s used and the criteria used in evaluations. This systematic review focuses on the machine learning methods developed for miRNA identification in plants. This will help researchers to gain a detailed idea about past studies and identify novel paths that solve drawbacks occurred in past studies. Our findings highlight the need for plant-specific computational methods for miRNA identification. |
2211.14676 | Petros Petsinis | Petros Petsinis, Andreas Pavlogiannis, Panagiotis Karras | Maximizing the Probability of Fixation in the Positional Voter Model | Accepted for publication in AAAI 2023 | null | null | null | q-bio.PE cs.CC cs.GT cs.SI | http://creativecommons.org/licenses/by/4.0/ | The Voter model is a well-studied stochastic process that models the invasion
of a novel trait $A$ (e.g., a new opinion, social meme, genetic mutation,
magnetic spin) in a network of individuals (agents, people, genes, particles)
carrying an existing resident trait $B$. Individuals change traits by
occasionally sampling the trait of a neighbor, while an invasion bias
$\delta\geq 0$ expresses the stochastic preference to adopt the novel trait $A$
over the resident trait $B$. The strength of an invasion is measured by the
probability that eventually the whole population adopts trait $A$, i.e., the
fixation probability. In more realistic settings, however, the invasion bias is
not ubiquitous, but rather manifested only in parts of the network. For
instance, when modeling the spread of a social trait, the invasion bias
represents localized incentives. In this paper, we generalize the standard
biased Voter model to the positional Voter model, in which the invasion bias is
effectuated only on an arbitrary subset of the network nodes, called biased
nodes. We study the ensuing optimization problem, which is, given a budget $k$,
to choose $k$ biased nodes so as to maximize the fixation probability of a
randomly occurring invasion. We show that the problem is NP-hard both for
finite $\delta$ and when $\delta \rightarrow \infty$ (strong bias), while the
objective function is not submodular in either setting, indicating strong
computational hardness. On the other hand, we show that, when
$\delta\rightarrow 0$ (weak bias), we can obtain a tight approximation in
$O(n^{2\omega})$ time, where $\omega$ is the matrix-multiplication exponent. We
complement our theoretical results with an experimental evaluation of some
proposed heuristics.
| [
{
"created": "Sat, 26 Nov 2022 22:43:52 GMT",
"version": "v1"
},
{
"created": "Sat, 25 Feb 2023 13:13:21 GMT",
"version": "v2"
}
] | 2023-02-28 | [
[
"Petsinis",
"Petros",
""
],
[
"Pavlogiannis",
"Andreas",
""
],
[
"Karras",
"Panagiotis",
""
]
] | The Voter model is a well-studied stochastic process that models the invasion of a novel trait $A$ (e.g., a new opinion, social meme, genetic mutation, magnetic spin) in a network of individuals (agents, people, genes, particles) carrying an existing resident trait $B$. Individuals change traits by occasionally sampling the trait of a neighbor, while an invasion bias $\delta\geq 0$ expresses the stochastic preference to adopt the novel trait $A$ over the resident trait $B$. The strength of an invasion is measured by the probability that eventually the whole population adopts trait $A$, i.e., the fixation probability. In more realistic settings, however, the invasion bias is not ubiquitous, but rather manifested only in parts of the network. For instance, when modeling the spread of a social trait, the invasion bias represents localized incentives. In this paper, we generalize the standard biased Voter model to the positional Voter model, in which the invasion bias is effectuated only on an arbitrary subset of the network nodes, called biased nodes. We study the ensuing optimization problem, which is, given a budget $k$, to choose $k$ biased nodes so as to maximize the fixation probability of a randomly occurring invasion. We show that the problem is NP-hard both for finite $\delta$ and when $\delta \rightarrow \infty$ (strong bias), while the objective function is not submodular in either setting, indicating strong computational hardness. On the other hand, we show that, when $\delta\rightarrow 0$ (weak bias), we can obtain a tight approximation in $O(n^{2\omega})$ time, where $\omega$ is the matrix-multiplication exponent. We complement our theoretical results with an experimental evaluation of some proposed heuristics. |
1202.2688 | Arni S.R. Srinivasa Rao | Arni S. R. Srinivasa Rao | Understanding Theoretically The Impact of Reporting of Disease Cases in
Epidemiology | 21 pages, 2 figures. To appear in Journal of Theoretical Biology
(Elsevier) | (2012) Journal of Theoretical Biology 302:89-95 | 10.1016/j.jtbi.2012.02.026 | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In conducting preliminary analysis during an epidemic, data on reported
disease cases offer key information in guiding the direction to the in-depth
analysis. Models for growth and transmission dynamics are heavily dependent on
preliminary analysis results. When a particular disease case is reported more
than once or alternatively is never reported or detected in the population,
then in such a situation, there is a possibility of existence of multiple
reporting or under reporting in the population. In this work, a theoretical
approach for studying reporting error in epidemiology is explored. The upper
bound for the error that arises due to multiple reporting is higher than that
which arises due to under reporting. Numerical examples are provided to support
the arguments. This article mainly treats reporting error as deterministic and
one can explore a stochastic model for the same.
| [
{
"created": "Mon, 13 Feb 2012 11:01:10 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Feb 2012 19:29:58 GMT",
"version": "v2"
}
] | 2021-06-15 | [
[
"Rao",
"Arni S. R. Srinivasa",
""
]
] | In conducting preliminary analysis during an epidemic, data on reported disease cases offer key information in guiding the direction to the in-depth analysis. Models for growth and transmission dynamics are heavily dependent on preliminary analysis results. When a particular disease case is reported more than once or alternatively is never reported or detected in the population, then in such a situation, there is a possibility of existence of multiple reporting or under reporting in the population. In this work, a theoretical approach for studying reporting error in epidemiology is explored. The upper bound for the error that arises due to multiple reporting is higher than that which arises due to under reporting. Numerical examples are provided to support the arguments. This article mainly treats reporting error as deterministic and one can explore a stochastic model for the same. |
2212.12538 | Toby St. Clere Smithe | Toby St Clere Smithe | Mathematical Foundations for a Compositional Account of the Bayesian
Brain | DPhil thesis, as accepted by the University of Oxford. Comments most
welcome | null | 10.5287/ora-kzjqyop2d | null | q-bio.NC cs.AI math.CT math.DS math.ST stat.TH | http://creativecommons.org/licenses/by-sa/4.0/ | This dissertation reports some first steps towards a compositional account of
active inference and the Bayesian brain. Specifically, we use the tools of
contemporary applied category theory to supply functorial semantics for
approximate inference. To do so, we define on the `syntactic' side the new
notion of Bayesian lens and show that Bayesian updating composes according to
the compositional lens pattern. Using Bayesian lenses, and inspired by
compositional game theory, we define fibrations of statistical games and
classify various problems of statistical inference as corresponding sections:
the chain rule of the relative entropy is formalized as a strict section, while
maximum likelihood estimation and the free energy give lax sections. In the
process, we introduce a new notion of `copy-composition'.
On the `semantic' side, we present a new formalization of general open
dynamical systems (particularly: deterministic, stochastic, and random; and
discrete- and continuous-time) as certain coalgebras of polynomial functors,
which we show collect into monoidal opindexed categories (or, alternatively,
into algebras for multicategories of generalized polynomial functors). We use
these opindexed categories to define monoidal bicategories of cilia: dynamical
systems which control lenses, and which supply the target for our functorial
semantics. Accordingly, we construct functors which explain the bidirectional
compositional structure of predictive coding neural circuits under the free
energy principle, thereby giving a formal mathematical underpinning to the
bidirectionality observed in the cortex. Along the way, we explain how to
compose rate-coded neural circuits using an algebra for a multicategory of
linear circuit diagrams, showing subsequently that this is subsumed by lenses
and polynomial functors.
| [
{
"created": "Fri, 23 Dec 2022 18:58:17 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Jun 2023 14:34:05 GMT",
"version": "v2"
},
{
"created": "Tue, 19 Dec 2023 15:25:42 GMT",
"version": "v3"
}
] | 2023-12-20 | [
[
"Smithe",
"Toby St Clere",
""
]
] | This dissertation reports some first steps towards a compositional account of active inference and the Bayesian brain. Specifically, we use the tools of contemporary applied category theory to supply functorial semantics for approximate inference. To do so, we define on the `syntactic' side the new notion of Bayesian lens and show that Bayesian updating composes according to the compositional lens pattern. Using Bayesian lenses, and inspired by compositional game theory, we define fibrations of statistical games and classify various problems of statistical inference as corresponding sections: the chain rule of the relative entropy is formalized as a strict section, while maximum likelihood estimation and the free energy give lax sections. In the process, we introduce a new notion of `copy-composition'. On the `semantic' side, we present a new formalization of general open dynamical systems (particularly: deterministic, stochastic, and random; and discrete- and continuous-time) as certain coalgebras of polynomial functors, which we show collect into monoidal opindexed categories (or, alternatively, into algebras for multicategories of generalized polynomial functors). We use these opindexed categories to define monoidal bicategories of cilia: dynamical systems which control lenses, and which supply the target for our functorial semantics. Accordingly, we construct functors which explain the bidirectional compositional structure of predictive coding neural circuits under the free energy principle, thereby giving a formal mathematical underpinning to the bidirectionality observed in the cortex. Along the way, we explain how to compose rate-coded neural circuits using an algebra for a multicategory of linear circuit diagrams, showing subsequently that this is subsumed by lenses and polynomial functors. |
1009.0867 | Adi Taflia | Adi Taflia and David Holcman | Estimating the synaptic current in a multi-conductance AMPA receptor
model | null | null | 10.1016/j.bpj.2011.05.032 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A pre-synaptic neuron releases diffusing neurotransmitters such as glutamate
that activate post-synaptic receptors. The amplitude of the post-synaptic
current, mostly mediated by glutamatergic (AMPARs) receptors, is a fundamental
signal that may generate an action potential. However, although various
simulation results \cite{kullman,Barbour,Raghavachari} have addressed how
synapses control the post-synaptic current, it is still unclear how this
current depends analytically on factors such as the synaptic cleft geometry,
the distribution, the number and the multi-conductance state of receptors, the
geometry of post-synaptic density (PSD) and the neurotransmitter release
location. To estimate the synaptic current maximal amplitude, we present a
semi-analytical model of glutamate diffusing in the synaptic cleft. We modeled
receptors as multi-conductance channels and we find that PSD morphological
changes can significantly modulate the synaptic current, which is maximally
reliable (the coefficient of variation is minimal) for an optimal size of the
PSD, that depends on the vesicular release active zone. The existence of an
optimal PSD size is related to nonlinear phenomena such as the multi-binding
cooperativity of the neurotransmitter to the receptors. We conclude that
changes in the PSD geometry can sustain a form of synaptic plasticity,
independent of a change in the number of receptors.
| [
{
"created": "Sat, 4 Sep 2010 20:14:57 GMT",
"version": "v1"
}
] | 2015-05-19 | [
[
"Taflia",
"Adi",
""
],
[
"Holcman",
"David",
""
]
] | A pre-synaptic neuron releases diffusing neurotransmitters such as glutamate that activate post-synaptic receptors. The amplitude of the post-synaptic current, mostly mediated by glutamatergic (AMPARs) receptors, is a fundamental signal that may generate an action potential. However, although various simulation results \cite{kullman,Barbour,Raghavachari} have addressed how synapses control the post-synaptic current, it is still unclear how this current depends analytically on factors such as the synaptic cleft geometry, the distribution, the number and the multi-conductance state of receptors, the geometry of post-synaptic density (PSD) and the neurotransmitter release location. To estimate the synaptic current maximal amplitude, we present a semi-analytical model of glutamate diffusing in the synaptic cleft. We modeled receptors as multi-conductance channels and we find that PSD morphological changes can significantly modulate the synaptic current, which is maximally reliable (the coefficient of variation is minimal) for an optimal size of the PSD, that depends on the vesicular release active zone. The existence of an optimal PSD size is related to nonlinear phenomena such as the multi-binding cooperativity of the neurotransmitter to the receptors. We conclude that changes in the PSD geometry can sustain a form of synaptic plasticity, independent of a change in the number of receptors. |
2011.08845 | Norichika Ogata | Norichika Ogata | Whole-Genome Sequence of the Trypoxylus dichotomus Japanese rhinoceros
beetle | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | The draft whole-genome sequence of the Japanese rhinoceros beetle, Trypoxylus
dichotomus was obtained using long-read PacBio sequence technology. The final
assembled genome consisted of 739 Mbp in 2,347 contigs, with 24.5x mean
coverage and a G+C content of 35.99%.
| [
{
"created": "Tue, 17 Nov 2020 03:00:14 GMT",
"version": "v1"
}
] | 2020-11-19 | [
[
"Ogata",
"Norichika",
""
]
] | The draft whole-genome sequence of the Japanese rhinoceros beetle, Trypoxylus dichotomus was obtained using long-read PacBio sequence technology. The final assembled genome consisted of 739 Mbp in 2,347 contigs, with 24.5x mean coverage and a G+C content of 35.99%. |
1002.3208 | Francesco Paparella | Alberto Basset, Francesco Paparella, Francesco Cozzoli | On the Allometric Scaling of Resource Intake Under Limiting Conditions | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Individual resource intake rates are known to depend on both individual body
size and resource availability. Here, we have developed a model to integrate
these two drivers, accounting explicitly for the scaling of perceived resource
availability with individual body size. The model merges a Kleiber-like scaling
law with Holling functional responses into a single mathematical framework,
involving both body-size the density of resources.
When the availability of resources is held constant the model predicts a
relationship between resource intake rates and body sizes whose log-log graph
is a concave curve. The significant deviation from a power law accounts for the
body size dependency of resource limitations. The model results are consistent
with data from both a laboratory experiment on benthic macro-invertebrates and
the available literature.
| [
{
"created": "Wed, 17 Feb 2010 18:45:45 GMT",
"version": "v1"
}
] | 2010-02-18 | [
[
"Basset",
"Alberto",
""
],
[
"Paparella",
"Francesco",
""
],
[
"Cozzoli",
"Francesco",
""
]
] | Individual resource intake rates are known to depend on both individual body size and resource availability. Here, we have developed a model to integrate these two drivers, accounting explicitly for the scaling of perceived resource availability with individual body size. The model merges a Kleiber-like scaling law with Holling functional responses into a single mathematical framework, involving both body-size the density of resources. When the availability of resources is held constant the model predicts a relationship between resource intake rates and body sizes whose log-log graph is a concave curve. The significant deviation from a power law accounts for the body size dependency of resource limitations. The model results are consistent with data from both a laboratory experiment on benthic macro-invertebrates and the available literature. |
1308.4098 | Todd J Vision | Lex E. Flagel, John H. Willis and Todd J. Vision | The standing pool of genomic structural variation in a natural
population of Mimulus guttatus | null | null | null | null | q-bio.PE q-bio.GN | http://creativecommons.org/licenses/by/3.0/ | Major unresolved questions in evolutionary genetics include determining the
contributions of different mutational sources to the total pool of genetic
variation in a species, and understanding how these different forms of genetic
variation interact with natural selection. Recent work has shown that
structural variants (insertions, deletions, inversions and transpositions) are
a major source of genetic variation, often out-numbering single nucleotide
variants in terms of total bases affected. Despite the near ubiquity of
structural variants, major questions about their interaction with natural
selection remain. For example, how does the allele frequency spectrum of
structural variants differ when compared to single nucleotide variants? How
often do structural variants affect genes, and what are the consequences? To
begin to address these questions, we have systematically identified and
characterized a large set submicroscopic insertion and deletion (indel)
variants (between 1 kb to 200 kb in length) among ten individuals from a single
natural population of the plant species Mimulus guttatus. After extensive
computational filtering, we focused on a set of 4,142 high-confidence indels
that showed an experimental validation rate of 73%. All but one of these indels
were < 200 kb. While the largest were generally at lower frequencies in the
population, a surprising number of large indels are at intermediate
frequencies. While indels overlapping with genes were much rarer than expected
by chance, nearly 600 genes were affected by an indel. NBS-LRR defense response
genes were the most enriched among the gene families affected. Most indels
associated with genes were rare and appeared to be under purifying selection,
though we do find four high-frequency derived insertion alleles that show
signatures of recent positive selection.
| [
{
"created": "Mon, 19 Aug 2013 18:55:59 GMT",
"version": "v1"
}
] | 2013-08-20 | [
[
"Flagel",
"Lex E.",
""
],
[
"Willis",
"John H.",
""
],
[
"Vision",
"Todd J.",
""
]
] | Major unresolved questions in evolutionary genetics include determining the contributions of different mutational sources to the total pool of genetic variation in a species, and understanding how these different forms of genetic variation interact with natural selection. Recent work has shown that structural variants (insertions, deletions, inversions and transpositions) are a major source of genetic variation, often out-numbering single nucleotide variants in terms of total bases affected. Despite the near ubiquity of structural variants, major questions about their interaction with natural selection remain. For example, how does the allele frequency spectrum of structural variants differ when compared to single nucleotide variants? How often do structural variants affect genes, and what are the consequences? To begin to address these questions, we have systematically identified and characterized a large set submicroscopic insertion and deletion (indel) variants (between 1 kb to 200 kb in length) among ten individuals from a single natural population of the plant species Mimulus guttatus. After extensive computational filtering, we focused on a set of 4,142 high-confidence indels that showed an experimental validation rate of 73%. All but one of these indels were < 200 kb. While the largest were generally at lower frequencies in the population, a surprising number of large indels are at intermediate frequencies. While indels overlapping with genes were much rarer than expected by chance, nearly 600 genes were affected by an indel. NBS-LRR defense response genes were the most enriched among the gene families affected. Most indels associated with genes were rare and appeared to be under purifying selection, though we do find four high-frequency derived insertion alleles that show signatures of recent positive selection. |
q-bio/0507009 | Ping Ao | P. Ao | Quantitative Measure of Stability in Gene Regulatory Networks | null | null | null | null | q-bio.QM | null | A quantitative measure of stability in stochastic dynamics starts to emerge
in recent experiments on bioswitches. This quantity, similar to the potential
function in mathematics, is deeply rooted in biology, dated back at the
beginning of quantitative description of biological processes: the adaptive
landscape of Wright (1932) and the development landscape of Waddington (1940).
Nevertheless, its quantitative implication has been frequently challenged by
biologists. Recent progresses in quantitative biology begin to meet those
outstanding challenges.
| [
{
"created": "Thu, 7 Jul 2005 00:52:46 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Ao",
"P.",
""
]
] | A quantitative measure of stability in stochastic dynamics starts to emerge in recent experiments on bioswitches. This quantity, similar to the potential function in mathematics, is deeply rooted in biology, dated back at the beginning of quantitative description of biological processes: the adaptive landscape of Wright (1932) and the development landscape of Waddington (1940). Nevertheless, its quantitative implication has been frequently challenged by biologists. Recent progresses in quantitative biology begin to meet those outstanding challenges. |
2201.04992 | Paolo Rissone | Paolo Rissone, Cristiano Valim Bizarro and Felix Ritort | Stem-loop formation drives RNA folding in mechanical unzipping
experiments | Main: 10 pages, 5 figues, 1 table; SI: 22 pages, 12 figures, 3 tables | PNAS 119 (3), 2022 | 10.1073/pnas.2025575119 | null | q-bio.BM physics.bio-ph physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | Accurate knowledge of RNA hybridization is essential for understanding RNA
structure and function. Here we mechanically unzip and rezip a 2-kbp RNA
hairpin and derive the 10 nearest-neighbor base pair (NNBP) RNA free energies
in sodium and magnesium with 0.1 kcal/mol precision using optical tweezers.
Notably, force-distance curves (FDCs) exhibit strong irreversible effects with
hysteresis and several intermediates, precluding the extraction of the NNBP
energies with currently available methods. The combination of a suitable RNA
synthesis with a tailored pulling protocol allowed us to obtain the fully
reversible FDCs necessary to derive the NNBP energies. We demonstrate the
equivalence of sodium and magnesium free-energy salt corrections at the level
of individual NNBP. To characterize the irreversibility of the
unzipping-rezipping process, we introduce a barrier energy landscape of the
stem-loop structures forming along the complementary strands, which compete
against the formation of the native hairpin. This landscape correlates with the
hysteresis observed along the FDCs. RNA sequence analysis shows that base
stacking and base pairing stabilize the stem-loops that kinetically trap the
long-lived intermediates observed in the FDC. Stem-loops formation appears as a
general mechanism to explain a wide range of behaviors observed in RNA folding.
| [
{
"created": "Thu, 13 Jan 2022 14:06:32 GMT",
"version": "v1"
}
] | 2022-01-14 | [
[
"Rissone",
"Paolo",
""
],
[
"Bizarro",
"Cristiano Valim",
""
],
[
"Ritort",
"Felix",
""
]
] | Accurate knowledge of RNA hybridization is essential for understanding RNA structure and function. Here we mechanically unzip and rezip a 2-kbp RNA hairpin and derive the 10 nearest-neighbor base pair (NNBP) RNA free energies in sodium and magnesium with 0.1 kcal/mol precision using optical tweezers. Notably, force-distance curves (FDCs) exhibit strong irreversible effects with hysteresis and several intermediates, precluding the extraction of the NNBP energies with currently available methods. The combination of a suitable RNA synthesis with a tailored pulling protocol allowed us to obtain the fully reversible FDCs necessary to derive the NNBP energies. We demonstrate the equivalence of sodium and magnesium free-energy salt corrections at the level of individual NNBP. To characterize the irreversibility of the unzipping-rezipping process, we introduce a barrier energy landscape of the stem-loop structures forming along the complementary strands, which compete against the formation of the native hairpin. This landscape correlates with the hysteresis observed along the FDCs. RNA sequence analysis shows that base stacking and base pairing stabilize the stem-loops that kinetically trap the long-lived intermediates observed in the FDC. Stem-loops formation appears as a general mechanism to explain a wide range of behaviors observed in RNA folding. |
2304.12436 | Zaixi Zhang | Zaixi Zhang, Qi Liu, Chee-Kong Lee, Chang-Yu Hsieh, Enhong Chen | An Equivariant Generative Framework for Molecular Graph-Structure
Co-Design | Under review | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Designing molecules with desirable physiochemical properties and
functionalities is a long-standing challenge in chemistry, material science,
and drug discovery. Recently, machine learning-based generative models have
emerged as promising approaches for \emph{de novo} molecule design. However,
further refinement of methodology is highly desired as most existing methods
lack unified modeling of 2D topology and 3D geometry information and fail to
effectively learn the structure-property relationship for molecule design. Here
we present MolCode, a roto-translation equivariant generative framework for
\underline{Mol}ecular graph-structure \underline{Co-de}sign. In MolCode, 3D
geometric information empowers the molecular 2D graph generation, which in turn
helps guide the prediction of molecular 3D structure. Extensive experimental
results show that MolCode outperforms previous methods on a series of
challenging tasks including \emph{de novo} molecule design, targeted molecule
discovery, and structure-based drug design. Particularly, MolCode not only
consistently generates valid (99.95$\%$ Validity) and diverse (98.75$\%$
Uniqueness) molecular graphs/structures with desirable properties, but also
generate drug-like molecules with high affinity to target proteins (61.8$\%$
high-affinity ratio), which demonstrates MolCode's potential applications in
material design and drug discovery. Our extensive investigation reveals that
the 2D topology and 3D geometry contain intrinsically complementary information
in molecule design, and provide new insights into machine learning-based
molecule representation and generation.
| [
{
"created": "Wed, 12 Apr 2023 13:34:22 GMT",
"version": "v1"
}
] | 2023-04-26 | [
[
"Zhang",
"Zaixi",
""
],
[
"Liu",
"Qi",
""
],
[
"Lee",
"Chee-Kong",
""
],
[
"Hsieh",
"Chang-Yu",
""
],
[
"Chen",
"Enhong",
""
]
] | Designing molecules with desirable physiochemical properties and functionalities is a long-standing challenge in chemistry, material science, and drug discovery. Recently, machine learning-based generative models have emerged as promising approaches for \emph{de novo} molecule design. However, further refinement of methodology is highly desired as most existing methods lack unified modeling of 2D topology and 3D geometry information and fail to effectively learn the structure-property relationship for molecule design. Here we present MolCode, a roto-translation equivariant generative framework for \underline{Mol}ecular graph-structure \underline{Co-de}sign. In MolCode, 3D geometric information empowers the molecular 2D graph generation, which in turn helps guide the prediction of molecular 3D structure. Extensive experimental results show that MolCode outperforms previous methods on a series of challenging tasks including \emph{de novo} molecule design, targeted molecule discovery, and structure-based drug design. Particularly, MolCode not only consistently generates valid (99.95$\%$ Validity) and diverse (98.75$\%$ Uniqueness) molecular graphs/structures with desirable properties, but also generate drug-like molecules with high affinity to target proteins (61.8$\%$ high-affinity ratio), which demonstrates MolCode's potential applications in material design and drug discovery. Our extensive investigation reveals that the 2D topology and 3D geometry contain intrinsically complementary information in molecule design, and provide new insights into machine learning-based molecule representation and generation. |
2208.03150 | Jorge Vila | Jorge A. Vila | Protein Folding: From Classical Issues to a New Perspective | 22 pages, 2 figures | null | null | null | q-bio.BM q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The Levinthal paradox exposes many critical questions on the protein folding
problem, among which we could point out why proteins can reach their native
state in a biologically reasonable time. A proper answer to this question is of
foremost importance for evolutive biology since it enables us to understand
life as we know it. Preliminary results, based on the upper bound protein
marginal-stability limit, together with transition state theory arguments, lead
us to show that two-state proteins must reach their native state in, at most,
seconds rather than ($\sim 10^{27}$) years -- as indicated by a naive solution
of the Levinthal paradox. This outcome -- added to the amide hydrogen-exchange
protection factors analysis -- makes it possible for us to suggest how a
protein point mutations and/or post-translational modifications impact its
folding time scales but not its upper bound limit that obeys the physics ruling
the process. Noteworthy for almost 50 years, the protein folding problem --mas
the Levinthal paradox -- has been a topic of passionate debate because
Anfinsen's challenge -- how a sequence encodes its folding -- remains unsolved
despite the smashing success of accurately predicting the protein
tridimensional structures by state-of-the-art numerical-methods. Aimed to
unlock this long-standing challenge, we propose a new perspective of protein
folding, specifically, as a problem that should be devised as an 'analytic
whole' -- a Leibniz & Kant's notion. This viewpoint might help us decode
Anfinsen's challenge and, thus, open new avenues for future research in the
protein folding field.
| [
{
"created": "Fri, 5 Aug 2022 13:17:19 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Aug 2022 11:15:49 GMT",
"version": "v2"
}
] | 2022-08-09 | [
[
"Vila",
"Jorge A.",
""
]
] | The Levinthal paradox exposes many critical questions on the protein folding problem, among which we could point out why proteins can reach their native state in a biologically reasonable time. A proper answer to this question is of foremost importance for evolutive biology since it enables us to understand life as we know it. Preliminary results, based on the upper bound protein marginal-stability limit, together with transition state theory arguments, lead us to show that two-state proteins must reach their native state in, at most, seconds rather than ($\sim 10^{27}$) years -- as indicated by a naive solution of the Levinthal paradox. This outcome -- added to the amide hydrogen-exchange protection factors analysis -- makes it possible for us to suggest how a protein point mutations and/or post-translational modifications impact its folding time scales but not its upper bound limit that obeys the physics ruling the process. Noteworthy for almost 50 years, the protein folding problem --mas the Levinthal paradox -- has been a topic of passionate debate because Anfinsen's challenge -- how a sequence encodes its folding -- remains unsolved despite the smashing success of accurately predicting the protein tridimensional structures by state-of-the-art numerical-methods. Aimed to unlock this long-standing challenge, we propose a new perspective of protein folding, specifically, as a problem that should be devised as an 'analytic whole' -- a Leibniz & Kant's notion. This viewpoint might help us decode Anfinsen's challenge and, thus, open new avenues for future research in the protein folding field. |
1009.4516 | Magdoom Mohamed Kulam Najmudeen | K.N.Magdoom, D.Subramanian, V.S.Chakravarthy, B.Ravindran, Shun-ichi
Amari, N. Meenakshisundaram | Modeling Basal Ganglia for understanding Parkinsonian Reaching Movements | Neural Computation, In Press | Neural Computation (2011), 23(2), 477-516 | 10.1162/NECO_a_00073 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We present a computational model that highlights the role of basal ganglia
(BG) in generating simple reaching movements. The model is cast within the
reinforcement learning (RL) framework with the correspondence between RL
components and neuroanatomy as follows: dopamine signal of substantia nigra
pars compacta as the Temporal Difference error, striatum as the substrate for
the Critic, and the motor cortex as the Actor. A key feature of this
neurobiological interpretation is our hypothesis that the indirect pathway is
the Explorer. Chaotic activity, originating from the indirect pathway part of
the model, drives the wandering, exploratory movements of the arm. Thus the
direct pathway subserves exploitation while the indirect pathway subserves
exploration. The motor cortex becomes more and more independent of the
corrective influence of BG, as training progresses. Reaching trajectories show
diminishing variability with training. Reaching movements associated with
Parkinson's disease (PD) are simulated by (a) reducing dopamine and (b)
degrading the complexity of indirect pathway dynamics by switching it from
chaotic to periodic behavior. Under the simulated PD conditions, the arm
exhibits PD motor symptoms like tremor, bradykinesia and undershoot. The model
echoes the notion that PD is a dynamical disease.
| [
{
"created": "Thu, 23 Sep 2010 04:20:43 GMT",
"version": "v1"
}
] | 2011-01-26 | [
[
"Magdoom",
"K. N.",
""
],
[
"Subramanian",
"D.",
""
],
[
"Chakravarthy",
"V. S.",
""
],
[
"Ravindran",
"B.",
""
],
[
"Amari",
"Shun-ichi",
""
],
[
"Meenakshisundaram",
"N.",
""
]
] | We present a computational model that highlights the role of basal ganglia (BG) in generating simple reaching movements. The model is cast within the reinforcement learning (RL) framework with the correspondence between RL components and neuroanatomy as follows: dopamine signal of substantia nigra pars compacta as the Temporal Difference error, striatum as the substrate for the Critic, and the motor cortex as the Actor. A key feature of this neurobiological interpretation is our hypothesis that the indirect pathway is the Explorer. Chaotic activity, originating from the indirect pathway part of the model, drives the wandering, exploratory movements of the arm. Thus the direct pathway subserves exploitation while the indirect pathway subserves exploration. The motor cortex becomes more and more independent of the corrective influence of BG, as training progresses. Reaching trajectories show diminishing variability with training. Reaching movements associated with Parkinson's disease (PD) are simulated by (a) reducing dopamine and (b) degrading the complexity of indirect pathway dynamics by switching it from chaotic to periodic behavior. Under the simulated PD conditions, the arm exhibits PD motor symptoms like tremor, bradykinesia and undershoot. The model echoes the notion that PD is a dynamical disease. |
q-bio/0406050 | Asher Yahalom PhD | R. Englman and A. Yahalom | Cortical Dynamics and Awareness State: An Interpretation of Observed
Interstimulus Interval Dependence in Apparent Motion | 5 pages, 1 table | Physica A, 260 (Nos. 3-4), 555 (1998) | null | null | q-bio.NC | null | In a recent paper on Cortical Dynamics, Francis and Grossberg raise the
question how visual forms and motion information are integrated to generate a
coherent percept of moving forms? In their investigation of illusory contours
(which are, like Kanizsa squares, mental constructs rather than stimuli on the
retina) they quantify the subjective impression of apparent motion between
illusory contours that are formed by two subsequent stimuli with delay times of
about 0.2 second (called the interstimulus interval ISI). The impression of
apparent motion is due to a back referral of a later experience to an earlier
time in the conscious representation. A model is developed which describes the
state of awareness in the observer in terms of a time dependent Schroedinger
equation to which a second order time derivative is added. This addition
requires as boundary conditions the values of the solution both at the
beginning and after the process. Satisfactory quantitative agreement is found
between the results of the model and the experimental results. We recall that
in the von Neumann interpretation of the collapse of the quantum mechanical
wave-function, the collapse was associated with an observer's awareness. Some
questions of causality and determinism that arise from later-time boundary
conditions are touched upon.
| [
{
"created": "Mon, 28 Jun 2004 19:47:27 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Englman",
"R.",
""
],
[
"Yahalom",
"A.",
""
]
] | In a recent paper on Cortical Dynamics, Francis and Grossberg raise the question how visual forms and motion information are integrated to generate a coherent percept of moving forms? In their investigation of illusory contours (which are, like Kanizsa squares, mental constructs rather than stimuli on the retina) they quantify the subjective impression of apparent motion between illusory contours that are formed by two subsequent stimuli with delay times of about 0.2 second (called the interstimulus interval ISI). The impression of apparent motion is due to a back referral of a later experience to an earlier time in the conscious representation. A model is developed which describes the state of awareness in the observer in terms of a time dependent Schroedinger equation to which a second order time derivative is added. This addition requires as boundary conditions the values of the solution both at the beginning and after the process. Satisfactory quantitative agreement is found between the results of the model and the experimental results. We recall that in the von Neumann interpretation of the collapse of the quantum mechanical wave-function, the collapse was associated with an observer's awareness. Some questions of causality and determinism that arise from later-time boundary conditions are touched upon. |
1308.1984 | Alan Rogers | Alan R. Rogers | How Population Growth Affects Linkage Disequilibrium | null | null | 10.1534/genetics.114.166454 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linkage disequilibrium (LD) is often summarized using the "LD curve," which
relates the LD between pairs of sites to the distance that separates them along
the chromosome. This paper shows how the LD curve responds to changes in
population size. An expansion of population size generates an LD curve that
declines steeply, especially if that expansion has followed a bottleneck. A
reduction in size generates an LD curve that is high but relatively flat. In
European data, the curve is steep, suggesting a history of population
expansion.
These conclusions emerge from the study of $\sigma_d^2$, a measure of LD that
has never played a central role. It has been seen merely as an approximation to
another measure, $r^2$. Yet $\sigma_d^2$ has different dynamical behavior and
provides deeper time depth. Furthermore, it is easily estimated from data and
can be predicted from population history using a fast, deterministic algorithm.
| [
{
"created": "Thu, 8 Aug 2013 21:39:13 GMT",
"version": "v1"
},
{
"created": "Sun, 10 Nov 2013 22:30:45 GMT",
"version": "v2"
}
] | 2014-06-10 | [
[
"Rogers",
"Alan R.",
""
]
] | Linkage disequilibrium (LD) is often summarized using the "LD curve," which relates the LD between pairs of sites to the distance that separates them along the chromosome. This paper shows how the LD curve responds to changes in population size. An expansion of population size generates an LD curve that declines steeply, especially if that expansion has followed a bottleneck. A reduction in size generates an LD curve that is high but relatively flat. In European data, the curve is steep, suggesting a history of population expansion. These conclusions emerge from the study of $\sigma_d^2$, a measure of LD that has never played a central role. It has been seen merely as an approximation to another measure, $r^2$. Yet $\sigma_d^2$ has different dynamical behavior and provides deeper time depth. Furthermore, it is easily estimated from data and can be predicted from population history using a fast, deterministic algorithm. |
1702.04845 | Paul Taylor | Robert W. Cox, Gang Chen, Daniel R. Glen, Richard C. Reynolds, Paul A.
Taylor | FMRI Clustering in AFNI: False Positive Rates Redux | 7 figures in main text and 17 figures in Appendices; 50 pages.
Accepted in Brain Connectivity | null | null | null | q-bio.QM stat.AP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent reports of inflated false positive rates (FPRs) in FMRI group analysis
tools by Eklund et al. (2016) have become a large topic within (and outside)
neuroimaging. They concluded that: existing parametric methods for determining
statistically significant clusters had greatly inflated FPRs ("up to 70%,"
mainly due to the faulty assumption that the noise spatial autocorrelation
function is Gaussian- shaped and stationary), calling into question potentially
"countless" previous results; in contrast, nonparametric methods, such as their
approach, accurately reflected nominal 5% FPRs. They also stated that AFNI
showed "particularly high" FPRs compared to other software, largely due to a
bug in 3dClustSim. We comment on these points using their own results and
figures and by repeating some of their simulations. Briefly, while parametric
methods show some FPR inflation in those tests (and assumptions of
Gaussian-shaped spatial smoothness also appear to be generally incorrect),
their emphasis on reporting the single worst result from thousands of
simulation cases greatly exaggerated the scale of the problem. Importantly, FPR
statistics depend on "task" paradigm and voxelwise p-value threshold; as such,
we show how results of their study provide useful suggestions for FMRI study
design and analysis, rather than simply a catastrophic downgrading of the
field's earlier results. Regarding AFNI (which we maintain), 3dClustSim's
bug-effect was greatly overstated - their own results show that AFNI results
were not "particularly" worse than others. We describe further updates in AFNI
for characterizing spatial smoothness more appropriately (greatly reducing
FPRs, though some remain >5%); additionally, we outline two newly implemented
permutation/randomization-based approaches producing FPRs clustered much more
tightly about 5% for voxelwise p<=0.01.
| [
{
"created": "Thu, 16 Feb 2017 03:12:50 GMT",
"version": "v1"
}
] | 2017-02-17 | [
[
"Cox",
"Robert W.",
""
],
[
"Chen",
"Gang",
""
],
[
"Glen",
"Daniel R.",
""
],
[
"Reynolds",
"Richard C.",
""
],
[
"Taylor",
"Paul A.",
""
]
] | Recent reports of inflated false positive rates (FPRs) in FMRI group analysis tools by Eklund et al. (2016) have become a large topic within (and outside) neuroimaging. They concluded that: existing parametric methods for determining statistically significant clusters had greatly inflated FPRs ("up to 70%," mainly due to the faulty assumption that the noise spatial autocorrelation function is Gaussian- shaped and stationary), calling into question potentially "countless" previous results; in contrast, nonparametric methods, such as their approach, accurately reflected nominal 5% FPRs. They also stated that AFNI showed "particularly high" FPRs compared to other software, largely due to a bug in 3dClustSim. We comment on these points using their own results and figures and by repeating some of their simulations. Briefly, while parametric methods show some FPR inflation in those tests (and assumptions of Gaussian-shaped spatial smoothness also appear to be generally incorrect), their emphasis on reporting the single worst result from thousands of simulation cases greatly exaggerated the scale of the problem. Importantly, FPR statistics depend on "task" paradigm and voxelwise p-value threshold; as such, we show how results of their study provide useful suggestions for FMRI study design and analysis, rather than simply a catastrophic downgrading of the field's earlier results. Regarding AFNI (which we maintain), 3dClustSim's bug-effect was greatly overstated - their own results show that AFNI results were not "particularly" worse than others. We describe further updates in AFNI for characterizing spatial smoothness more appropriately (greatly reducing FPRs, though some remain >5%); additionally, we outline two newly implemented permutation/randomization-based approaches producing FPRs clustered much more tightly about 5% for voxelwise p<=0.01. |
1507.06004 | Alexey Shvets | Alexey A. Shvets and Anatoly B. Kolomeisky | Sequence Heterogeneity Accelerates Protein Search for Targets on DNA | 10 pages, 5 figures | null | 10.1063/1.4937938 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The process of protein search for specific binding sites on DNA is
fundamentally important since it marks the beginning of all major biological
processes. We present a theoretical investigation that probes the role of DNA
sequence symmetry, heterogeneity and chemical composition in the protein search
dynamics. Using a discrete-state stochastic approach with a first-passage
events analysis, which takes into account the most relevant physical-chemical
processes, a full analytical description of the search dynamics is obtained. It
is found that, contrary to existing views, the protein search is generally
faster on DNA with more heterogeneous sequences. In addition, the search
dynamics might be affected by the chemical composition near the target site.
The physical origins of these phenomena are discussed. Our results suggest that
biological processes might be effectively regulated by modifying chemical
composition, symmetry and heterogeneity of a genome.
| [
{
"created": "Tue, 21 Jul 2015 22:13:21 GMT",
"version": "v1"
}
] | 2016-01-20 | [
[
"Shvets",
"Alexey A.",
""
],
[
"Kolomeisky",
"Anatoly B.",
""
]
] | The process of protein search for specific binding sites on DNA is fundamentally important since it marks the beginning of all major biological processes. We present a theoretical investigation that probes the role of DNA sequence symmetry, heterogeneity and chemical composition in the protein search dynamics. Using a discrete-state stochastic approach with a first-passage events analysis, which takes into account the most relevant physical-chemical processes, a full analytical description of the search dynamics is obtained. It is found that, contrary to existing views, the protein search is generally faster on DNA with more heterogeneous sequences. In addition, the search dynamics might be affected by the chemical composition near the target site. The physical origins of these phenomena are discussed. Our results suggest that biological processes might be effectively regulated by modifying chemical composition, symmetry and heterogeneity of a genome. |
2311.03056 | Andrew Green PhD | Andrew Green, Carlos Ribas, Nancy Ontiveros-Palacios, Sam
Griffiths-Jones, Anton I. Petrov, Alex Bateman and Blake Sweeney | LitSumm: Large language models for literature summarisation of
non-coding RNAs | null | null | null | null | q-bio.GN cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Curation of literature in life sciences is a growing challenge.
The continued increase in the rate of publication, coupled with the relatively
fixed number of curators worldwide presents a major challenge to developers of
biomedical knowledgebases. Very few knowledgebases have resources to scale to
the whole relevant literature and all have to prioritise their efforts.
Results: In this work, we take a first step to alleviating the lack of
curator time in RNA science by generating summaries of literature for
non-coding RNAs using large language models (LLMs). We demonstrate that
high-quality, factually accurate summaries with accurate references can be
automatically generated from the literature using a commercial LLM and a chain
of prompts and checks. Manual assessment was carried out for a subset of
summaries, with the majority being rated extremely high quality. We also
applied the most commonly used automated evaluation approaches, finding that
they do not correlate with human assessment. Finally, we apply our tool to a
selection of over 4,600 ncRNAs and make the generated summaries available via
the RNAcentral resource. We conclude that automated literature summarization is
feasible with the current generation of LLMs, provided careful prompting and
automated checking are applied.
Availability: Code used to produce these summaries can be found here:
https://github.com/RNAcentral/litscan-summarization and the dataset of contexts
and summaries can be found here:
https://huggingface.co/datasets/RNAcentral/litsumm-v1. Summaries are also
displayed on the RNA report pages in RNAcentral (https://rnacentral.org/)
| [
{
"created": "Mon, 6 Nov 2023 12:22:19 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Mar 2024 15:00:57 GMT",
"version": "v2"
},
{
"created": "Fri, 19 Apr 2024 14:50:49 GMT",
"version": "v3"
}
] | 2024-04-22 | [
[
"Green",
"Andrew",
""
],
[
"Ribas",
"Carlos",
""
],
[
"Ontiveros-Palacios",
"Nancy",
""
],
[
"Griffiths-Jones",
"Sam",
""
],
[
"Petrov",
"Anton I.",
""
],
[
"Bateman",
"Alex",
""
],
[
"Sweeney",
"Blake",
""... | Motivation: Curation of literature in life sciences is a growing challenge. The continued increase in the rate of publication, coupled with the relatively fixed number of curators worldwide presents a major challenge to developers of biomedical knowledgebases. Very few knowledgebases have resources to scale to the whole relevant literature and all have to prioritise their efforts. Results: In this work, we take a first step to alleviating the lack of curator time in RNA science by generating summaries of literature for non-coding RNAs using large language models (LLMs). We demonstrate that high-quality, factually accurate summaries with accurate references can be automatically generated from the literature using a commercial LLM and a chain of prompts and checks. Manual assessment was carried out for a subset of summaries, with the majority being rated extremely high quality. We also applied the most commonly used automated evaluation approaches, finding that they do not correlate with human assessment. Finally, we apply our tool to a selection of over 4,600 ncRNAs and make the generated summaries available via the RNAcentral resource. We conclude that automated literature summarization is feasible with the current generation of LLMs, provided careful prompting and automated checking are applied. Availability: Code used to produce these summaries can be found here: https://github.com/RNAcentral/litscan-summarization and the dataset of contexts and summaries can be found here: https://huggingface.co/datasets/RNAcentral/litsumm-v1. Summaries are also displayed on the RNA report pages in RNAcentral (https://rnacentral.org/) |
2005.06504 | Emilio Gallicchio | Sheenam Khuttan, Solmaz Azimi, Joe Z. Wu, Emilio Gallicchio | Alchemical Transformations for Concerted Hydration Free Energy
Estimation with Explicit Solvation | null | null | 10.1063/5.0036944 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | We present a family of alchemical perturbation potentials that enable the
calculation of hydration free energies of small to medium-sized molecules in a
concerted single alchemical coupling step instead of the commonly used sequence
of two distinct coupling steps for Lennard-Jones and electrostatic
interactions. The perturbation potentials are based on the softplus function of
the solute-solvent interaction energy designed to focus sampling near entropic
bottlenecks along the alchemical pathway. We present a general framework to
optimize the parameters of alchemical perturbation potentials of this kind. The
optimization procedure is based on the $\lambda$-function formalism and the
maximum-likelihood parameter estimation procedure we developed earlier to avoid
the occurrence of multi-modal distributions of the coupling energy along the
alchemical path. A novel soft-core function applied to the overall
solute-solvent interaction energy rather than individual interatomic pair
potentials critical for this result is also presented. Because it does not
require modifications of core force and energy routines, the soft-core
formulation can be easily deployed in molecular dynamics simulation codes. We
illustrate the method by applying it to the estimation of the hydration free
energy in water droplets of compounds of varying size and complexity. In each
case, we show that convergence of the hydration free energy is achieved
rapidly. This work paves the way for the ongoing development of more
streamlined algorithms to estimate free energies of molecular binding with
explicit solvation.
| [
{
"created": "Wed, 13 May 2020 18:18:45 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Nov 2020 15:59:41 GMT",
"version": "v2"
}
] | 2021-02-24 | [
[
"Khuttan",
"Sheenam",
""
],
[
"Azimi",
"Solmaz",
""
],
[
"Wu",
"Joe Z.",
""
],
[
"Gallicchio",
"Emilio",
""
]
] | We present a family of alchemical perturbation potentials that enable the calculation of hydration free energies of small to medium-sized molecules in a concerted single alchemical coupling step instead of the commonly used sequence of two distinct coupling steps for Lennard-Jones and electrostatic interactions. The perturbation potentials are based on the softplus function of the solute-solvent interaction energy designed to focus sampling near entropic bottlenecks along the alchemical pathway. We present a general framework to optimize the parameters of alchemical perturbation potentials of this kind. The optimization procedure is based on the $\lambda$-function formalism and the maximum-likelihood parameter estimation procedure we developed earlier to avoid the occurrence of multi-modal distributions of the coupling energy along the alchemical path. A novel soft-core function applied to the overall solute-solvent interaction energy rather than individual interatomic pair potentials critical for this result is also presented. Because it does not require modifications of core force and energy routines, the soft-core formulation can be easily deployed in molecular dynamics simulation codes. We illustrate the method by applying it to the estimation of the hydration free energy in water droplets of compounds of varying size and complexity. In each case, we show that convergence of the hydration free energy is achieved rapidly. This work paves the way for the ongoing development of more streamlined algorithms to estimate free energies of molecular binding with explicit solvation. |
q-bio/0503040 | Chikara Furusawa | Chikara Furusawa, Takao Suzuki, Akiko Kashiwagi, Tetsuya Yomo, and
Kunihiko Kaneko | Ubiquity of Log-normal Distributions in Intra-cellular Reaction Dynamic | 15 pages, 4 figures. BIOPHYSICS, in press | BIOPHYSICS, 1 (2005) pp. 25 | null | null | q-bio.MN | null | The discovery of two fundamental laws concerning cellular dynamics with
recursive growth is reported. First, the chemical abundances measured over many
cells are found to obey a log-normal distribution and second, the relationship
between the average and standard deviation of the abundances is found to be
linear. The ubiquity of the laws is explored both theoretically and
experimentally. First by means of a model with a catalytic reaction network,
the laws are shown to appear near the critical state with efficient
self-reproduction. Second by measuring distributions of fluorescent proteins in
bacteria cells the ubiquity of log-normal distribution of protein abundances is
confirmed. Relevance of these findings to cellular function and biological
plasticity is briefly discussed.
| [
{
"created": "Tue, 29 Mar 2005 07:14:02 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Furusawa",
"Chikara",
""
],
[
"Suzuki",
"Takao",
""
],
[
"Kashiwagi",
"Akiko",
""
],
[
"Yomo",
"Tetsuya",
""
],
[
"Kaneko",
"Kunihiko",
""
]
] | The discovery of two fundamental laws concerning cellular dynamics with recursive growth is reported. First, the chemical abundances measured over many cells are found to obey a log-normal distribution and second, the relationship between the average and standard deviation of the abundances is found to be linear. The ubiquity of the laws is explored both theoretically and experimentally. First by means of a model with a catalytic reaction network, the laws are shown to appear near the critical state with efficient self-reproduction. Second by measuring distributions of fluorescent proteins in bacteria cells the ubiquity of log-normal distribution of protein abundances is confirmed. Relevance of these findings to cellular function and biological plasticity is briefly discussed. |
2104.14005 | Karishma Chhugani | Sergey Knyazev, Karishma Chhugani, Varuni Sarwal, Ram Ayyala, Harman
Singh, Smruthi Karthikeyan, Dhrithi Deshpande, Zoia Comarova, Angela Lu, Yuri
Porozov, Aiping Wu, Malak Abedalthagafi, Shivashankar Nagaraj, Adam Smith,
Pavel Skums, Jason Ladner, Tommy Tsan-Yuk Lam, Nicholas Wu, Alex Zelikovsky,
Rob Knight, Keith Crandall, Serghei Mangul | Unlocking capacities of viral genomics for the COVID-19 pandemic
response | null | null | null | null | q-bio.GN q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | More than any other infectious disease epidemic, the COVID-19 pandemic has
been characterized by the generation of large volumes of viral genomic data at
an incredible pace due to recent advances in high-throughput sequencing
technologies, the rapid global spread of SARS-CoV-2, and its persistent threat
to public health. However, distinguishing the most epidemiologically relevant
information encoded in these vast amounts of data requires substantial effort
across the research and public health communities. Studies of SARS-CoV-2
genomes have been critical in tracking the spread of variants and understanding
its epidemic dynamics, and may prove crucial for controlling future epidemics
and alleviating significant public health burdens. Together, genomic data and
bioinformatics methods enable broad-scale investigations of the spread of
SARS-CoV-2 at the local, national, and global scales and allow researchers the
ability to efficiently track the emergence of novel variants, reconstruct
epidemic dynamics, and provide important insights into drug and vaccine
development and disease control. Here, we discuss the tremendous opportunities
that genomics offers to unlock the effective use of SARS-CoV-2 genomic data for
efficient public health surveillance and guiding timely responses to COVID-19.
| [
{
"created": "Wed, 28 Apr 2021 20:22:38 GMT",
"version": "v1"
},
{
"created": "Tue, 4 May 2021 17:19:11 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Jun 2021 17:31:18 GMT",
"version": "v3"
}
] | 2021-06-07 | [
[
"Knyazev",
"Sergey",
""
],
[
"Chhugani",
"Karishma",
""
],
[
"Sarwal",
"Varuni",
""
],
[
"Ayyala",
"Ram",
""
],
[
"Singh",
"Harman",
""
],
[
"Karthikeyan",
"Smruthi",
""
],
[
"Deshpande",
"Dhrithi",
""
],... | More than any other infectious disease epidemic, the COVID-19 pandemic has been characterized by the generation of large volumes of viral genomic data at an incredible pace due to recent advances in high-throughput sequencing technologies, the rapid global spread of SARS-CoV-2, and its persistent threat to public health. However, distinguishing the most epidemiologically relevant information encoded in these vast amounts of data requires substantial effort across the research and public health communities. Studies of SARS-CoV-2 genomes have been critical in tracking the spread of variants and understanding its epidemic dynamics, and may prove crucial for controlling future epidemics and alleviating significant public health burdens. Together, genomic data and bioinformatics methods enable broad-scale investigations of the spread of SARS-CoV-2 at the local, national, and global scales and allow researchers the ability to efficiently track the emergence of novel variants, reconstruct epidemic dynamics, and provide important insights into drug and vaccine development and disease control. Here, we discuss the tremendous opportunities that genomics offers to unlock the effective use of SARS-CoV-2 genomic data for efficient public health surveillance and guiding timely responses to COVID-19. |
2210.12068 | Jason Toy | Jason Toy | Grid cells and their potential application in AI | null | null | null | null | q-bio.NC cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Since their Nobel Prize winning discovery in 2005, grid cells have been
studied extensively by neuroscientists. Their multi-scale periodic firing rates
tiling the environment as the animal moves around has been shown as critical
for path integration. Multiple experiments have shown that grid cells also fire
for other representations such as olfactory, attention mechanisms, imagined
movement, and concept organization potentially acting as a form of neural
recycling and showing the possible brain mechanism for cognitive maps that
Tolman envisioned in 1948. Grid cell integration into artificial neural
networks may enable more robust, generalized, and smarter computers. In this
paper we give an overview of grid cell research since their discovery, their
role in neuroscience and cognitive science, and possible future directions of
artificial intelligence research.
| [
{
"created": "Wed, 12 Oct 2022 22:46:12 GMT",
"version": "v1"
}
] | 2022-10-24 | [
[
"Toy",
"Jason",
""
]
] | Since their Nobel Prize winning discovery in 2005, grid cells have been studied extensively by neuroscientists. Their multi-scale periodic firing rates tiling the environment as the animal moves around has been shown as critical for path integration. Multiple experiments have shown that grid cells also fire for other representations such as olfactory, attention mechanisms, imagined movement, and concept organization potentially acting as a form of neural recycling and showing the possible brain mechanism for cognitive maps that Tolman envisioned in 1948. Grid cell integration into artificial neural networks may enable more robust, generalized, and smarter computers. In this paper we give an overview of grid cell research since their discovery, their role in neuroscience and cognitive science, and possible future directions of artificial intelligence research. |
2107.07834 | Kristina Wicke | Magnus Bordewich, Charles Semple, Kristina Wicke | On the Complexity of Optimising Variants of Phylogenetic Diversity on
Phylogenetic Networks | 22 pages, 4 figures | null | null | null | q-bio.PE math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phylogenetic Diversity (PD) is a prominent quantitative measure of the
biodiversity of a collection of present-day species (taxa). This measure is
based on the evolutionary distance among the species in the collection. Loosely
speaking, if $\mathcal{T}$ is a rooted phylogenetic tree whose leaf set $X$
represents a set of species and whose edges have real-valued lengths (weights),
then the PD score of a subset $S$ of $X$ is the sum of the weights of the edges
of the minimal subtree of $\mathcal{T}$ connecting the species in $S$. In this
paper, we define several natural variants of the PD score for a subset of taxa
which are related by a known rooted phylogenetic network. Under these variants,
we explore, for a positive integer $k$, the computational complexity of
determining the maximum PD score over all subsets of taxa of size $k$ when the
input is restricted to different classes of rooted phylogenetic networks
| [
{
"created": "Fri, 16 Jul 2021 11:43:35 GMT",
"version": "v1"
}
] | 2021-07-20 | [
[
"Bordewich",
"Magnus",
""
],
[
"Semple",
"Charles",
""
],
[
"Wicke",
"Kristina",
""
]
] | Phylogenetic Diversity (PD) is a prominent quantitative measure of the biodiversity of a collection of present-day species (taxa). This measure is based on the evolutionary distance among the species in the collection. Loosely speaking, if $\mathcal{T}$ is a rooted phylogenetic tree whose leaf set $X$ represents a set of species and whose edges have real-valued lengths (weights), then the PD score of a subset $S$ of $X$ is the sum of the weights of the edges of the minimal subtree of $\mathcal{T}$ connecting the species in $S$. In this paper, we define several natural variants of the PD score for a subset of taxa which are related by a known rooted phylogenetic network. Under these variants, we explore, for a positive integer $k$, the computational complexity of determining the maximum PD score over all subsets of taxa of size $k$ when the input is restricted to different classes of rooted phylogenetic networks |
1301.0513 | Jacob Oppenheim | Jacob N. Oppenheim, Pavel Isakov, and Marcelo O. Magnasco | Minimal Bounds on Nonlinearity in Auditory Processing | 9 pages, 3 figures | null | null | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time-reversal symmetry breaking is a key feature of nearly all natural
sounds, caused by the physics of sound production. While attention has been
paid to the response of the auditory system to "natural stimuli," very few
psychophysical tests have been performed. We conduct psychophysical
measurements of time-frequency acuity for both "natural" notes (sharp attack,
long decay) and time-reversed ones. Our results demonstrate significantly
greater precision, arising from enhanced temporal acuity, for such "natural"
sounds over both their time-reversed versions and theoretically optimal
gaussian pulses, without a corresponding decrease in frequency acuity. These
data rule out models of auditory processing that obey a modified "uncertainty
principle" between temporal and frequency acuity and suggest the existence of
statistical priors for naturalistic stimuli, in the form of sharp-attack,
long-decay notes. We are additionally able to calculate a minimal theoretical
bound on the order of the nonlinearity present in auditory processing. We find
that only matching pursuit, spectral derivatives, and reassigned spectrograms
are able to satisfy this criterion.
| [
{
"created": "Thu, 3 Jan 2013 17:24:46 GMT",
"version": "v1"
}
] | 2013-01-04 | [
[
"Oppenheim",
"Jacob N.",
""
],
[
"Isakov",
"Pavel",
""
],
[
"Magnasco",
"Marcelo O.",
""
]
] | Time-reversal symmetry breaking is a key feature of nearly all natural sounds, caused by the physics of sound production. While attention has been paid to the response of the auditory system to "natural stimuli," very few psychophysical tests have been performed. We conduct psychophysical measurements of time-frequency acuity for both "natural" notes (sharp attack, long decay) and time-reversed ones. Our results demonstrate significantly greater precision, arising from enhanced temporal acuity, for such "natural" sounds over both their time-reversed versions and theoretically optimal gaussian pulses, without a corresponding decrease in frequency acuity. These data rule out models of auditory processing that obey a modified "uncertainty principle" between temporal and frequency acuity and suggest the existence of statistical priors for naturalistic stimuli, in the form of sharp-attack, long-decay notes. We are additionally able to calculate a minimal theoretical bound on the order of the nonlinearity present in auditory processing. We find that only matching pursuit, spectral derivatives, and reassigned spectrograms are able to satisfy this criterion. |
1303.0090 | David Bowler | Milica Todorovi\'c and D. R. Bowler and M. J. Gillan and Tsuyoshi
Miyazaki | Density-functional theory study of gramicidin A ion channel geometry and
electronic properties | 15 pages, six figures, accepted for publication in J. Roy. Soc.
Interface | null | null | null | q-bio.BM physics.bio-ph physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the mechanisms underlying ion channel function from the
atomic-scale requires accurate ab initio modelling as well as careful
experiments. Here, we present a density functional theory (DFT) study of the
ion channel gramicidin A, whose inner pore conducts only monovalent cations and
whose conductance has been shown to depend on the side chains of the amino
acids in the channel. We investigate the ground-state geometry and electronic
properties of the channel in vacuum, focusing on their dependence on the side
chains of the amino acids. We find that the side chains affect the ground state
geometry, while the electrostatic potential of the pore is independent of the
side chains. This study is also in preparation for a full, linear scaling DFT
study of gramicidin A in a lipid bilayer with surrounding water. We demonstrate
that linear scaling DFT methods can accurately model the system with reasonable
computational cost. Linear scaling DFT allows ab initio calculations with
10,000 to 100,000 atoms and beyond, and will be an important new tool for
biomolecular simulations.
| [
{
"created": "Fri, 1 Mar 2013 05:50:37 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Mar 2013 01:29:41 GMT",
"version": "v2"
},
{
"created": "Wed, 4 Sep 2013 11:11:27 GMT",
"version": "v3"
}
] | 2013-09-05 | [
[
"Todorović",
"Milica",
""
],
[
"Bowler",
"D. R.",
""
],
[
"Gillan",
"M. J.",
""
],
[
"Miyazaki",
"Tsuyoshi",
""
]
] | Understanding the mechanisms underlying ion channel function from the atomic-scale requires accurate ab initio modelling as well as careful experiments. Here, we present a density functional theory (DFT) study of the ion channel gramicidin A, whose inner pore conducts only monovalent cations and whose conductance has been shown to depend on the side chains of the amino acids in the channel. We investigate the ground-state geometry and electronic properties of the channel in vacuum, focusing on their dependence on the side chains of the amino acids. We find that the side chains affect the ground state geometry, while the electrostatic potential of the pore is independent of the side chains. This study is also in preparation for a full, linear scaling DFT study of gramicidin A in a lipid bilayer with surrounding water. We demonstrate that linear scaling DFT methods can accurately model the system with reasonable computational cost. Linear scaling DFT allows ab initio calculations with 10,000 to 100,000 atoms and beyond, and will be an important new tool for biomolecular simulations. |
1709.09904 | Manuel Camb\'on | Manuel Camb\'on | Analysis of biochemical mechanisms provoking differential spatial
expression in Hh target genes | null | null | null | null | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work analyses the transcriptional effects of some biochemical mechanisms
proposed in previous literature which attempts to explain the differential
spatial expression of Hedgehog target genes involved in Drosophila development.
Specifically, the expression of decapentaplegic and patched, genes whose
transcription is believed to be controlled by the activator and repressor forms
of the transcription factor Cubitus interruptus (Ci). This study is based on a
thermodynamic approach which provides binding equilibrium weighted average rate
expressions for genes controlled by transcription factors competing and
(possibly) cooperating for common binding sites, in the same way that Ci's
activator and repressor forms might do. These expressions are refined to
produce simpler equivalent formulae allowing their mathematical analysis.
Thanks to this, we can evaluate the correlation between several molecular
processes and biological features observed at tissular level. In particular, we
will focus on how high/low/differential affinity and null/total/partial
cooperation modify the activation/repression regions of the target genes or
provoke signal modulation.
| [
{
"created": "Thu, 28 Sep 2017 11:56:01 GMT",
"version": "v1"
}
] | 2017-09-29 | [
[
"Cambón",
"Manuel",
""
]
] | This work analyses the transcriptional effects of some biochemical mechanisms proposed in previous literature which attempts to explain the differential spatial expression of Hedgehog target genes involved in Drosophila development. Specifically, the expression of decapentaplegic and patched, genes whose transcription is believed to be controlled by the activator and repressor forms of the transcription factor Cubitus interruptus (Ci). This study is based on a thermodynamic approach which provides binding equilibrium weighted average rate expressions for genes controlled by transcription factors competing and (possibly) cooperating for common binding sites, in the same way that Ci's activator and repressor forms might do. These expressions are refined to produce simpler equivalent formulae allowing their mathematical analysis. Thanks to this, we can evaluate the correlation between several molecular processes and biological features observed at tissular level. In particular, we will focus on how high/low/differential affinity and null/total/partial cooperation modify the activation/repression regions of the target genes or provoke signal modulation. |
1305.4160 | James Trousdale | James Trousdale, Yu Hu, Eric Shea-Brown and Kre\v{s}imir Josi\'c | A generative spike train model with time-structured higher order
correlations | null | null | null | null | q-bio.NC math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emerging technologies are revealing the spiking activity in ever larger
neural ensembles. Frequently, this spiking is far from independent, with
correlations in the spike times of different cells. Understanding how such
correlations impact the dynamics and function of neural ensembles remains an
important open problem. Here we describe a new, generative model for correlated
spike trains that can exhibit many of the features observed in data. Extending
prior work in mathematical finance, this generalized thinning and shift (GTaS)
model creates marginally Poisson spike trains with diverse temporal correlation
structures. We give several examples which highlight the model's flexibility
and utility. For instance, we use it to examine how a neural network responds
to highly structured patterns of inputs. We then show that the GTaS model is
analytically tractable, and derive cumulant densities of all orders in terms of
model parameters. The GTaS framework can therefore be an important tool in the
experimental and theoretical exploration of neural dynamics.
| [
{
"created": "Fri, 17 May 2013 19:17:38 GMT",
"version": "v1"
}
] | 2013-05-20 | [
[
"Trousdale",
"James",
""
],
[
"Hu",
"Yu",
""
],
[
"Shea-Brown",
"Eric",
""
],
[
"Josić",
"Krešimir",
""
]
] | Emerging technologies are revealing the spiking activity in ever larger neural ensembles. Frequently, this spiking is far from independent, with correlations in the spike times of different cells. Understanding how such correlations impact the dynamics and function of neural ensembles remains an important open problem. Here we describe a new, generative model for correlated spike trains that can exhibit many of the features observed in data. Extending prior work in mathematical finance, this generalized thinning and shift (GTaS) model creates marginally Poisson spike trains with diverse temporal correlation structures. We give several examples which highlight the model's flexibility and utility. For instance, we use it to examine how a neural network responds to highly structured patterns of inputs. We then show that the GTaS model is analytically tractable, and derive cumulant densities of all orders in terms of model parameters. The GTaS framework can therefore be an important tool in the experimental and theoretical exploration of neural dynamics. |
2006.12148 | Kuan-Ting Chou | Chen-Zhi Su, Kuan-Ting Chou, Hsuan-Pei Huang, Chung-Chuan Lo, and
Daw-Wei Wang | Identification of Neuronal Polarity by Node-Based Machine Learning | Manuscript: 18 pages and 9 figures; Appendix: 14 pages, 5 figures,
and 2 tables | null | 10.1101/2020.06.20.160564 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identify the directions of signal flows in neural networks is one of the most
important stages for understanding the intricate information dynamics of a
living brain. Using a dataset of 213 projection neurons distributed in
different regions of Drosophila brain, we develop a powerful machine learning
algorithm: node-based polarity identifier of neurons (NPIN). The proposed model
is trained by nodal information only and includes both Soma Features (which
contain spatial information from a given node to a soma) and Local Features
(which contain morphological information of a given node). After including the
spatial correlations between nodal polarities, our NPIN provided extremely high
accuracy (>96.0%) for the classification of neuronal polarity, even for complex
neurons with more than two dendrite/axon clusters. Finally, we further apply
NPIN to classify the neuronal polarity of the blowfly, which has much less
neuronal data available. Our results demonstrate that NPIN is a powerful tool
to identify the neuronal polarity of insects and to map out the signal flows in
the brain's neural networks.
| [
{
"created": "Mon, 22 Jun 2020 11:24:51 GMT",
"version": "v1"
}
] | 2020-06-23 | [
[
"Su",
"Chen-Zhi",
""
],
[
"Chou",
"Kuan-Ting",
""
],
[
"Huang",
"Hsuan-Pei",
""
],
[
"Lo",
"Chung-Chuan",
""
],
[
"Wang",
"Daw-Wei",
""
]
] | Identify the directions of signal flows in neural networks is one of the most important stages for understanding the intricate information dynamics of a living brain. Using a dataset of 213 projection neurons distributed in different regions of Drosophila brain, we develop a powerful machine learning algorithm: node-based polarity identifier of neurons (NPIN). The proposed model is trained by nodal information only and includes both Soma Features (which contain spatial information from a given node to a soma) and Local Features (which contain morphological information of a given node). After including the spatial correlations between nodal polarities, our NPIN provided extremely high accuracy (>96.0%) for the classification of neuronal polarity, even for complex neurons with more than two dendrite/axon clusters. Finally, we further apply NPIN to classify the neuronal polarity of the blowfly, which has much less neuronal data available. Our results demonstrate that NPIN is a powerful tool to identify the neuronal polarity of insects and to map out the signal flows in the brain's neural networks. |
2407.15202 | Qizhi Pei | Qizhi Pei, Lijun Wu, Zhenyu He, Jinhua Zhu, Yingce Xia, Shufang Xie,
Rui Yan | Exploiting Pre-trained Models for Drug Target Affinity Prediction with
Nearest Neighbors | Accepted by 33rd ACM International Conference on Information and
Knowledge Management 2024 (CIKM 2024) | null | 10.1145/3627673.3679704 | null | q-bio.BM cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Drug-Target binding Affinity (DTA) prediction is essential for drug
discovery. Despite the application of deep learning methods to DTA prediction,
the achieved accuracy remain suboptimal. In this work, inspired by the recent
success of retrieval methods, we propose $k$NN-DTA, a non-parametric
embedding-based retrieval method adopted on a pre-trained DTA prediction model,
which can extend the power of the DTA model with no or negligible cost.
Different from existing methods, we introduce two neighbor aggregation ways
from both embedding space and label space that are integrated into a unified
framework. Specifically, we propose a \emph{label aggregation} with
\emph{pair-wise retrieval} and a \emph{representation aggregation} with
\emph{point-wise retrieval} of the nearest neighbors. This method executes in
the inference phase and can efficiently boost the DTA prediction performance
with no training cost. In addition, we propose an extension, Ada-$k$NN-DTA, an
instance-wise and adaptive aggregation with lightweight learning. Results on
four benchmark datasets show that $k$NN-DTA brings significant improvements,
outperforming previous state-of-the-art (SOTA) results, e.g, on BindingDB
IC$_{50}$ and $K_i$ testbeds, $k$NN-DTA obtains new records of RMSE
$\bf{0.684}$ and $\bf{0.750}$. The extended Ada-$k$NN-DTA further improves the
performance to be $\bf{0.675}$ and $\bf{0.735}$ RMSE. These results strongly
prove the effectiveness of our method. Results in other settings and
comprehensive studies/analyses also show the great potential of our $k$NN-DTA
approach.
| [
{
"created": "Sun, 21 Jul 2024 15:49:05 GMT",
"version": "v1"
}
] | 2024-07-23 | [
[
"Pei",
"Qizhi",
""
],
[
"Wu",
"Lijun",
""
],
[
"He",
"Zhenyu",
""
],
[
"Zhu",
"Jinhua",
""
],
[
"Xia",
"Yingce",
""
],
[
"Xie",
"Shufang",
""
],
[
"Yan",
"Rui",
""
]
] | Drug-Target binding Affinity (DTA) prediction is essential for drug discovery. Despite the application of deep learning methods to DTA prediction, the achieved accuracy remain suboptimal. In this work, inspired by the recent success of retrieval methods, we propose $k$NN-DTA, a non-parametric embedding-based retrieval method adopted on a pre-trained DTA prediction model, which can extend the power of the DTA model with no or negligible cost. Different from existing methods, we introduce two neighbor aggregation ways from both embedding space and label space that are integrated into a unified framework. Specifically, we propose a \emph{label aggregation} with \emph{pair-wise retrieval} and a \emph{representation aggregation} with \emph{point-wise retrieval} of the nearest neighbors. This method executes in the inference phase and can efficiently boost the DTA prediction performance with no training cost. In addition, we propose an extension, Ada-$k$NN-DTA, an instance-wise and adaptive aggregation with lightweight learning. Results on four benchmark datasets show that $k$NN-DTA brings significant improvements, outperforming previous state-of-the-art (SOTA) results, e.g, on BindingDB IC$_{50}$ and $K_i$ testbeds, $k$NN-DTA obtains new records of RMSE $\bf{0.684}$ and $\bf{0.750}$. The extended Ada-$k$NN-DTA further improves the performance to be $\bf{0.675}$ and $\bf{0.735}$ RMSE. These results strongly prove the effectiveness of our method. Results in other settings and comprehensive studies/analyses also show the great potential of our $k$NN-DTA approach. |
2401.03068 | Richard Abdill | Richard J. Abdill, Emma Talarico, Laura Grieneisen | A how-to guide for code-sharing in biology | 19 pages, 1 figure; for supporting data see
https://doi.org/10.5281/zenodo.10459940 | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | Computational biology continues to spread into new fields, becoming more
accessible to researchers trained in the wet lab who are eager to take
advantage of growing datasets, falling costs, and novel assays that present new
opportunities for discovery even outside of the much-discussed developments in
artificial intelligence. However, guidance for implementing these techniques is
much easier to find than guidance for reporting their use, leaving biologists
to guess which details and files are relevant. Here, we provide a set of
recommendations for sharing code, with an eye toward guiding those who are
comparatively new to applying open science principles to their computational
work. Additionally, we review existing literature on the topic, summarize the
most common tips, and evaluate the code-sharing policies of the most
influential journals in biology, which occasionally encourage code-sharing but
seldom require it. Taken together, we provide a user manual for biologists who
seek to follow code-sharing best practices but are unsure where to start.
| [
{
"created": "Fri, 5 Jan 2024 21:22:44 GMT",
"version": "v1"
}
] | 2024-01-09 | [
[
"Abdill",
"Richard J.",
""
],
[
"Talarico",
"Emma",
""
],
[
"Grieneisen",
"Laura",
""
]
] | Computational biology continues to spread into new fields, becoming more accessible to researchers trained in the wet lab who are eager to take advantage of growing datasets, falling costs, and novel assays that present new opportunities for discovery even outside of the much-discussed developments in artificial intelligence. However, guidance for implementing these techniques is much easier to find than guidance for reporting their use, leaving biologists to guess which details and files are relevant. Here, we provide a set of recommendations for sharing code, with an eye toward guiding those who are comparatively new to applying open science principles to their computational work. Additionally, we review existing literature on the topic, summarize the most common tips, and evaluate the code-sharing policies of the most influential journals in biology, which occasionally encourage code-sharing but seldom require it. Taken together, we provide a user manual for biologists who seek to follow code-sharing best practices but are unsure where to start. |
1702.02485 | Laurent Perrinet | Laurent U Perrinet (INT) | Biologically-inspired characterization of sparseness in natural images | arXiv admin note: substantial text overlap with arXiv:1611.06834 | 6th European Workshop on Visual Information Processing (EUVIP),
Oct 2016, Marseille, France. pp.1--6, 2016 | 10.1109/EUVIP.2016.7764592 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural images follow statistics inherited by the structure of our physical
(visual) environment. In particular, a prominent facet of this structure is
that images can be described by a relatively sparse number of features. We
designed a sparse coding algorithm biologically-inspired by the architecture of
the primary visual cortex. We show here that coefficients of this
representation exhibit a heavy-tailed distribution. For each image, the
parameters of this distribution characterize sparseness and vary from image to
image. To investigate the role of this sparseness, we designed a new class of
random textured stimuli with a controlled sparseness value inspired by our
measurements on natural images. Then, we provide with a method to synthesize
random textures images with a given statistics for sparseness that matches that
of some given class of natural images and provide perspectives for their use in
neurophysiology.
| [
{
"created": "Wed, 8 Feb 2017 15:57:57 GMT",
"version": "v1"
}
] | 2017-02-09 | [
[
"Perrinet",
"Laurent U",
"",
"INT"
]
] | Natural images follow statistics inherited by the structure of our physical (visual) environment. In particular, a prominent facet of this structure is that images can be described by a relatively sparse number of features. We designed a sparse coding algorithm biologically-inspired by the architecture of the primary visual cortex. We show here that coefficients of this representation exhibit a heavy-tailed distribution. For each image, the parameters of this distribution characterize sparseness and vary from image to image. To investigate the role of this sparseness, we designed a new class of random textured stimuli with a controlled sparseness value inspired by our measurements on natural images. Then, we provide with a method to synthesize random textures images with a given statistics for sparseness that matches that of some given class of natural images and provide perspectives for their use in neurophysiology. |
1512.01156 | Vince Grolmusz | Bal\'azs Szalkai, B\'alint Varga, Vince Grolmusz | The Advantage is at the Ladies: Brain Size Bias-Compensated
Graph-Theoretical Parameters are Also Better in Women's Connectomes | arXiv admin note: substantial text overlap with arXiv:1501.00727 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In our previous study we have shown that the female connectomes have
significantly better, deep graph-theoretical parameters, related to superior
"connectivity", than the connectome of the males. Since the average female
brain is smaller than the average male brain, one cannot rule out that the
significant advantages are due to the size- and not to the sex-differences in
the data. To filter out the possible brain-volume related artifacts, we have
chosen 36 small male and 36 large female brains such that all the brains in the
female set are larger than all the brains in the male set. For the sets, we
have computed the corresponding braingraphs and computed numerous
graph-theoretical parameters. We have found that (i) the small male brains lack
the better connectivity advantages shown in our previous study for female
brains in general; (ii) in numerous parameters, the connectomes computed from
the large-brain females, still have the significant, deep connectivity
advantages, demonstrated in our previous study.
| [
{
"created": "Thu, 3 Dec 2015 16:50:32 GMT",
"version": "v1"
}
] | 2015-12-04 | [
[
"Szalkai",
"Balázs",
""
],
[
"Varga",
"Bálint",
""
],
[
"Grolmusz",
"Vince",
""
]
] | In our previous study we have shown that the female connectomes have significantly better, deep graph-theoretical parameters, related to superior "connectivity", than the connectome of the males. Since the average female brain is smaller than the average male brain, one cannot rule out that the significant advantages are due to the size- and not to the sex-differences in the data. To filter out the possible brain-volume related artifacts, we have chosen 36 small male and 36 large female brains such that all the brains in the female set are larger than all the brains in the male set. For the sets, we have computed the corresponding braingraphs and computed numerous graph-theoretical parameters. We have found that (i) the small male brains lack the better connectivity advantages shown in our previous study for female brains in general; (ii) in numerous parameters, the connectomes computed from the large-brain females, still have the significant, deep connectivity advantages, demonstrated in our previous study. |
q-bio/0508013 | Jesus M. Cortes | J. M. Cortes, J. J. Torres, J. Marro, P. L. Garrido and H. J. Kappen | Effects of fast presynaptic noise in attractor neural networks | 12 pages, 6 figures. To appear in Neural Computation, 2005 | null | null | null | q-bio.NC | null | We study both analytically and numerically the effect of presynaptic noise on
the transmission of information in attractor neural networks. The noise occurs
on a very short-time scale compared to that for the neuron dynamics and it
produces short-time synaptic depression. This is inspired in recent
neurobiological findings that show that synaptic strength may either increase
or decrease on a short-time scale depending on presynaptic activity. We thus
describe a mechanism by which fast presynaptic noise enhances the neural
network sensitivity to an external stimulus. The reason for this is that, in
general, the presynaptic noise induces nonequilibrium behavior and,
consequently, the space of fixed points is qualitatively modified in such a way
that the system can easily scape from the attractor. As a result, the model
shows, in addition to pattern recognition, class identification and
categorization, which may be relevant to the understanding of some of the brain
complex tasks.
| [
{
"created": "Sat, 13 Aug 2005 10:47:38 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Cortes",
"J. M.",
""
],
[
"Torres",
"J. J.",
""
],
[
"Marro",
"J.",
""
],
[
"Garrido",
"P. L.",
""
],
[
"Kappen",
"H. J.",
""
]
] | We study both analytically and numerically the effect of presynaptic noise on the transmission of information in attractor neural networks. The noise occurs on a very short-time scale compared to that for the neuron dynamics and it produces short-time synaptic depression. This is inspired in recent neurobiological findings that show that synaptic strength may either increase or decrease on a short-time scale depending on presynaptic activity. We thus describe a mechanism by which fast presynaptic noise enhances the neural network sensitivity to an external stimulus. The reason for this is that, in general, the presynaptic noise induces nonequilibrium behavior and, consequently, the space of fixed points is qualitatively modified in such a way that the system can easily scape from the attractor. As a result, the model shows, in addition to pattern recognition, class identification and categorization, which may be relevant to the understanding of some of the brain complex tasks. |
1812.07157 | Yuncheng Du | Jeongeun Son, Dongping Du, Yuncheng Du | Stochastic Modelling and Dynamic Analysis of Cardiovascular System with
Rotary Left Ventricular Assist Devices | null | Mathematical Problems in Engineering, 2018 | null | null | q-bio.TO physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The left ventricular assist device (LVAD) has been used for end-stage heart
failure patients as a therapeutic option. The aortic valve plays a critical
role in heart failure and its treatment with LVAD. The cardiovascular-LVAD
model is often used to investigate the physiological demands required by
patients and predict the hemodynamic of the native heart supported with a LVAD.
As a bridge to recovery treatment, it is important to maintain appropriate and
active dynamics of the aortic valve and the cardiac output of the native heart,
which requires that the LVAD pump must be adjusted so that a proper balance
between the blood contributed through the aortic valve and the pump is
maintained. In this paper, our objective is to identify a critical value of the
pump power to ensure that the LVAD pump does not take over the pumping function
in the cardiovascular-pump system and share the ejected blood with left
ventricle to help the heart to recover. In addition, hemodynamic often involves
variability due to patients heterogeneity and the stochastic nature of
cardiovascular system. The variability poses significant challenges to
understand dynamic behaviors of the aortic valve and cardiac output. A
generalized polynomial chaos (gPC) expansion is used in this work to develop a
stochastic cardiovascular-pump model for efficient uncertainty propagation,
from which it is possible to rapidly calculate the variance in the aortic valve
opening duration and the cardiac output in the presence of variability. The
simulation results show that the gPC based cardiovascular-pump model is a
reliable platform that can provide useful information to understand the effect
of LVAD pump on the hemodynamic of the heart.
| [
{
"created": "Tue, 18 Dec 2018 03:49:28 GMT",
"version": "v1"
}
] | 2018-12-19 | [
[
"Son",
"Jeongeun",
""
],
[
"Du",
"Dongping",
""
],
[
"Du",
"Yuncheng",
""
]
] | The left ventricular assist device (LVAD) has been used for end-stage heart failure patients as a therapeutic option. The aortic valve plays a critical role in heart failure and its treatment with LVAD. The cardiovascular-LVAD model is often used to investigate the physiological demands required by patients and predict the hemodynamic of the native heart supported with a LVAD. As a bridge to recovery treatment, it is important to maintain appropriate and active dynamics of the aortic valve and the cardiac output of the native heart, which requires that the LVAD pump must be adjusted so that a proper balance between the blood contributed through the aortic valve and the pump is maintained. In this paper, our objective is to identify a critical value of the pump power to ensure that the LVAD pump does not take over the pumping function in the cardiovascular-pump system and share the ejected blood with left ventricle to help the heart to recover. In addition, hemodynamic often involves variability due to patients heterogeneity and the stochastic nature of cardiovascular system. The variability poses significant challenges to understand dynamic behaviors of the aortic valve and cardiac output. A generalized polynomial chaos (gPC) expansion is used in this work to develop a stochastic cardiovascular-pump model for efficient uncertainty propagation, from which it is possible to rapidly calculate the variance in the aortic valve opening duration and the cardiac output in the presence of variability. The simulation results show that the gPC based cardiovascular-pump model is a reliable platform that can provide useful information to understand the effect of LVAD pump on the hemodynamic of the heart. |
1810.01643 | Nicola Galvanetto | Nicola Galvanetto | Single-cell unroofing: probing topology and nanomechanics of native
membranes | 3 main figures, 5 supplementary figures | null | 10.1016/j.bbamem.2018.09.019 | null | q-bio.QM cond-mat.soft physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Cell membranes separate the cell interior from the external environment. They
are constituted by a variety of lipids; their composition determines the
dynamics of membrane proteins and affects the ability of the cells to adapt.
Even though the study of model membranes allows to understand the interactions
among lipids and the overall mechanics, little is known about these properties
in native membranes. To combine topology and nanomechanics analysis of native
membranes, I designed a method to investigate the plasma membranes isolated
from a variety of single cells. Five cell types were chosen and tested,
revealing 20\% variation in membrane thickness. I probed the resistance of the
isolated membranes to indent, finding their line tension and spreading
pressure. These results show that membranes isolated from neurons are stiffer
and less diffusive than brain cancer cell membranes. This method gives direct
quantitative insights on the mechanics of native cell membranes.
| [
{
"created": "Wed, 3 Oct 2018 09:12:10 GMT",
"version": "v1"
}
] | 2018-10-04 | [
[
"Galvanetto",
"Nicola",
""
]
] | Cell membranes separate the cell interior from the external environment. They are constituted by a variety of lipids; their composition determines the dynamics of membrane proteins and affects the ability of the cells to adapt. Even though the study of model membranes allows to understand the interactions among lipids and the overall mechanics, little is known about these properties in native membranes. To combine topology and nanomechanics analysis of native membranes, I designed a method to investigate the plasma membranes isolated from a variety of single cells. Five cell types were chosen and tested, revealing 20\% variation in membrane thickness. I probed the resistance of the isolated membranes to indent, finding their line tension and spreading pressure. These results show that membranes isolated from neurons are stiffer and less diffusive than brain cancer cell membranes. This method gives direct quantitative insights on the mechanics of native cell membranes. |
2109.11352 | Neil Kelleher PhD | Richard D. LeDuc, Eric W. Deutsch, Pierre-Alain Binz, Ryan T. Fellers,
Anthony J. Cesnik, Joshua A. Klein, Tim Van Den Bossche, Ralf Gabriels,
Arshika Yalavarthi, Yasset Perez-Riverol, Jeremy Carver, Wout Bittremieux,
Shin Kawano, Benjamin Pullman, Nuno Bandeira, Neil L. Kelleher, Paul M.
Thomas, Juan Antonio Vizca\'ino | Proteomics Standards Initiatives ProForma 2.0 Unifying the encoding of
Proteoforms and Peptidoforms | null | null | null | null | q-bio.BM | http://creativecommons.org/publicdomain/zero/1.0/ | There is the need to represent in a standard manner all the possible
variations of a protein or peptide primary sequence, including both artefactual
and post-translational modifications of peptides and proteins. With that
overall aim, here, the Human Proteome Organization (HUPO) Proteomics Standards
Initiative (PSI) has developed a notation, called ProForma 2.0, which is a
substantial extension of the original ProForma notation, developed by the
Consortium for Top-Down Proteomics (CTDP). ProForma 2.0 aims to unify the
representation of proteoforms and peptidoforms. Therefore, this notation
supports use cases needed for bottom-up and middle/topdown proteomics
approaches and allows the encoding of highly modified proteins and peptides
using a human and machine-readable string. ProForma 2.0 covers encoding protein
modification names and accessions, cross-linking reagents including disulfides,
glycans, modifications encoded using mass shifts and/or via chemical formulas,
labile and C or N-terminal modifications, ambiguity in the modification
position and representation of atomic isotopes, among other use cases.
Notational conventions are based on public controlled vocabularies and
ontologies. Detailed information about the notation and existing
implementations are available at http://www.psidev.info/proforma and at the
corresponding GitHub repository (https://github.com/HUPO-PSI/proforma).
| [
{
"created": "Thu, 23 Sep 2021 12:59:09 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Mar 2022 18:16:40 GMT",
"version": "v2"
}
] | 2022-03-23 | [
[
"LeDuc",
"Richard D.",
""
],
[
"Deutsch",
"Eric W.",
""
],
[
"Binz",
"Pierre-Alain",
""
],
[
"Fellers",
"Ryan T.",
""
],
[
"Cesnik",
"Anthony J.",
""
],
[
"Klein",
"Joshua A.",
""
],
[
"Bossche",
"Tim Van Den",... | There is the need to represent in a standard manner all the possible variations of a protein or peptide primary sequence, including both artefactual and post-translational modifications of peptides and proteins. With that overall aim, here, the Human Proteome Organization (HUPO) Proteomics Standards Initiative (PSI) has developed a notation, called ProForma 2.0, which is a substantial extension of the original ProForma notation, developed by the Consortium for Top-Down Proteomics (CTDP). ProForma 2.0 aims to unify the representation of proteoforms and peptidoforms. Therefore, this notation supports use cases needed for bottom-up and middle/topdown proteomics approaches and allows the encoding of highly modified proteins and peptides using a human and machine-readable string. ProForma 2.0 covers encoding protein modification names and accessions, cross-linking reagents including disulfides, glycans, modifications encoded using mass shifts and/or via chemical formulas, labile and C or N-terminal modifications, ambiguity in the modification position and representation of atomic isotopes, among other use cases. Notational conventions are based on public controlled vocabularies and ontologies. Detailed information about the notation and existing implementations are available at http://www.psidev.info/proforma and at the corresponding GitHub repository (https://github.com/HUPO-PSI/proforma). |
2107.11553 | Korabel | Nickolay Korabel, Daniel Han, Alessandro Taloni, Gianni Pagnini,
Sergei Fedotov, Viki Allan and Thomas A. Waigh | Local Analysis of Heterogeneous Intracellular Transport: Slow and Fast
moving Endosomes | 11 pages, 6 figures | Entropy 2021, 23, 958 | 10.3390/e23080958 | null | q-bio.SC cond-mat.stat-mech | http://creativecommons.org/licenses/by/4.0/ | Trajectories of endosomes inside living eukaryotic cells are highly
heterogeneous in space and time and diffuse anomalously due to a combination of
viscoelasticity, caging, aggregation and active transport. Some of the
trajectories display switching between persistent and anti-persistent motion
while others jiggle around in one position for the whole measurement time. By
splitting the ensemble of endosome trajectories into slow moving sub-diffusive
and fast moving super-diffusive endosomes, we analyzed them separately. The
mean squared displacements and velocity auto-correlation functions confirm the
effectiveness of the splitting methods. Applying the local analysis, we show
that both ensembles are characterized by a spectrum of local anomalous
exponents and local generalized diffusion coefficients. Slow and fast endsomes
have exponential distributions of local anomalous exponents and power law
distributions of generalized diffusion coefficients. This suggests that
heterogeneous fractional Brownian motion is an appropriate model for both fast
and slow moving endosomes. This article is part of a Special Issue entitled:
"Recent Advances In Single-Particle Tracking: Experiment and Analysis" edited
by Janusz Szwabi\'nski and Aleksander Weron.
| [
{
"created": "Sat, 24 Jul 2021 07:52:18 GMT",
"version": "v1"
}
] | 2021-07-28 | [
[
"Korabel",
"Nickolay",
""
],
[
"Han",
"Daniel",
""
],
[
"Taloni",
"Alessandro",
""
],
[
"Pagnini",
"Gianni",
""
],
[
"Fedotov",
"Sergei",
""
],
[
"Allan",
"Viki",
""
],
[
"Waigh",
"Thomas A.",
""
]
] | Trajectories of endosomes inside living eukaryotic cells are highly heterogeneous in space and time and diffuse anomalously due to a combination of viscoelasticity, caging, aggregation and active transport. Some of the trajectories display switching between persistent and anti-persistent motion while others jiggle around in one position for the whole measurement time. By splitting the ensemble of endosome trajectories into slow moving sub-diffusive and fast moving super-diffusive endosomes, we analyzed them separately. The mean squared displacements and velocity auto-correlation functions confirm the effectiveness of the splitting methods. Applying the local analysis, we show that both ensembles are characterized by a spectrum of local anomalous exponents and local generalized diffusion coefficients. Slow and fast endsomes have exponential distributions of local anomalous exponents and power law distributions of generalized diffusion coefficients. This suggests that heterogeneous fractional Brownian motion is an appropriate model for both fast and slow moving endosomes. This article is part of a Special Issue entitled: "Recent Advances In Single-Particle Tracking: Experiment and Analysis" edited by Janusz Szwabi\'nski and Aleksander Weron. |
2401.11289 | Nicolas Weidberg | Carlota Muniz, Christopher McQuaid, Nicolas Weidberg | Seasonality of primary productivity affects coastal species more than
its magnitude | null | Science of the Total Environment, 757:143740, 2021 | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | While the importance of extreme conditions is recognised, patterns in species
abundances are often interpreted through average environmental conditions
within their distributional range. For marine species with pelagic larvae,
temperature and phytoplankton concentration are key variables. Along the south
coast of South Africa, conspicuous spatial patterns in recruitment rates and
the abundances of different mussel species exist, with focal areas
characterized by large populations. We studied 15 years of sea surface
temperature (SST) and chlorophyll-a (chl-a) satellite data, using spectral
analyses to partition their temporal variability over ecologically relevant
time periods, including seasonal (101 to 365 days) and intra-seasonal cycles
(20 to 100 days). Adult cover and mussel recruitment were measured at 10 sites
along the south coast and regression models showed that about 70 percent of the
variability in recruitment and adult cover was explained by seasonal
variability in chl-a, while mean annual chl-a and SST only explained 30 percent
of the recruitment, with no significant effect for adult cover. SST and chl-a
at two upwelling centres showed less predictable seasonal cycles during the
second half of the study period with a significant cooling trend during austral
autumn, coinciding with one of the mussel reproductive peaks. This likely
reflects recent changes in the Agulhas Current, the world largest western
boundary current, which affects coastal ecosystems by driving upwelling.
| [
{
"created": "Sat, 20 Jan 2024 17:50:46 GMT",
"version": "v1"
}
] | 2024-01-23 | [
[
"Muniz",
"Carlota",
""
],
[
"McQuaid",
"Christopher",
""
],
[
"Weidberg",
"Nicolas",
""
]
] | While the importance of extreme conditions is recognised, patterns in species abundances are often interpreted through average environmental conditions within their distributional range. For marine species with pelagic larvae, temperature and phytoplankton concentration are key variables. Along the south coast of South Africa, conspicuous spatial patterns in recruitment rates and the abundances of different mussel species exist, with focal areas characterized by large populations. We studied 15 years of sea surface temperature (SST) and chlorophyll-a (chl-a) satellite data, using spectral analyses to partition their temporal variability over ecologically relevant time periods, including seasonal (101 to 365 days) and intra-seasonal cycles (20 to 100 days). Adult cover and mussel recruitment were measured at 10 sites along the south coast and regression models showed that about 70 percent of the variability in recruitment and adult cover was explained by seasonal variability in chl-a, while mean annual chl-a and SST only explained 30 percent of the recruitment, with no significant effect for adult cover. SST and chl-a at two upwelling centres showed less predictable seasonal cycles during the second half of the study period with a significant cooling trend during austral autumn, coinciding with one of the mussel reproductive peaks. This likely reflects recent changes in the Agulhas Current, the world largest western boundary current, which affects coastal ecosystems by driving upwelling. |
2201.02273 | Sarwan Ali | Sarwan Ali, Babatunde Bello, Prakash Chourasia, Ria Thazhe Punathil,
Yijing Zhou, Murray Patterson | PWM2Vec: An Efficient Embedding Approach for Viral Host Specification
from Coronavirus Spike Sequences | null | null | null | null | q-bio.GN cs.LG q-bio.QM | http://creativecommons.org/publicdomain/zero/1.0/ | COVID-19 pandemic, is still unknown and is an important open question. There
are speculations that bats are a possible origin. Likewise, there are many
closely related (corona-) viruses, such as SARS, which was found to be
transmitted through civets. The study of the different hosts which can be
potential carriers and transmitters of deadly viruses to humans is crucial to
understanding, mitigating and preventing current and future pandemics. In
coronaviruses, the surface (S) protein, or spike protein, is an important part
of determining host specificity since it is the point of contact between the
virus and the host cell membrane. In this paper, we classify the hosts of over
five thousand coronaviruses from their spike protein sequences, segregating
them into clusters of distinct hosts among avians, bats, camels, swines, humans
and weasels, to name a few. We propose a feature embedding based on the
well-known position-weight matrix (PWM), which we call PWM2Vec, and use to
generate feature vectors from the spike protein sequences of these
coronaviruses. While our embedding is inspired by the success of PWMs in
biological applications such as determining protein function, or identifying
transcription factor binding sites, we are the first (to the best of our
knowledge) to use PWMs in the context of host classification from viral
sequences to generate a fixed-length feature vector representation. The results
on the real world data show that in using PWM2Vec, we are able to perform
comparably well as compared to baseline models. We also measure the importance
of different amino acids using information gain to show the amino acids which
are important for predicting the host of a given coronavirus.
| [
{
"created": "Thu, 6 Jan 2022 23:25:54 GMT",
"version": "v1"
}
] | 2022-01-10 | [
[
"Ali",
"Sarwan",
""
],
[
"Bello",
"Babatunde",
""
],
[
"Chourasia",
"Prakash",
""
],
[
"Punathil",
"Ria Thazhe",
""
],
[
"Zhou",
"Yijing",
""
],
[
"Patterson",
"Murray",
""
]
] | COVID-19 pandemic, is still unknown and is an important open question. There are speculations that bats are a possible origin. Likewise, there are many closely related (corona-) viruses, such as SARS, which was found to be transmitted through civets. The study of the different hosts which can be potential carriers and transmitters of deadly viruses to humans is crucial to understanding, mitigating and preventing current and future pandemics. In coronaviruses, the surface (S) protein, or spike protein, is an important part of determining host specificity since it is the point of contact between the virus and the host cell membrane. In this paper, we classify the hosts of over five thousand coronaviruses from their spike protein sequences, segregating them into clusters of distinct hosts among avians, bats, camels, swines, humans and weasels, to name a few. We propose a feature embedding based on the well-known position-weight matrix (PWM), which we call PWM2Vec, and use to generate feature vectors from the spike protein sequences of these coronaviruses. While our embedding is inspired by the success of PWMs in biological applications such as determining protein function, or identifying transcription factor binding sites, we are the first (to the best of our knowledge) to use PWMs in the context of host classification from viral sequences to generate a fixed-length feature vector representation. The results on the real world data show that in using PWM2Vec, we are able to perform comparably well as compared to baseline models. We also measure the importance of different amino acids using information gain to show the amino acids which are important for predicting the host of a given coronavirus. |
1610.09815 | Alexander K. Guts | Alexander K. Guts and Ludmila A. Volodchenkova | The Nash equilibrium of forest ecosystems | 4 pages | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To find the possible equilibrium states of forest ecosystems one are
suggested to use the theory of differential games. At within the 4-tier model
of mosaic forest communities it established the existence of the Nash
equilibrium states in such ecosystems
| [
{
"created": "Mon, 31 Oct 2016 08:05:31 GMT",
"version": "v1"
}
] | 2016-11-01 | [
[
"Guts",
"Alexander K.",
""
],
[
"Volodchenkova",
"Ludmila A.",
""
]
] | To find the possible equilibrium states of forest ecosystems one are suggested to use the theory of differential games. At within the 4-tier model of mosaic forest communities it established the existence of the Nash equilibrium states in such ecosystems |
2312.13414 | Maria Dzul | Maria Dzul, Charles B. Yackulic, William L. Kendall | The importance of sampling design for unbiased estimation of survival
using joint live-recapture and live resight models | 30 pages (w/o Appendix A), 8 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Survival is a key life history parameter that can inform management decisions
and life history research. Because true survival is often confounded with
permanent and temporary emigration from the study area, many studies must
estimate apparent survival (i.e., probability of surviving and remaining inside
the study area), which can be much lower than true survival for highly mobile
species. One method for estimating true survival is the Barker joint
live-recapture/live-resight (JLRLR) model, which combines capture data from a
study area (hereafter the capture site) with resighting data from a broader
geographic area. This model assumes that live resights occur throughout the
entire area where animals can disperse to and this assumption is often not met
in practice. Here we use simulation to evaluate survival bias from a JLRLR
model under study design scenarios that differ in the site selection for
resights: global, random, fixed including the capture site, and fixed excluding
the capture site. Simulation results indicate that fixed designs that included
the capture site showed negative survival bias, whereas fixed designs that
excluded the capture site exhibited positive survival bias. The magnitude of
the bias was dependent on movement and survival, where scenarios with high
survival and frequent movement had minimal bias. In effort to help minimize
bias, we developed a multistate version of the JLRLR and demonstrated
reductions in survival bias compared to the single-state version for most
designs. Our results suggest minimizing bias can be accomplished by: 1) using a
random resight design when feasible and global sampling is not possible, 2)
using the multistate JLRLR model when appropriate, 3) including the capture
site in the resight sampling frame when possible, and 4) reporting survival as
apparent survival if fixed sites are used for resight with the single state
JLRLR model.
| [
{
"created": "Wed, 20 Dec 2023 20:34:05 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jul 2024 21:04:58 GMT",
"version": "v2"
}
] | 2024-08-01 | [
[
"Dzul",
"Maria",
""
],
[
"Yackulic",
"Charles B.",
""
],
[
"Kendall",
"William L.",
""
]
] | Survival is a key life history parameter that can inform management decisions and life history research. Because true survival is often confounded with permanent and temporary emigration from the study area, many studies must estimate apparent survival (i.e., probability of surviving and remaining inside the study area), which can be much lower than true survival for highly mobile species. One method for estimating true survival is the Barker joint live-recapture/live-resight (JLRLR) model, which combines capture data from a study area (hereafter the capture site) with resighting data from a broader geographic area. This model assumes that live resights occur throughout the entire area where animals can disperse to and this assumption is often not met in practice. Here we use simulation to evaluate survival bias from a JLRLR model under study design scenarios that differ in the site selection for resights: global, random, fixed including the capture site, and fixed excluding the capture site. Simulation results indicate that fixed designs that included the capture site showed negative survival bias, whereas fixed designs that excluded the capture site exhibited positive survival bias. The magnitude of the bias was dependent on movement and survival, where scenarios with high survival and frequent movement had minimal bias. In effort to help minimize bias, we developed a multistate version of the JLRLR and demonstrated reductions in survival bias compared to the single-state version for most designs. Our results suggest minimizing bias can be accomplished by: 1) using a random resight design when feasible and global sampling is not possible, 2) using the multistate JLRLR model when appropriate, 3) including the capture site in the resight sampling frame when possible, and 4) reporting survival as apparent survival if fixed sites are used for resight with the single state JLRLR model. |
q-bio/0411038 | Efstratios Manousakis | Efstratios Manousakis | Collective charge excitations along cell membranes | 4 two-column pages, 3 figures | Phys. Lett. A 342, 443 (2005) | 10.1016/j.physleta.2005.05.087 | null | q-bio.SC | null | A significant part of the thin layers of counter-ions adjacent to the
exterior and interior surfaces of a cell membrane form quasi-two-dimensional
(2D) layers of mobile charge. Collective charge density oscillations, known as
plasmon modes, in these 2D charged systems of counter-ions are predicted in the
present paper. This is based on a calculation of the self-consistent response
of this system to a fast electric field fluctuation. The possibility that the
membrane channels might be using these excitations to carry out fast
communication is suggested and experiments are proposed to reveal the existence
of such excitations.
| [
{
"created": "Thu, 18 Nov 2004 21:29:30 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Manousakis",
"Efstratios",
""
]
] | A significant part of the thin layers of counter-ions adjacent to the exterior and interior surfaces of a cell membrane form quasi-two-dimensional (2D) layers of mobile charge. Collective charge density oscillations, known as plasmon modes, in these 2D charged systems of counter-ions are predicted in the present paper. This is based on a calculation of the self-consistent response of this system to a fast electric field fluctuation. The possibility that the membrane channels might be using these excitations to carry out fast communication is suggested and experiments are proposed to reveal the existence of such excitations. |
2407.06211 | Lisa Crossman | Styliani-Christina Fragkouli, Dhwani Solanki, Leyla J Castro, Fotis E
Psomopoulos, N\'uria Queralt-Rosinach, Davide Cirillo, Lisa C Crossman | Synthetic data: How could it be used for infectious disease research? | null | null | null | null | q-bio.OT cs.CY cs.LG | http://creativecommons.org/licenses/by/4.0/ | Over the last three to five years, it has become possible to generate machine
learning synthetic data for healthcare-related uses. However, concerns have
been raised about potential negative factors associated with the possibilities
of artificial dataset generation. These include the potential misuse of
generative artificial intelligence (AI) in fields such as cybercrime, the use
of deepfakes and fake news to deceive or manipulate, and displacement of human
jobs across various market sectors.
Here, we consider both current and future positive advances and possibilities
with synthetic datasets. Synthetic data offers significant benefits,
particularly in data privacy, research, in balancing datasets and reducing bias
in machine learning models. Generative AI is an artificial intelligence genre
capable of creating text, images, video or other data using generative models.
The recent explosion of interest in GenAI was heralded by the invention and
speedy move to use of large language models (LLM). These computational models
are able to achieve general-purpose language generation and other natural
language processing tasks and are based on transformer architectures, which
made an evolutionary leap from previous neural network architectures.
Fuelled by the advent of improved GenAI techniques and wide scale usage, this
is surely the time to consider how synthetic data can be used to advance
infectious disease research. In this commentary we aim to create an overview of
the current and future position of synthetic data in infectious disease
research.
| [
{
"created": "Wed, 3 Jul 2024 17:13:04 GMT",
"version": "v1"
}
] | 2024-07-10 | [
[
"Fragkouli",
"Styliani-Christina",
""
],
[
"Solanki",
"Dhwani",
""
],
[
"Castro",
"Leyla J",
""
],
[
"Psomopoulos",
"Fotis E",
""
],
[
"Queralt-Rosinach",
"Núria",
""
],
[
"Cirillo",
"Davide",
""
],
[
"Crossman",
... | Over the last three to five years, it has become possible to generate machine learning synthetic data for healthcare-related uses. However, concerns have been raised about potential negative factors associated with the possibilities of artificial dataset generation. These include the potential misuse of generative artificial intelligence (AI) in fields such as cybercrime, the use of deepfakes and fake news to deceive or manipulate, and displacement of human jobs across various market sectors. Here, we consider both current and future positive advances and possibilities with synthetic datasets. Synthetic data offers significant benefits, particularly in data privacy, research, in balancing datasets and reducing bias in machine learning models. Generative AI is an artificial intelligence genre capable of creating text, images, video or other data using generative models. The recent explosion of interest in GenAI was heralded by the invention and speedy move to use of large language models (LLM). These computational models are able to achieve general-purpose language generation and other natural language processing tasks and are based on transformer architectures, which made an evolutionary leap from previous neural network architectures. Fuelled by the advent of improved GenAI techniques and wide scale usage, this is surely the time to consider how synthetic data can be used to advance infectious disease research. In this commentary we aim to create an overview of the current and future position of synthetic data in infectious disease research. |
1812.04678 | Nicholas Noll | Nicholas Noll, Sebastian J. Streichan, Boris I. Shraiman | Geometry of epithelial cells provides a robust method for image based
inference of stress within tissues | 12 pages, 6 figures | Phys. Rev. X 10, 011072 (2020) | 10.1103/PhysRevX.10.011072 | null | q-bio.CB q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cellular mechanics plays an important role in epithelial morphogenesis, a
process wherein cells reshape and rearrange to produce tissue-scale
deformations. However, the study of tissue-scale mechanics is impaired by the
difficulty of direct measurement of stress in-vivo. Alternative, image-based
inference schemes aim to estimate stress from snapshots of cellular geometry
but are challenged by sensitivity to fluctuations and measurement noise as well
as the dependence on boundary conditions. Here we overcome these difficulties
by introducing a new variational approach - the Geometrical Variation Method
(GVM) - which exploits the fundamental duality between stress and cellular
geometry that exists in the state of mechanical equilibrium of discrete
mechanical networks that approximate cellular tissues. In the Geometrical
Variation Method, the two dimensional apical geometry of an epithelial tissue
is approximated by a 2D tiling with Circular Arc Polygons (CAP) in which the
arcs represent intercellular interfaces defined by the balance of local line
tension and pressure differentials between adjacent cells. We take advantage of
local constraints that mechanical equilibrium imposes on CAP geometry to define
a variational procedure that extracts the best fitting equilibrium
configuration from images of epithelial monolayers. The GVM-based stress
inference algorithm has been validated by the comparison of the predicted
cellular and mesoscopic scale stress and measured myosin II patterns in the
epithelial tissue during Drosophila embryogenesis. GVM prediction of mesoscopic
stress tensor correlates at the 80% level with the measured myosin distribution
and reveals that most of the myosin II activity is involved in a static
internal force balance within the epithelial layer. Lastly, this study provides
a practical method for non-destructive estimation of stress in live epithelial
tissues.
| [
{
"created": "Tue, 11 Dec 2018 20:29:58 GMT",
"version": "v1"
}
] | 2020-04-01 | [
[
"Noll",
"Nicholas",
""
],
[
"Streichan",
"Sebastian J.",
""
],
[
"Shraiman",
"Boris I.",
""
]
] | Cellular mechanics plays an important role in epithelial morphogenesis, a process wherein cells reshape and rearrange to produce tissue-scale deformations. However, the study of tissue-scale mechanics is impaired by the difficulty of direct measurement of stress in-vivo. Alternative, image-based inference schemes aim to estimate stress from snapshots of cellular geometry but are challenged by sensitivity to fluctuations and measurement noise as well as the dependence on boundary conditions. Here we overcome these difficulties by introducing a new variational approach - the Geometrical Variation Method (GVM) - which exploits the fundamental duality between stress and cellular geometry that exists in the state of mechanical equilibrium of discrete mechanical networks that approximate cellular tissues. In the Geometrical Variation Method, the two dimensional apical geometry of an epithelial tissue is approximated by a 2D tiling with Circular Arc Polygons (CAP) in which the arcs represent intercellular interfaces defined by the balance of local line tension and pressure differentials between adjacent cells. We take advantage of local constraints that mechanical equilibrium imposes on CAP geometry to define a variational procedure that extracts the best fitting equilibrium configuration from images of epithelial monolayers. The GVM-based stress inference algorithm has been validated by the comparison of the predicted cellular and mesoscopic scale stress and measured myosin II patterns in the epithelial tissue during Drosophila embryogenesis. GVM prediction of mesoscopic stress tensor correlates at the 80% level with the measured myosin distribution and reveals that most of the myosin II activity is involved in a static internal force balance within the epithelial layer. Lastly, this study provides a practical method for non-destructive estimation of stress in live epithelial tissues. |
1807.09194 | Bernat Corominas-Murtra BCM | Bernat Corominas-Murtra, Mart\'i S\`anchez Fibla, Sergi Valverde and
Ricard Sol\'e | Chromatic transitions in the emergence of syntax networks | 8 pages, 4 figures, 1 table | null | null | null | q-bio.NC cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The emergence of syntax during childhood is a remarkable example of how
complex correlations unfold in nonlinear ways through development. In
particular, rapid transitions seem to occur as children reach the age of two,
which seems to separate a two-word, tree-like network of syntactic relations
among words from a scale-free graphs associated to the adult, complex grammar.
Here we explore the evolution of syntax networks through language acquisition
using the {\em chromatic number}, which captures the transition and provides a
natural link to standard theories on syntactic structures. The data analysis is
compared to a null model of network growth dynamics which is shown to display
nontrivial and sensible differences. In a more general level, we observe that
the chromatic classes define independent regions of the graph, and thus, can be
interpreted as the footprints of incompatibility relations, somewhat as opposed
to modularity considerations.
| [
{
"created": "Tue, 24 Jul 2018 15:47:11 GMT",
"version": "v1"
}
] | 2018-07-25 | [
[
"Corominas-Murtra",
"Bernat",
""
],
[
"Fibla",
"Martí Sànchez",
""
],
[
"Valverde",
"Sergi",
""
],
[
"Solé",
"Ricard",
""
]
] | The emergence of syntax during childhood is a remarkable example of how complex correlations unfold in nonlinear ways through development. In particular, rapid transitions seem to occur as children reach the age of two, which seems to separate a two-word, tree-like network of syntactic relations among words from a scale-free graphs associated to the adult, complex grammar. Here we explore the evolution of syntax networks through language acquisition using the {\em chromatic number}, which captures the transition and provides a natural link to standard theories on syntactic structures. The data analysis is compared to a null model of network growth dynamics which is shown to display nontrivial and sensible differences. In a more general level, we observe that the chromatic classes define independent regions of the graph, and thus, can be interpreted as the footprints of incompatibility relations, somewhat as opposed to modularity considerations. |
1806.01778 | Gianluca Calcagni | Gianluca Calcagni, Ernesto Caballero-Garrido, Ricardo Pell\'on | Behavior stability and individual differences in Pavlovian extended
conditioning | 29 pages, 8 figures, 7 tables; v2-v3: theoretical motivation
clarified, data of Harris et al. (2015) included in improved analysis,
conclusions strengthened, typos corrected, references added, technicalities
and data analysis moved into Supplementary Material (46 pages, 22 figures, 7
tables; available at
https://www.frontiersin.org/articles/10.3389/fpsyg.2020.00612/full#supplementary-material) | Frontiers in Psychology 11 (2020) 612 | 10.3389/fpsyg.2020.00612 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How stable and general is behavior once maximum learning is reached? To
answer this question and understand post-acquisition behavior and its related
individual differences, we propose a psychological principle that naturally
extends associative models of Pavlovian conditioning to a dynamical oscillatory
model where subjects have a greater memory capacity than usually postulated,
but with greater forecast uncertainty. This results in a greater resistance to
learning in the first few sessions followed by an over-optimal response peak
and a sequence of progressively damped response oscillations. We detected the
first peak and trough of the new learning curve in our data, but their
dispersion was too large to also check the presence of oscillations with
smaller amplitude. We ran an unusually long experiment with 32 rats over 3960
trials, where we excluded habituation and other well-known phenomena as sources
of variability in the subjects' performance. Using the data of this and another
Pavlovian experiment by Harris et al. (2015), as an illustration of the
principle we tested the theory against the basic associative single-cue
Rescorla-Wagner (RW) model. We found evidence that the RW model is the best
nonlinear regression to data only for a minority of the subjects, while its
dynamical extension can explain the almost totality of data with strong to very
strong evidence. Finally, an analysis of short-scale fluctuations of individual
responses showed that they are described by random white noise, in contrast
with the colored-noise findings in human performance.
| [
{
"created": "Mon, 28 May 2018 07:22:46 GMT",
"version": "v1"
},
{
"created": "Sun, 26 Aug 2018 12:42:33 GMT",
"version": "v2"
},
{
"created": "Wed, 22 Apr 2020 08:49:51 GMT",
"version": "v3"
}
] | 2020-04-23 | [
[
"Calcagni",
"Gianluca",
""
],
[
"Caballero-Garrido",
"Ernesto",
""
],
[
"Pellón",
"Ricardo",
""
]
] | How stable and general is behavior once maximum learning is reached? To answer this question and understand post-acquisition behavior and its related individual differences, we propose a psychological principle that naturally extends associative models of Pavlovian conditioning to a dynamical oscillatory model where subjects have a greater memory capacity than usually postulated, but with greater forecast uncertainty. This results in a greater resistance to learning in the first few sessions followed by an over-optimal response peak and a sequence of progressively damped response oscillations. We detected the first peak and trough of the new learning curve in our data, but their dispersion was too large to also check the presence of oscillations with smaller amplitude. We ran an unusually long experiment with 32 rats over 3960 trials, where we excluded habituation and other well-known phenomena as sources of variability in the subjects' performance. Using the data of this and another Pavlovian experiment by Harris et al. (2015), as an illustration of the principle we tested the theory against the basic associative single-cue Rescorla-Wagner (RW) model. We found evidence that the RW model is the best nonlinear regression to data only for a minority of the subjects, while its dynamical extension can explain the almost totality of data with strong to very strong evidence. Finally, an analysis of short-scale fluctuations of individual responses showed that they are described by random white noise, in contrast with the colored-noise findings in human performance. |
1705.09863 | Andrei Khrennikov Yu | Alexey V. Melkikh, Alexey V. Melkikh and Andrei Khrennikov | Molecular recognition of the environment and mechanisms of the origin of
species in quantum-like modeling of evolution | Progress in Biophysics and Molecular Biology, 2017 | Progress in Biophysics and Molecular Biology 130, Part A, 61-79
(2017) | null | null | q-bio.PE quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A review of the mechanisms of speciation is performed. The mechanisms of the
evolution of species, taking into account the feedback of the state of the
environment and mechanisms of the emergence of complexity, are considered. It
is shown that these mechanisms, at the molecular level, cannot work steadily in
terms of classical mechanics. Quantum mechanisms of changes in the genome,
based on the long-range interaction potential between biologically important
molecules, are proposed as one of possible explanation. Different variants of
interactions of the organism and environment based on molecular recognition and
leading to new species origins are considered. Experiments to verify the model
are proposed. This bio-physical study is completed by the general operational
model of based on quantum information theory. The latter is applied to model
epigenetic evolution.
| [
{
"created": "Sat, 27 May 2017 20:42:00 GMT",
"version": "v1"
}
] | 2018-07-18 | [
[
"Melkikh",
"Alexey V.",
""
],
[
"Melkikh",
"Alexey V.",
""
],
[
"Khrennikov",
"Andrei",
""
]
] | A review of the mechanisms of speciation is performed. The mechanisms of the evolution of species, taking into account the feedback of the state of the environment and mechanisms of the emergence of complexity, are considered. It is shown that these mechanisms, at the molecular level, cannot work steadily in terms of classical mechanics. Quantum mechanisms of changes in the genome, based on the long-range interaction potential between biologically important molecules, are proposed as one of possible explanation. Different variants of interactions of the organism and environment based on molecular recognition and leading to new species origins are considered. Experiments to verify the model are proposed. This bio-physical study is completed by the general operational model of based on quantum information theory. The latter is applied to model epigenetic evolution. |
2402.14887 | Tuobang Li | Tuobang Li | Infer metabolic directions and magnitudes from moment differences of
mass-weighted intensity distributions | null | null | null | null | q-bio.QM q-bio.BM q-bio.CB q-bio.MN q-bio.SC | http://creativecommons.org/licenses/by/4.0/ | Metabolic pathways are fundamental maps in biochemistry that detail how
molecules are transformed through various reactions. Metabolomics refers to the
large-scale study of small molecules. High-throughput, untargeted, mass
spectrometry-based metabolomics experiments typically depend on libraries for
structural annotation, which is necessary for pathway analysis. However, only a
small fraction of spectra can be matched to known structures in these libraries
and only a portion of annotated metabolites can be associated with specific
pathways, considering that numerous pathways are yet to be discovered. The
complexity of metabolic pathways, where a single compound can play a part in
multiple pathways, poses an additional challenge. This study introduces a
different concept: mass-weighted intensity distribution, which is the empirical
distribution of the intensities times their associated m/z values. Analysis of
COVID-19 and mouse brain datasets shows that by estimating the differences of
the point estimations of these distributions, it becomes possible to infer the
metabolic directions and magnitudes without requiring knowledge of the exact
chemical structures of these compounds and their related pathways. The overall
metabolic momentum map, named as momentome, has the potential to bypass the
current bottleneck and provide fresh insights into metabolomics studies. This
brief report thus provides a mathematical framing for a classic biological
concept.
| [
{
"created": "Thu, 22 Feb 2024 08:32:31 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Feb 2024 15:18:03 GMT",
"version": "v2"
}
] | 2024-02-29 | [
[
"Li",
"Tuobang",
""
]
] | Metabolic pathways are fundamental maps in biochemistry that detail how molecules are transformed through various reactions. Metabolomics refers to the large-scale study of small molecules. High-throughput, untargeted, mass spectrometry-based metabolomics experiments typically depend on libraries for structural annotation, which is necessary for pathway analysis. However, only a small fraction of spectra can be matched to known structures in these libraries and only a portion of annotated metabolites can be associated with specific pathways, considering that numerous pathways are yet to be discovered. The complexity of metabolic pathways, where a single compound can play a part in multiple pathways, poses an additional challenge. This study introduces a different concept: mass-weighted intensity distribution, which is the empirical distribution of the intensities times their associated m/z values. Analysis of COVID-19 and mouse brain datasets shows that by estimating the differences of the point estimations of these distributions, it becomes possible to infer the metabolic directions and magnitudes without requiring knowledge of the exact chemical structures of these compounds and their related pathways. The overall metabolic momentum map, named as momentome, has the potential to bypass the current bottleneck and provide fresh insights into metabolomics studies. This brief report thus provides a mathematical framing for a classic biological concept. |
1001.5309 | Wojciech Waga | Dorota Mackiewicz, Marta Zawierta, Wojciech Waga, Stanislaw Cebrat | Genome analyses and modelling the relationship between coding density,
recombination rate and chromosome length | 26 pages, 7 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the human genomes, recombination frequency between homologous chromosomes
during meiosis is highly correlated with their physical length while it differs
significantly when their coding density is considered. Furthermore, it has been
observed that the recombination events are distributed unevenly along the
chromosomes. We have found that many of such recombination properties can be
predicted by computer simulations of population evolution based on the Monte
Carlo methods. For example, these simulations have shown that the probability
of acceptance of the recombination events by selection is higher at the ends of
chromosomes and lower in their middle parts. The regions of high coding density
are more prone to enter the strategy of haplotype complementation and to form
clusters of genes which are "recombination deserts". The phenomenon of
switching in-between the purifying selection and haplotype complementation has
a phase transition character, and many relations between the effective
population size, coding density, chromosome size and recombination frequency
are those of the power law type.
| [
{
"created": "Fri, 29 Jan 2010 15:05:47 GMT",
"version": "v1"
}
] | 2010-02-01 | [
[
"Mackiewicz",
"Dorota",
""
],
[
"Zawierta",
"Marta",
""
],
[
"Waga",
"Wojciech",
""
],
[
"Cebrat",
"Stanislaw",
""
]
] | In the human genomes, recombination frequency between homologous chromosomes during meiosis is highly correlated with their physical length while it differs significantly when their coding density is considered. Furthermore, it has been observed that the recombination events are distributed unevenly along the chromosomes. We have found that many of such recombination properties can be predicted by computer simulations of population evolution based on the Monte Carlo methods. For example, these simulations have shown that the probability of acceptance of the recombination events by selection is higher at the ends of chromosomes and lower in their middle parts. The regions of high coding density are more prone to enter the strategy of haplotype complementation and to form clusters of genes which are "recombination deserts". The phenomenon of switching in-between the purifying selection and haplotype complementation has a phase transition character, and many relations between the effective population size, coding density, chromosome size and recombination frequency are those of the power law type. |
1603.07759 | Eugen Tarnow | Eugen Tarnow | Preliminary Evidence -- Diagnosed Alzheimer's Disease But Not MCI
Affects Working Memory Capacity - 0.7 of 2.7 Memory Slots is Lost | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently it was shown explicitly that free recall consists of two stages: the
first few recalls empty working memory (narrowly defined) and a second stage, a
reactivation stage, concludes the recall (Tarnow, 2015). It was also shown that
the serial position curve changes in mild Alzheimer's disease, lowered total
recall and lessened primacy, are similar to second stage recall and different
from recall from working memory. The Tarnow Unchunkable Test (TUT, Tarnow,
2013) uses double integer items to separate out only the first stage, the
emptying of working memory, by making it difficult to reactivate items due to
the lack of intra-item relationships. Here it is shown that subject TUT selects
out diagnosed Alzheimer's Disease but not MCI. On average, diagnosed
Alzheimer's Disease is correlated with a loss of 0.7 memory slots (out of an
average of 2.7 slots). The identification of a lost memory slot may have
implications for improved stage definitions of Alzheimer's disease and for
remediation therapy via working memory capacity management. In conjunction with
the Alzheimer's disease process map, it may also be useful to identify the
exact location of working memory.
| [
{
"created": "Thu, 24 Mar 2016 21:15:03 GMT",
"version": "v1"
}
] | 2016-03-28 | [
[
"Tarnow",
"Eugen",
""
]
] | Recently it was shown explicitly that free recall consists of two stages: the first few recalls empty working memory (narrowly defined) and a second stage, a reactivation stage, concludes the recall (Tarnow, 2015). It was also shown that the serial position curve changes in mild Alzheimer's disease, lowered total recall and lessened primacy, are similar to second stage recall and different from recall from working memory. The Tarnow Unchunkable Test (TUT, Tarnow, 2013) uses double integer items to separate out only the first stage, the emptying of working memory, by making it difficult to reactivate items due to the lack of intra-item relationships. Here it is shown that subject TUT selects out diagnosed Alzheimer's Disease but not MCI. On average, diagnosed Alzheimer's Disease is correlated with a loss of 0.7 memory slots (out of an average of 2.7 slots). The identification of a lost memory slot may have implications for improved stage definitions of Alzheimer's disease and for remediation therapy via working memory capacity management. In conjunction with the Alzheimer's disease process map, it may also be useful to identify the exact location of working memory. |
1204.1094 | Donald Cooper Ph.D. | Juan A. Varela, Jungang Wang, Andrew L. Varnell and Donald C. Cooper | Control over stress induces plasticity of individual prefrontal cortical
neurons: A conductance-based neural simulation | 2 pages 2 figures Nature Precedings
<http://dx.doi.org/10.1038/npre.2011.6267.1> (2011) | null | 10.1038/npre.2011.6267.1 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Behavioral control over stressful stimuli induces resilience to future
conditions when control is lacking. The medial prefrontal cortex(mPFC) is a
critically important brain region required for plasticity of stress resilience.
We found that control over stress induces plasticity of the intrinsic
voltage-gated conductances of pyramidal neurons in the PFC. To gain insight
into the underlying biophysical mechanisms of this plasticity we used the
conductance- based neural simulation software tool, NEURON, to model the
increase in membrane excitability associated with resilience to stress. A ball
and stick multicompartment conductance-based model was used to realistically
fit passive and active data traces from prototypical pyramidal neurons in
neurons in rats with control over tail shock stress and those lacking control.
The results indicate that the plasticity of membrane excitability associated
with control over stress can be attributed to an increase in Na+ and Ca2+
T-type conductances and an increase in the leak conductance. Using simulated
dendritic synaptic inputs we observed an increase in excitatory postsynaptic
summation and amplification resulting in elevated action potential output. This
realistic simulation suggests that control over stress enhances the output of
the PFC and offers specific testable hypotheses to guide future
electrophysiological mechanistic studies in animal models of resilience and
vulnerability to stress.
| [
{
"created": "Wed, 4 Apr 2012 23:38:57 GMT",
"version": "v1"
}
] | 2012-04-06 | [
[
"Varela",
"Juan A.",
""
],
[
"Wang",
"Jungang",
""
],
[
"Varnell",
"Andrew L.",
""
],
[
"Cooper",
"Donald C.",
""
]
] | Behavioral control over stressful stimuli induces resilience to future conditions when control is lacking. The medial prefrontal cortex(mPFC) is a critically important brain region required for plasticity of stress resilience. We found that control over stress induces plasticity of the intrinsic voltage-gated conductances of pyramidal neurons in the PFC. To gain insight into the underlying biophysical mechanisms of this plasticity we used the conductance- based neural simulation software tool, NEURON, to model the increase in membrane excitability associated with resilience to stress. A ball and stick multicompartment conductance-based model was used to realistically fit passive and active data traces from prototypical pyramidal neurons in neurons in rats with control over tail shock stress and those lacking control. The results indicate that the plasticity of membrane excitability associated with control over stress can be attributed to an increase in Na+ and Ca2+ T-type conductances and an increase in the leak conductance. Using simulated dendritic synaptic inputs we observed an increase in excitatory postsynaptic summation and amplification resulting in elevated action potential output. This realistic simulation suggests that control over stress enhances the output of the PFC and offers specific testable hypotheses to guide future electrophysiological mechanistic studies in animal models of resilience and vulnerability to stress. |
2108.10231 | Peer Herholz | Peer Herholz, Eddy Fortier, Mariya Toneva, Nicolas Farrugia, Leila
Wehbe, Valentina Borghesani | A roadmap to reverse engineering real-world generalization by combining
naturalistic paradigms, deep sampling, and predictive computational models | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Real-world generalization, e.g., deciding to approach a never-seen-before
animal, relies on contextual information as well as previous experiences. Such
a seemingly easy behavioral choice requires the interplay of multiple neural
mechanisms, from integrative encoding to category-based inference, weighted
differently according to the circumstances. Here, we argue that a comprehensive
theory of the neuro-cognitive substrates of real-world generalization will
greatly benefit from empirical research with three key elements. First, the
ecological validity provided by multimodal, naturalistic paradigms. Second, the
model stability afforded by deep sampling. Finally, the statistical rigor
granted by predictive modeling and computational controls.
| [
{
"created": "Mon, 23 Aug 2021 15:08:52 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Jan 2022 08:45:56 GMT",
"version": "v2"
}
] | 2022-01-17 | [
[
"Herholz",
"Peer",
""
],
[
"Fortier",
"Eddy",
""
],
[
"Toneva",
"Mariya",
""
],
[
"Farrugia",
"Nicolas",
""
],
[
"Wehbe",
"Leila",
""
],
[
"Borghesani",
"Valentina",
""
]
] | Real-world generalization, e.g., deciding to approach a never-seen-before animal, relies on contextual information as well as previous experiences. Such a seemingly easy behavioral choice requires the interplay of multiple neural mechanisms, from integrative encoding to category-based inference, weighted differently according to the circumstances. Here, we argue that a comprehensive theory of the neuro-cognitive substrates of real-world generalization will greatly benefit from empirical research with three key elements. First, the ecological validity provided by multimodal, naturalistic paradigms. Second, the model stability afforded by deep sampling. Finally, the statistical rigor granted by predictive modeling and computational controls. |
2007.06064 | Alexander B. Kukushkin | A.B. Kukushkin, A.A. Kulichenko, A.V. Sokolov | Optimization identification of superdiffusion processes in biology: an
algorithm for processing observational data and a self-similar solution of
the kinetic equation | 20 pages, 13 figures | null | null | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work is an attempt to transfer to biology the methods developed in
physics for formulating and solving the kinetic equations in which the kernel
of the integral operator in spatial coordinates is slowly decreasing with
increasing distance and belongs to the class of Levy distributions. An
algorithm is proposed for the reconstruction of the step-length probability
density function (PDF) on a moderate number of trajectories of biological
objects (migrants) and for the derivation of the Green's function of the
corresponding integro-differential kinetic equation for the density of migrants
in the entire space-time range, including the construction of an approximate
self-similar solution. A wide class of time-dependent superdiffusion processes
with a model power-law step-length PDF is considered, which corresponds to
"Levy walks with rests" for given values of the migrant's constant velocity and
the average time T of the migrant's stay between runs. The algorithm is tested
within the framework of a synthetic diagnostics, consisting in the generation
of artificial experimental data for trajectories of migrants and the subsequent
reconstruction of the parameters of the step-length PDF and T. For different
volumes of synthetic data, to obtain a general idea of the distributions under
study (non-parametric case) and to evaluate the accuracy of recovering the
parameters of the PDF (in the case of a parametric representation), the method
of balanced identification is used. The approximate self-similar solution for
the parameters of step-length PDF and T is shown to provide reasonable accuracy
of the space-time evolution of migrant's density.
| [
{
"created": "Sun, 12 Jul 2020 18:43:00 GMT",
"version": "v1"
}
] | 2020-07-14 | [
[
"Kukushkin",
"A. B.",
""
],
[
"Kulichenko",
"A. A.",
""
],
[
"Sokolov",
"A. V.",
""
]
] | This work is an attempt to transfer to biology the methods developed in physics for formulating and solving the kinetic equations in which the kernel of the integral operator in spatial coordinates is slowly decreasing with increasing distance and belongs to the class of Levy distributions. An algorithm is proposed for the reconstruction of the step-length probability density function (PDF) on a moderate number of trajectories of biological objects (migrants) and for the derivation of the Green's function of the corresponding integro-differential kinetic equation for the density of migrants in the entire space-time range, including the construction of an approximate self-similar solution. A wide class of time-dependent superdiffusion processes with a model power-law step-length PDF is considered, which corresponds to "Levy walks with rests" for given values of the migrant's constant velocity and the average time T of the migrant's stay between runs. The algorithm is tested within the framework of a synthetic diagnostics, consisting in the generation of artificial experimental data for trajectories of migrants and the subsequent reconstruction of the parameters of the step-length PDF and T. For different volumes of synthetic data, to obtain a general idea of the distributions under study (non-parametric case) and to evaluate the accuracy of recovering the parameters of the PDF (in the case of a parametric representation), the method of balanced identification is used. The approximate self-similar solution for the parameters of step-length PDF and T is shown to provide reasonable accuracy of the space-time evolution of migrant's density. |
1312.5565 | Eric Werner | Eric Werner | What Transcription Factors Can't Do: On the Combinatorial Limits of Gene
Regulatory Networks | 12 pages, A modified version with grammatical corrections and
clarifications. Key Words: Addressing systems, transcription factors, gene
regulatory networks, control entropy, genome control architecture,
developmental control networks, CENES, CENOME, interpretive-executive system,
multicellular development, embryogenesis, evolution, Cambrian Explosion,
computational multicellular modeling | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A proof is presented that gene regulatory networks (GRNs) based solely on
transcription factors cannot control the development of complex multicellular
life. GRNs alone cannot explain the evolution of multicellular life in the
Cambrian Explosion. Networks are based on addressing systems which are used to
construct network links. The more complex the network the greater the number of
links and the larger the required address space. It has been assumed that
combinations of transcription factors generate a large enough address space to
form GRNs that are complex enough to control the development of complex
multicellular life. However, it is shown in this article that transcription
factors do not have sufficient combinatorial power to serve as the basis of an
addressing system for regulatory control of genomes in the development of
complex organisms. It is proven that given $n$ transcription factor genes in a
genome and address combinations of length $k$ then there are at most $n/k$
k-length transcription factor addresses in the address space. The complexity of
embryonic development requires a corresponding complexity of control
information in the cell and its genome. Therefore, a different addressing
system must exist to form the complex control networks required for complex
control systems. It is postulated that a new type of network evolved based on
an RNA-DNA addressing system that utilized and subsumed the extant GRNs. These
new developmental control networks are called CENES (for Control genes). The
evolution of these new higher networks would explain how the Cambrian Explosion
was possible. The architecture of these higher level networks may in fact be
universal (modulo syntax) in the genomes of all multicellular life.
| [
{
"created": "Thu, 19 Dec 2013 14:39:18 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Jan 2014 13:27:50 GMT",
"version": "v2"
}
] | 2014-01-28 | [
[
"Werner",
"Eric",
""
]
] | A proof is presented that gene regulatory networks (GRNs) based solely on transcription factors cannot control the development of complex multicellular life. GRNs alone cannot explain the evolution of multicellular life in the Cambrian Explosion. Networks are based on addressing systems which are used to construct network links. The more complex the network the greater the number of links and the larger the required address space. It has been assumed that combinations of transcription factors generate a large enough address space to form GRNs that are complex enough to control the development of complex multicellular life. However, it is shown in this article that transcription factors do not have sufficient combinatorial power to serve as the basis of an addressing system for regulatory control of genomes in the development of complex organisms. It is proven that given $n$ transcription factor genes in a genome and address combinations of length $k$ then there are at most $n/k$ k-length transcription factor addresses in the address space. The complexity of embryonic development requires a corresponding complexity of control information in the cell and its genome. Therefore, a different addressing system must exist to form the complex control networks required for complex control systems. It is postulated that a new type of network evolved based on an RNA-DNA addressing system that utilized and subsumed the extant GRNs. These new developmental control networks are called CENES (for Control genes). The evolution of these new higher networks would explain how the Cambrian Explosion was possible. The architecture of these higher level networks may in fact be universal (modulo syntax) in the genomes of all multicellular life. |
2009.02217 | Peter Gawthrop | Peter J. Gawthrop and Michael Pan | Network Thermodynamical Modelling of Bioelectrical Systems: A Bond Graph
Approach | null | null | 10.1089/bioe.2020.0042 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Interactions between biomolecules, electrons and protons are essential to
many fundamental processes sustaining life. It is therefore of interest to
build mathematical models of these bioelectrical processes not only to enhance
understanding but also to enable computer models to complement in vitro and in
vivo experiments.Such models can never be entirely accurate; it is nevertheless
important that the models are compatible with physical principles. Network
Thermodynamics, as implemented with bond graphs, provide one approach to
creating physically compatible mathematical models of bioelectrical systems.
This is illustrated using simple models of ion channels, redox reactions,
proton pumps and electrogenic membrane transporters thus demonstrating that the
approach can be used to build mathematical and computer models of a wide range
of bioelectrical systems.
| [
{
"created": "Fri, 4 Sep 2020 14:21:18 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Oct 2020 12:20:27 GMT",
"version": "v2"
}
] | 2020-12-08 | [
[
"Gawthrop",
"Peter J.",
""
],
[
"Pan",
"Michael",
""
]
] | Interactions between biomolecules, electrons and protons are essential to many fundamental processes sustaining life. It is therefore of interest to build mathematical models of these bioelectrical processes not only to enhance understanding but also to enable computer models to complement in vitro and in vivo experiments.Such models can never be entirely accurate; it is nevertheless important that the models are compatible with physical principles. Network Thermodynamics, as implemented with bond graphs, provide one approach to creating physically compatible mathematical models of bioelectrical systems. This is illustrated using simple models of ion channels, redox reactions, proton pumps and electrogenic membrane transporters thus demonstrating that the approach can be used to build mathematical and computer models of a wide range of bioelectrical systems. |
2312.13302 | Arthur Leroy | Arthur Leroy, Ai Ling Teh, Frank Dondelinger, Mauricio A. Alvarez,
Dennis Wang | Longitudinal prediction of DNA methylation to forecast epigenetic
outcomes | 18 pages, 12 figures, 3 tables | null | null | null | q-bio.GN cs.LG stat.AP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Interrogating the evolution of biological changes at early stages of life
requires longitudinal profiling of molecules, such as DNA methylation, which
can be challenging with children. We introduce a probabilistic and longitudinal
machine learning framework based on multi-mean Gaussian processes (GPs),
accounting for individual and gene correlations across time. This method
provides future predictions of DNA methylation status at different individual
ages while accounting for uncertainty. Our model is trained on a birth cohort
of children with methylation profiled at ages 0-4, and we demonstrated that the
status of methylation sites for each child can be accurately predicted at ages
5-7. We show that methylation profiles predicted by multi-mean GPs can be used
to estimate other phenotypes, such as epigenetic age, and enable comparison to
other health measures of interest. This approach encourages epigenetic studies
to move towards longitudinal design for investigating epigenetic changes during
development, ageing and disease progression.
| [
{
"created": "Tue, 19 Dec 2023 22:15:27 GMT",
"version": "v1"
}
] | 2023-12-22 | [
[
"Leroy",
"Arthur",
""
],
[
"Teh",
"Ai Ling",
""
],
[
"Dondelinger",
"Frank",
""
],
[
"Alvarez",
"Mauricio A.",
""
],
[
"Wang",
"Dennis",
""
]
] | Interrogating the evolution of biological changes at early stages of life requires longitudinal profiling of molecules, such as DNA methylation, which can be challenging with children. We introduce a probabilistic and longitudinal machine learning framework based on multi-mean Gaussian processes (GPs), accounting for individual and gene correlations across time. This method provides future predictions of DNA methylation status at different individual ages while accounting for uncertainty. Our model is trained on a birth cohort of children with methylation profiled at ages 0-4, and we demonstrated that the status of methylation sites for each child can be accurately predicted at ages 5-7. We show that methylation profiles predicted by multi-mean GPs can be used to estimate other phenotypes, such as epigenetic age, and enable comparison to other health measures of interest. This approach encourages epigenetic studies to move towards longitudinal design for investigating epigenetic changes during development, ageing and disease progression. |
1709.06824 | Tomas Van Pottelbergh | Tomas Van Pottelbergh, Guillaume Drion, Rodolphe Sepulchre | Robust modulation of integrate-and-fire models | This is the authors' final version. The article has been accepted for
publication in Neural Computation | Neural Computation 30:4 (2018) 987-1011 | 10.1162/neco_a_01065 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By controlling the state of neuronal populations, neuromodulators ultimately
affect behaviour. A key neuromodulation mechanism is the alteration of neuronal
excitability via the modulation of ion channel expression. This type of
neuromodulation is normally studied via conductance-based models, but those
models are computationally challenging for large-scale network simulations
needed in population studies. This paper studies the modulation properties of
the Multi-Quadratic Integrate-and-Fire (MQIF) model, a generalisation of the
classical Quadratic Integrate-and-Fire (QIF) model. The model is shown to
combine the computational economy of integrate-and-fire modelling and the
physiological interpretability of conductance-based modelling. It is therefore
a good candidate for affordable computational studies of neuromodulation in
large networks.
| [
{
"created": "Wed, 20 Sep 2017 11:57:15 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Oct 2017 15:38:07 GMT",
"version": "v2"
},
{
"created": "Thu, 16 Nov 2017 13:26:40 GMT",
"version": "v3"
},
{
"created": "Tue, 5 Dec 2017 15:47:30 GMT",
"version": "v4"
}
] | 2020-09-29 | [
[
"Van Pottelbergh",
"Tomas",
""
],
[
"Drion",
"Guillaume",
""
],
[
"Sepulchre",
"Rodolphe",
""
]
] | By controlling the state of neuronal populations, neuromodulators ultimately affect behaviour. A key neuromodulation mechanism is the alteration of neuronal excitability via the modulation of ion channel expression. This type of neuromodulation is normally studied via conductance-based models, but those models are computationally challenging for large-scale network simulations needed in population studies. This paper studies the modulation properties of the Multi-Quadratic Integrate-and-Fire (MQIF) model, a generalisation of the classical Quadratic Integrate-and-Fire (QIF) model. The model is shown to combine the computational economy of integrate-and-fire modelling and the physiological interpretability of conductance-based modelling. It is therefore a good candidate for affordable computational studies of neuromodulation in large networks. |
1401.7567 | Philipp Germann | Dagmar Iber and Philipp Germann | How do digits emerge? - Mathematical Models of Limb Development | Review | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The mechanism that controls digit formation has long intrigued developmental
and theoretical biologists, and many different models and mechanisms have been
proposed. Here we review models of limb development with a specific focus on
digit and long bone formation. Decades of experiments have revealed the basic
signalling circuits that control limb development, and recent advances in
imaging and molecular technologies provide us with unprecedented spatial detail
and a broader view on the regulatory networks. Computational approaches are
important to integrate the available information into a consistent framework
that will allow us to achieve a deeper level of understanding and that will
help with the future planning and interpretation of complex experiments, paving
the way to in silico genetics. Previous models of development had to be focused
on very few, simple regulatory interactions. Algorithmic developments and
increasing computing power now enable the generation and validation of
increasingly realistic models that can be used to test old theories and uncover
new mechanisms.
| [
{
"created": "Wed, 29 Jan 2014 16:06:11 GMT",
"version": "v1"
}
] | 2014-01-30 | [
[
"Iber",
"Dagmar",
""
],
[
"Germann",
"Philipp",
""
]
] | The mechanism that controls digit formation has long intrigued developmental and theoretical biologists, and many different models and mechanisms have been proposed. Here we review models of limb development with a specific focus on digit and long bone formation. Decades of experiments have revealed the basic signalling circuits that control limb development, and recent advances in imaging and molecular technologies provide us with unprecedented spatial detail and a broader view on the regulatory networks. Computational approaches are important to integrate the available information into a consistent framework that will allow us to achieve a deeper level of understanding and that will help with the future planning and interpretation of complex experiments, paving the way to in silico genetics. Previous models of development had to be focused on very few, simple regulatory interactions. Algorithmic developments and increasing computing power now enable the generation and validation of increasingly realistic models that can be used to test old theories and uncover new mechanisms. |
0804.0190 | Usha Devi A. R. | Ramakrishna Chakravarthi, A. K. Rajagopal and A. R. Usha Devi | Quantum Mechanical Basis of Vision | 5 pages, no figures; submitted for publication in the Proceedings of
the India-US Workshop on Science and Technology at the Nabo-Bio Interface,
Bhubaneswar, India held during February 19-22, 2008 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The two striking components of retina, i.e., the light sensitive neural layer
in the eye, by which it responds to light are (the three types of) color
sensitive Cones and color insensitive Rods (which outnumber the cones 20:1).
The interaction between electromagnetic radiation and these photoreceptors
(causing transitions between cis- and trans- states of rhodopsin molecules in
the latter) offers a prime example of physical processes at the nano-bio
interface. After a brief review of the basic facts about vision, we propose a
quantum mechanical model (paralleling the Jaynes-Cummings model (JCM) of
interaction of light with matter) of early vision describing the interaction of
light with the two states of rhodopsin mentioned above. Here we model the early
essential steps in vision incorporating, separately, the two well-known
features of retinal transduction (converting light to neural signals): small
numbers of cones respond to bright light (large number of photons) and large
numbers of rods respond to faint light (small number of photons) with an
amplification scheme. An outline of the method of solution of these respective
models based on quantum density matrix is also indicated. This includes a brief
overview of the theory, based on JCM, of signal amplification required for the
perception of faint light. We envision this methodology, which brings a novel
quantum approach to modeling neural activity, to be a useful paradigm in
developing a better understanding of key visual processes than is possible with
currently available models that completely ignore quantum effects at the
relevant neural level.
| [
{
"created": "Tue, 1 Apr 2008 14:55:34 GMT",
"version": "v1"
}
] | 2008-04-02 | [
[
"Chakravarthi",
"Ramakrishna",
""
],
[
"Rajagopal",
"A. K.",
""
],
[
"Devi",
"A. R. Usha",
""
]
] | The two striking components of retina, i.e., the light sensitive neural layer in the eye, by which it responds to light are (the three types of) color sensitive Cones and color insensitive Rods (which outnumber the cones 20:1). The interaction between electromagnetic radiation and these photoreceptors (causing transitions between cis- and trans- states of rhodopsin molecules in the latter) offers a prime example of physical processes at the nano-bio interface. After a brief review of the basic facts about vision, we propose a quantum mechanical model (paralleling the Jaynes-Cummings model (JCM) of interaction of light with matter) of early vision describing the interaction of light with the two states of rhodopsin mentioned above. Here we model the early essential steps in vision incorporating, separately, the two well-known features of retinal transduction (converting light to neural signals): small numbers of cones respond to bright light (large number of photons) and large numbers of rods respond to faint light (small number of photons) with an amplification scheme. An outline of the method of solution of these respective models based on quantum density matrix is also indicated. This includes a brief overview of the theory, based on JCM, of signal amplification required for the perception of faint light. We envision this methodology, which brings a novel quantum approach to modeling neural activity, to be a useful paradigm in developing a better understanding of key visual processes than is possible with currently available models that completely ignore quantum effects at the relevant neural level. |
1611.04668 | Jacob Aguilar | Jacob B. Aguilar, Juan B. Gutierrez | An Epidemiological Model of Malaria Accounting for Asymptomatic Carriers | null | null | null | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Asymptomatic individuals in the context of malarial disease refers to
subjects who carry a parasite load but do not show clinical symptoms. A correct
understanding of the influence of asymptomatic individuals on transmission
dynamics will provide a comprehensive description of the complex interplay
between the definitive host (female \textit{Anopheles} mosquito), intermediate
host (human) and agent (\textit{Plasmodium} parasite). The goal of this article
is to conduct a rigorous mathematical analysis of a new compartmentalized
malaria model accounting for asymptomatic human hosts for the purpose of
calculating the basic reproductive number ($\mathcal{R}_0$), and determining
the bifurcations that might occur at the onset of disease free equilibrium. A
point of departure of this model from others appearing in literature is that
the asymptomatic compartment is decomposed into two mutually disjoint
sub-compartments by making use of the naturally acquired immunity (NAI) of the
population under consideration. After deriving the model, a qualitative
analysis is carried out to classify the stability of the equilibria of the
system. Our results show that the dynamical system is locally asymptotically
stable provided that $\mathcal{R}_0<1$. However this stability is not global,
owning to the occurrence of a sub-critical bifurcation in which additional
non-trivial sub-threshold equilibrium solutions appear in response to a
specified parameter being perturbed. To ensure that the model does not undergo
a backward bifurcation, we demand that an auxiliary parameter denoted
$\Lambda<1$ in addition to the threshold constraint $\mathcal{R}_0<1$. The
authors hope that this qualitative analysis will fill in the gaps of what is
currently known about asymptomatic malaria and aid in designing strategies that
assist the further development of malaria control and eradication efforts.
| [
{
"created": "Tue, 15 Nov 2016 01:43:48 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Feb 2017 04:05:18 GMT",
"version": "v2"
}
] | 2017-02-20 | [
[
"Aguilar",
"Jacob B.",
""
],
[
"Gutierrez",
"Juan B.",
""
]
] | Asymptomatic individuals in the context of malarial disease refers to subjects who carry a parasite load but do not show clinical symptoms. A correct understanding of the influence of asymptomatic individuals on transmission dynamics will provide a comprehensive description of the complex interplay between the definitive host (female \textit{Anopheles} mosquito), intermediate host (human) and agent (\textit{Plasmodium} parasite). The goal of this article is to conduct a rigorous mathematical analysis of a new compartmentalized malaria model accounting for asymptomatic human hosts for the purpose of calculating the basic reproductive number ($\mathcal{R}_0$), and determining the bifurcations that might occur at the onset of disease free equilibrium. A point of departure of this model from others appearing in literature is that the asymptomatic compartment is decomposed into two mutually disjoint sub-compartments by making use of the naturally acquired immunity (NAI) of the population under consideration. After deriving the model, a qualitative analysis is carried out to classify the stability of the equilibria of the system. Our results show that the dynamical system is locally asymptotically stable provided that $\mathcal{R}_0<1$. However this stability is not global, owning to the occurrence of a sub-critical bifurcation in which additional non-trivial sub-threshold equilibrium solutions appear in response to a specified parameter being perturbed. To ensure that the model does not undergo a backward bifurcation, we demand that an auxiliary parameter denoted $\Lambda<1$ in addition to the threshold constraint $\mathcal{R}_0<1$. The authors hope that this qualitative analysis will fill in the gaps of what is currently known about asymptomatic malaria and aid in designing strategies that assist the further development of malaria control and eradication efforts. |
2009.01216 | Claus Kadelka | Claus Kadelka, Taras-Michael Butrie, Evan Hilton, Jack Kinseth,
Addison Schmidt, Haris Serdarevic | A meta-analysis of Boolean network models reveals design principles of
gene regulatory networks | 51 pages, 19 figures, 2 tables | Science Advances 10.2 (2024): eadj0822 | 10.1126/sciadv.adj0822 | null | q-bio.MN math.DS nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gene regulatory networks (GRNs) play a central role in cellular
decision-making. Understanding their structure and how it impacts their
dynamics constitutes thus a fundamental biological question. GRNs are
frequently modeled as Boolean networks, which are intuitive, simple to
describe, and can yield qualitative results even when data is sparse. We
assembled the largest repository of expert-curated Boolean GRN models. A
meta-analysis of this diverse set of models reveals several design principles.
GRNs exhibit more canalization, redundancy and stable dynamics than expected.
Moreover, they are enriched for certain recurring network motifs. This raises
the important question why evolution favors these design mechanisms.
| [
{
"created": "Wed, 2 Sep 2020 17:48:57 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Sep 2023 20:13:23 GMT",
"version": "v2"
}
] | 2024-01-19 | [
[
"Kadelka",
"Claus",
""
],
[
"Butrie",
"Taras-Michael",
""
],
[
"Hilton",
"Evan",
""
],
[
"Kinseth",
"Jack",
""
],
[
"Schmidt",
"Addison",
""
],
[
"Serdarevic",
"Haris",
""
]
] | Gene regulatory networks (GRNs) play a central role in cellular decision-making. Understanding their structure and how it impacts their dynamics constitutes thus a fundamental biological question. GRNs are frequently modeled as Boolean networks, which are intuitive, simple to describe, and can yield qualitative results even when data is sparse. We assembled the largest repository of expert-curated Boolean GRN models. A meta-analysis of this diverse set of models reveals several design principles. GRNs exhibit more canalization, redundancy and stable dynamics than expected. Moreover, they are enriched for certain recurring network motifs. This raises the important question why evolution favors these design mechanisms. |
2012.09805 | Eugene Shakhnovich | Eugene Serebryany, Sourav Chowdhury, Nicki E. Watson, Arthur
McClelland, and Eugene I. Shakhnovich | A native chemical chaperone in the human eye lens | null | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by-sa/4.0/ | Cataract is one of the most prevalent protein aggregation disorders and still
the biggest cause of vision loss worldwide. The human lens, in its core region,
lacks turnover of any cells or cellular components; it has therefore evolved
remarkable mechanisms for resisting protein aggregation for a lifetime. We now
report that one such mechanism relies on an unusually abundant metabolite,
myo-inositol, to suppress light-scattering aggregation of lens proteins. We
quantified aggregation suppression by in vitro turbidimetry and characterized
both macroscopic and microscopic mechanisms of myo-inositol action using
negative-stain electron microscopy, differential scanning fluorometry, and a
thermal scanning Raman spectroscopy apparatus. Given recent metabolomic
evidence that it is dramatically depleted in human cataractous lenses compared
to age-matched controls, we suggest that maintaining or restoring healthy
levels of myo-inositol in the lens may be a simple, safe, and widely available
strategy for reducing the global burden of cataract.
| [
{
"created": "Thu, 17 Dec 2020 18:14:55 GMT",
"version": "v1"
}
] | 2020-12-18 | [
[
"Serebryany",
"Eugene",
""
],
[
"Chowdhury",
"Sourav",
""
],
[
"Watson",
"Nicki E.",
""
],
[
"McClelland",
"Arthur",
""
],
[
"Shakhnovich",
"Eugene I.",
""
]
] | Cataract is one of the most prevalent protein aggregation disorders and still the biggest cause of vision loss worldwide. The human lens, in its core region, lacks turnover of any cells or cellular components; it has therefore evolved remarkable mechanisms for resisting protein aggregation for a lifetime. We now report that one such mechanism relies on an unusually abundant metabolite, myo-inositol, to suppress light-scattering aggregation of lens proteins. We quantified aggregation suppression by in vitro turbidimetry and characterized both macroscopic and microscopic mechanisms of myo-inositol action using negative-stain electron microscopy, differential scanning fluorometry, and a thermal scanning Raman spectroscopy apparatus. Given recent metabolomic evidence that it is dramatically depleted in human cataractous lenses compared to age-matched controls, we suggest that maintaining or restoring healthy levels of myo-inositol in the lens may be a simple, safe, and widely available strategy for reducing the global burden of cataract. |
1402.3550 | Pietro Cicuta | Matthew A. A. Grant and Bart{\l}omiej Wac{\l}aw and Rosalind J. Allen
and Pietro Cicuta | The role of mechanical forces in the planar-to-bulk transition in
growing Escherichia coli microcolonies | null | null | null | null | q-bio.CB cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mechanical forces are obviously important in the assembly of
three-dimensional multicellular structures, but their detailed role is often
unclear. We have used growing microcolonies of the bacterium \emph{Escherichia
coli} to investigate the role of mechanical forces in the transition from
two-dimensional growth (on the interface between a hard surface and a soft
agarose pad) to three-dimensional growth (invasion of the agarose). We measure
the position within the colony where the invasion transition happens, the cell
density within the colony, and the colony size at the transition as functions
of the concentration of the agarose. We use a phenomenological theory, combined
with individual-based computer simulations, to show how mechanical forces
acting between the bacterial cells, and between the bacteria and the
surrounding matrix, lead to the complex phenomena observed in our experiments -
in particular a non-trivial dependence of the colony size at the transition on
the agarose concentration. Matching these approaches leads to a prediction for
how the friction coefficient between the bacteria and the agarose should vary
with agarose concentration. Our experimental conditions mimic numerous clinical
and environmental scenarios in which bacteria invade soft matrices, as well as
shedding more general light on the transition between two- and
three-dimensional growth in multicellular assemblies.
| [
{
"created": "Fri, 14 Feb 2014 18:59:18 GMT",
"version": "v1"
}
] | 2014-02-17 | [
[
"Grant",
"Matthew A. A.",
""
],
[
"Wacław",
"Bartłomiej",
""
],
[
"Allen",
"Rosalind J.",
""
],
[
"Cicuta",
"Pietro",
""
]
] | Mechanical forces are obviously important in the assembly of three-dimensional multicellular structures, but their detailed role is often unclear. We have used growing microcolonies of the bacterium \emph{Escherichia coli} to investigate the role of mechanical forces in the transition from two-dimensional growth (on the interface between a hard surface and a soft agarose pad) to three-dimensional growth (invasion of the agarose). We measure the position within the colony where the invasion transition happens, the cell density within the colony, and the colony size at the transition as functions of the concentration of the agarose. We use a phenomenological theory, combined with individual-based computer simulations, to show how mechanical forces acting between the bacterial cells, and between the bacteria and the surrounding matrix, lead to the complex phenomena observed in our experiments - in particular a non-trivial dependence of the colony size at the transition on the agarose concentration. Matching these approaches leads to a prediction for how the friction coefficient between the bacteria and the agarose should vary with agarose concentration. Our experimental conditions mimic numerous clinical and environmental scenarios in which bacteria invade soft matrices, as well as shedding more general light on the transition between two- and three-dimensional growth in multicellular assemblies. |
1303.0216 | Mathilde Paris | Mathilde Paris, Tommy Kaplan, Xiao Yong Li, Jacqueline E. Villalta,
Susan E. Lott, Michael B. Eisen | Extensive divergence of transcription factor binding in Drosophila
embryos with highly conserved gene expression | 7 figures, 20 supplementary figures, 6 supplementary tables Paris M,
Kaplan T, Li XY, Villalta JE, Lott SE, et al. (2013) Extensive Divergence of
Transcription Factor Binding in Drosophila Embryos with Highly Conserved Gene
Expression. PLoS Genet 9(9): e1003748. doi:10.1371/journal.pgen.1003748 | null | 10.1371/journal.pgen.1003748 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extensive divergence of transcription factor binding in Drosophila embryos
with highly conserved gene expression
| [
{
"created": "Fri, 1 Mar 2013 16:41:46 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Sep 2013 10:13:52 GMT",
"version": "v2"
}
] | 2013-09-23 | [
[
"Paris",
"Mathilde",
""
],
[
"Kaplan",
"Tommy",
""
],
[
"Li",
"Xiao Yong",
""
],
[
"Villalta",
"Jacqueline E.",
""
],
[
"Lott",
"Susan E.",
""
],
[
"Eisen",
"Michael B.",
""
]
] | Extensive divergence of transcription factor binding in Drosophila embryos with highly conserved gene expression |
1510.06479 | Yukiyasu Kamitani | Tomoyasu Horikawa and Yukiyasu Kamitani | Generic decoding of seen and imagined objects using hierarchical visual
features | null | null | null | null | q-bio.NC cs.CV | http://creativecommons.org/licenses/by/4.0/ | Object recognition is a key function in both human and machine vision. While
recent studies have achieved fMRI decoding of seen and imagined contents, the
prediction is limited to training examples. We present a decoding approach for
arbitrary objects, using the machine vision principle that an object category
is represented by a set of features rendered invariant through hierarchical
processing. We show that visual features including those from a convolutional
neural network can be predicted from fMRI patterns and that greater accuracy is
achieved for low/high-level features with lower/higher-level visual areas,
respectively. Predicted features are used to identify seen/imagined object
categories (extending beyond decoder training) from a set of computed features
for numerous object images. Furthermore, the decoding of imagined objects
reveals progressive recruitment of higher to lower visual representations. Our
results demonstrate a homology between human and machine vision and its utility
for brain-based information retrieval.
| [
{
"created": "Thu, 22 Oct 2015 02:34:03 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Oct 2015 22:47:13 GMT",
"version": "v2"
},
{
"created": "Tue, 27 Sep 2016 14:27:20 GMT",
"version": "v3"
}
] | 2016-09-28 | [
[
"Horikawa",
"Tomoyasu",
""
],
[
"Kamitani",
"Yukiyasu",
""
]
] | Object recognition is a key function in both human and machine vision. While recent studies have achieved fMRI decoding of seen and imagined contents, the prediction is limited to training examples. We present a decoding approach for arbitrary objects, using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features including those from a convolutional neural network can be predicted from fMRI patterns and that greater accuracy is achieved for low/high-level features with lower/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, the decoding of imagined objects reveals progressive recruitment of higher to lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval. |
1612.04873 | Carlos Martinez Mr. | Carlos Alberto Mart\'inez, Kshitij Khare, Syed Rahman, Mauricio A.
Elzo | Introducing Gaussian covariance graph models in genome-wide prediction | 22 pages | null | null | null | q-bio.QM stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several statistical models used in genome-wide prediction assume independence
of marker allele substitution effects, but it is known that these effects might
be correlated. In statistics, graphical models have been identified as a useful
tool for covariance estimation in high dimensional problems and it is an area
that has recently experienced a great expansion. In Gaussian covariance graph
models (GCovGM), the joint distribution of a set of random variables is assumed
to be Gaussian and the pattern of zeros of the covariance matrix is encoded in
terms of an undirected graph G. In this study, methods adapting the theory of
GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and
Bayes GCov-H). In simulated and real datasets, improvements in correlation
between phenotypes and predicted breeding values and accuracies of predicted
breeding values were found. Our models account for correlation of marker
effects and permit to accommodate general structures as opposed to models
proposed in previous studies which consider spatial correlation only. In
addition, they allow incorporation of biological information in the prediction
process through its use when constructing graph G, and their extension to the
multiallelic loci case is straightforward.
| [
{
"created": "Wed, 14 Dec 2016 22:49:12 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Mar 2017 21:19:46 GMT",
"version": "v2"
},
{
"created": "Wed, 12 Apr 2017 14:43:45 GMT",
"version": "v3"
}
] | 2017-04-13 | [
[
"Martínez",
"Carlos Alberto",
""
],
[
"Khare",
"Kshitij",
""
],
[
"Rahman",
"Syed",
""
],
[
"Elzo",
"Mauricio A.",
""
]
] | Several statistical models used in genome-wide prediction assume independence of marker allele substitution effects, but it is known that these effects might be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high dimensional problems and it is an area that has recently experienced a great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated and real datasets, improvements in correlation between phenotypes and predicted breeding values and accuracies of predicted breeding values were found. Our models account for correlation of marker effects and permit to accommodate general structures as opposed to models proposed in previous studies which consider spatial correlation only. In addition, they allow incorporation of biological information in the prediction process through its use when constructing graph G, and their extension to the multiallelic loci case is straightforward. |
2305.01666 | Yangmin Huang | Jinlong Hu, Yangmin Huang, Nan Wang, Shoubin Dong | BrainNPT: Pre-training of Transformer networks for brain network
classification | Prepared to Submit | null | null | null | q-bio.NC cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning methods have advanced quickly in brain imaging analysis over
the past few years, but they are usually restricted by the limited labeled
data. Pre-trained model on unlabeled data has presented promising improvement
in feature learning in many domains, including natural language processing and
computer vision. However, this technique is under-explored in brain network
analysis. In this paper, we focused on pre-training methods with Transformer
networks to leverage existing unlabeled data for brain functional network
classification. First, we proposed a Transformer-based neural network, named as
BrainNPT, for brain functional network classification. The proposed method
leveraged <cls> token as a classification embedding vector for the Transformer
model to effectively capture the representation of brain network. Second, we
proposed a pre-training framework for BrainNPT model to leverage unlabeled
brain network data to learn the structure information of brain networks. The
results of classification experiments demonstrated the BrainNPT model without
pre-training achieved the best performance with the state-of-the-art models,
and the BrainNPT model with pre-training strongly outperformed the
state-of-the-art models. The pre-training BrainNPT model improved 8.75% of
accuracy compared with the model without pre-training. We further compared the
pre-training strategies, analyzed the influence of the parameters of the model,
and interpreted the trained model.
| [
{
"created": "Tue, 2 May 2023 13:01:59 GMT",
"version": "v1"
},
{
"created": "Thu, 4 May 2023 07:19:05 GMT",
"version": "v2"
},
{
"created": "Sun, 2 Jul 2023 01:54:19 GMT",
"version": "v3"
},
{
"created": "Wed, 2 Aug 2023 09:37:14 GMT",
"version": "v4"
}
] | 2023-08-03 | [
[
"Hu",
"Jinlong",
""
],
[
"Huang",
"Yangmin",
""
],
[
"Wang",
"Nan",
""
],
[
"Dong",
"Shoubin",
""
]
] | Deep learning methods have advanced quickly in brain imaging analysis over the past few years, but they are usually restricted by the limited labeled data. Pre-trained model on unlabeled data has presented promising improvement in feature learning in many domains, including natural language processing and computer vision. However, this technique is under-explored in brain network analysis. In this paper, we focused on pre-training methods with Transformer networks to leverage existing unlabeled data for brain functional network classification. First, we proposed a Transformer-based neural network, named as BrainNPT, for brain functional network classification. The proposed method leveraged <cls> token as a classification embedding vector for the Transformer model to effectively capture the representation of brain network. Second, we proposed a pre-training framework for BrainNPT model to leverage unlabeled brain network data to learn the structure information of brain networks. The results of classification experiments demonstrated the BrainNPT model without pre-training achieved the best performance with the state-of-the-art models, and the BrainNPT model with pre-training strongly outperformed the state-of-the-art models. The pre-training BrainNPT model improved 8.75% of accuracy compared with the model without pre-training. We further compared the pre-training strategies, analyzed the influence of the parameters of the model, and interpreted the trained model. |
2005.00750 | Slimane Ben Miled | Slimane Ben Miled and Amira Kebir | Simulations of the spread of COVID-19 and control policies in Tunisia | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop and analyze in this work an epidemiological model for COVID-19
using Tunisian data. Our aims are first to evaluate Tunisian control policies
for COVID-19 and secondly to understand the effect of different screening,
quarantine and containment strategies and the rule of the asymptomatic patients
on the spread of the virus in the Tunisian population. With this work, we show
that Tunisian control policies are efficient in screening infected and
asymptomatic individuals and that if containment and curfew are maintained the
epidemic will be quickly contained.
| [
{
"created": "Sat, 2 May 2020 08:51:37 GMT",
"version": "v1"
}
] | 2020-05-05 | [
[
"Miled",
"Slimane Ben",
""
],
[
"Kebir",
"Amira",
""
]
] | We develop and analyze in this work an epidemiological model for COVID-19 using Tunisian data. Our aims are first to evaluate Tunisian control policies for COVID-19 and secondly to understand the effect of different screening, quarantine and containment strategies and the rule of the asymptomatic patients on the spread of the virus in the Tunisian population. With this work, we show that Tunisian control policies are efficient in screening infected and asymptomatic individuals and that if containment and curfew are maintained the epidemic will be quickly contained. |
1312.6254 | Susmita Roy | Kushal Bagchi and Susmita Roy | Sensitivity of Water Dynamics to Biologically Significant Surfaces of
Monomeric Insulin: Role of Topology and Electrostatic Interactions | 34 pages, 10 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In addition to the biologically active monomer of the protein Insulin
circulating in human blood, the molecule also exists in dimeric and hexameric
forms that are used as storage. The Insulin monomer contains two distinct
surfaces, namely the dimer forming surface (DFS) and the hexamer forming
surface (HFS) that are specifically designed to facilitate the formation of the
dimer and the hexamer, respectively. In order to characterize the structural
and dynamical behaviour of interfacial water molecules near these two surfaces
(DFS and HFS), we performed atomistic molecular dynamics simulations of Insulin
with explicit water. Dynamical characterization reveals that the structural
relaxation of the hydrogen bonds formed between the residues of DFS and the
interfacial water molecules is faster than those formed between water and that
of the HFS. Furthermore, the residence times of water molecules in the protein
hydration layer for both the DFS and HFS are found to be significantly higher
than those for some of the other proteins studied so far, such as HP-36 and
lysozyme. The surface topography and the arrangement of amino acid residues
work together to organize the water molecules in the hydration layer in order
to provide them with a preferred orientation. HFS having a large polar solvent
accessible surface area and a convex extensive nonpolar region, drives the
surrounding water molecules to acquire predominantly a clathrate-like
structure. In contrast, near the DFS, the surrounding water molecules acquire
an inverted orientation owing to the flat curvature of hydrophobic surface and
interrupted hydrophilic residual alignment. We have followed escape trajectory
of several such quasi-bound water molecules from both the surfaces and
constructed free energy surfaces of these water molecules.These free energy
surfaces reveal the differences between the two hydration layers.
| [
{
"created": "Sat, 21 Dec 2013 14:16:11 GMT",
"version": "v1"
}
] | 2013-12-24 | [
[
"Bagchi",
"Kushal",
""
],
[
"Roy",
"Susmita",
""
]
] | In addition to the biologically active monomer of the protein Insulin circulating in human blood, the molecule also exists in dimeric and hexameric forms that are used as storage. The Insulin monomer contains two distinct surfaces, namely the dimer forming surface (DFS) and the hexamer forming surface (HFS) that are specifically designed to facilitate the formation of the dimer and the hexamer, respectively. In order to characterize the structural and dynamical behaviour of interfacial water molecules near these two surfaces (DFS and HFS), we performed atomistic molecular dynamics simulations of Insulin with explicit water. Dynamical characterization reveals that the structural relaxation of the hydrogen bonds formed between the residues of DFS and the interfacial water molecules is faster than those formed between water and that of the HFS. Furthermore, the residence times of water molecules in the protein hydration layer for both the DFS and HFS are found to be significantly higher than those for some of the other proteins studied so far, such as HP-36 and lysozyme. The surface topography and the arrangement of amino acid residues work together to organize the water molecules in the hydration layer in order to provide them with a preferred orientation. HFS having a large polar solvent accessible surface area and a convex extensive nonpolar region, drives the surrounding water molecules to acquire predominantly a clathrate-like structure. In contrast, near the DFS, the surrounding water molecules acquire an inverted orientation owing to the flat curvature of hydrophobic surface and interrupted hydrophilic residual alignment. We have followed escape trajectory of several such quasi-bound water molecules from both the surfaces and constructed free energy surfaces of these water molecules.These free energy surfaces reveal the differences between the two hydration layers. |
q-bio/0508045 | Richard A. Blythe | G. Baxter, R. A. Blythe and A. J. McKane | Exact Solution of the Multi-Allelic Diffusion Model | 56 pages. 15 figures. Requires Elsevier document class | Mathematical Biosciences (2007) v209 pp124-70 | 10.1016/j.mbs.2007.01.001 | null | q-bio.PE cond-mat.stat-mech | null | We give an exact solution to the Kolmogorov equation describing genetic drift
for an arbitrary number of alleles at a given locus. This is achieved by
finding a change of variable which makes the equation separable, and therefore
reduces the problem with an arbitrary number of alleles to the solution of a
set of equations that are essentially no more complicated than that found in
the two-allele case. The same change of variable also renders the Kolmogorov
equation with the effect of mutations added separable, as long as the mutation
matrix has equal entries in each row. Thus this case can also be solved exactly
for an arbitrary number of alleles. The general solution, which is in the form
of a probability distribution, is in agreement with the previously known
results--which were for the cases of two and three alleles only. Results are
also given for a wide range of other quantities of interest, such as the
probabilities of extinction of various numbers of alleles, mean times to these
extinctions, and the means and variances of the allele frequencies. To aid
dissemination, these results are presented in two stages: first of all they are
given without derivations and too much mathematical detail, and then
subsequently derivations and a more technical discussion are provided.
| [
{
"created": "Tue, 30 Aug 2005 13:28:17 GMT",
"version": "v1"
}
] | 2015-05-26 | [
[
"Baxter",
"G.",
""
],
[
"Blythe",
"R. A.",
""
],
[
"McKane",
"A. J.",
""
]
] | We give an exact solution to the Kolmogorov equation describing genetic drift for an arbitrary number of alleles at a given locus. This is achieved by finding a change of variable which makes the equation separable, and therefore reduces the problem with an arbitrary number of alleles to the solution of a set of equations that are essentially no more complicated than that found in the two-allele case. The same change of variable also renders the Kolmogorov equation with the effect of mutations added separable, as long as the mutation matrix has equal entries in each row. Thus this case can also be solved exactly for an arbitrary number of alleles. The general solution, which is in the form of a probability distribution, is in agreement with the previously known results--which were for the cases of two and three alleles only. Results are also given for a wide range of other quantities of interest, such as the probabilities of extinction of various numbers of alleles, mean times to these extinctions, and the means and variances of the allele frequencies. To aid dissemination, these results are presented in two stages: first of all they are given without derivations and too much mathematical detail, and then subsequently derivations and a more technical discussion are provided. |
2001.08570 | Nathaniel Braman | Nathaniel Braman, Mohammed El Adoui, Manasa Vulchi, Paulette Turk,
Maryam Etesami, Pingfu Fu, Kaustav Bera, Stylianos Drisis, Vinay Varadan,
Donna Plecha, Mohammed Benjelloun, Jame Abraham, Anant Madabhushi | Deep learning-based prediction of response to HER2-targeted neoadjuvant
chemotherapy from pre-treatment dynamic breast MRI: A multi-institutional
validation study | Braman and El Adoui contributed equally to this work. 33 pages, 3
figures in main text | null | null | null | q-bio.QM cs.CV cs.LG eess.IV stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting response to neoadjuvant therapy is a vexing challenge in breast
cancer. In this study, we evaluate the ability of deep learning to predict
response to HER2-targeted neo-adjuvant chemotherapy (NAC) from pre-treatment
dynamic contrast-enhanced (DCE) MRI acquired prior to treatment. In a
retrospective study encompassing DCE-MRI data from a total of 157 HER2+ breast
cancer patients from 5 institutions, we developed and validated a deep learning
approach for predicting pathological complete response (pCR) to HER2-targeted
NAC prior to treatment. 100 patients who received HER2-targeted neoadjuvant
chemotherapy at a single institution were used to train (n=85) and tune (n=15)
a convolutional neural network (CNN) to predict pCR. A multi-input CNN
leveraging both pre-contrast and late post-contrast DCE-MRI acquisitions was
identified to achieve optimal response prediction within the validation set
(AUC=0.93). This model was then tested on two independent testing cohorts with
pre-treatment DCE-MRI data. It achieved strong performance in a 28 patient
testing set from a second institution (AUC=0.85, 95% CI 0.67-1.0, p=.0008) and
a 29 patient multicenter trial including data from 3 additional institutions
(AUC=0.77, 95% CI 0.58-0.97, p=0.006). Deep learning-based response prediction
model was found to exceed a multivariable model incorporating predictive
clinical variables (AUC < .65 in testing cohorts) and a model of
semi-quantitative DCE-MRI pharmacokinetic measurements (AUC < .60 in testing
cohorts). The results presented in this work across multiple sites suggest that
with further validation deep learning could provide an effective and reliable
tool to guide targeted therapy in breast cancer, thus reducing overtreatment
among HER2+ patients.
| [
{
"created": "Wed, 22 Jan 2020 17:54:24 GMT",
"version": "v1"
}
] | 2020-01-24 | [
[
"Braman",
"Nathaniel",
""
],
[
"Adoui",
"Mohammed El",
""
],
[
"Vulchi",
"Manasa",
""
],
[
"Turk",
"Paulette",
""
],
[
"Etesami",
"Maryam",
""
],
[
"Fu",
"Pingfu",
""
],
[
"Bera",
"Kaustav",
""
],
[
... | Predicting response to neoadjuvant therapy is a vexing challenge in breast cancer. In this study, we evaluate the ability of deep learning to predict response to HER2-targeted neo-adjuvant chemotherapy (NAC) from pre-treatment dynamic contrast-enhanced (DCE) MRI acquired prior to treatment. In a retrospective study encompassing DCE-MRI data from a total of 157 HER2+ breast cancer patients from 5 institutions, we developed and validated a deep learning approach for predicting pathological complete response (pCR) to HER2-targeted NAC prior to treatment. 100 patients who received HER2-targeted neoadjuvant chemotherapy at a single institution were used to train (n=85) and tune (n=15) a convolutional neural network (CNN) to predict pCR. A multi-input CNN leveraging both pre-contrast and late post-contrast DCE-MRI acquisitions was identified to achieve optimal response prediction within the validation set (AUC=0.93). This model was then tested on two independent testing cohorts with pre-treatment DCE-MRI data. It achieved strong performance in a 28 patient testing set from a second institution (AUC=0.85, 95% CI 0.67-1.0, p=.0008) and a 29 patient multicenter trial including data from 3 additional institutions (AUC=0.77, 95% CI 0.58-0.97, p=0.006). Deep learning-based response prediction model was found to exceed a multivariable model incorporating predictive clinical variables (AUC < .65 in testing cohorts) and a model of semi-quantitative DCE-MRI pharmacokinetic measurements (AUC < .60 in testing cohorts). The results presented in this work across multiple sites suggest that with further validation deep learning could provide an effective and reliable tool to guide targeted therapy in breast cancer, thus reducing overtreatment among HER2+ patients. |
1507.01497 | Neil Rabinowitz | Neil C. Rabinowitz and Robbe L. T. Goris and Johannes Ball\'e and Eero
P. Simoncelli | A model of sensory neural responses in the presence of unknown
modulatory inputs | 9 pages, 5 figures. minor changes since v1: added extra references,
connections to previous models, links to GLMs, complexity measures | null | null | null | q-bio.NC stat.ML | http://creativecommons.org/licenses/by/4.0/ | Neural responses are highly variable, and some portion of this variability
arises from fluctuations in modulatory factors that alter their gain, such as
adaptation, attention, arousal, expected or actual reward, emotion, and local
metabolic resource availability. Regardless of their origin, fluctuations in
these signals can confound or bias the inferences that one derives from spiking
responses. Recent work demonstrates that for sensory neurons, these effects can
be captured by a modulated Poisson model, whose rate is the product of a
stimulus-driven response function and an unknown modulatory signal. Here, we
extend this model, by incorporating explicit modulatory elements that are known
(specifically, spike-history dependence, as in previous models), and by
constraining the remaining latent modulatory signals to be smooth in time. We
develop inference procedures for fitting the entire model, including
hyperparameters, via evidence optimization, and apply these to simulated data,
and to responses of ferret auditory midbrain and cortical neurons to complex
sounds. We show that integrating out the latent modulators yields better (or
more readily-interpretable) receptive field estimates than a standard Poisson
model. Conversely, integrating out the stimulus dependence yields estimates of
the slowly-varying latent modulators.
| [
{
"created": "Mon, 6 Jul 2015 15:31:20 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Jul 2015 01:28:39 GMT",
"version": "v2"
}
] | 2015-07-08 | [
[
"Rabinowitz",
"Neil C.",
""
],
[
"Goris",
"Robbe L. T.",
""
],
[
"Ballé",
"Johannes",
""
],
[
"Simoncelli",
"Eero P.",
""
]
] | Neural responses are highly variable, and some portion of this variability arises from fluctuations in modulatory factors that alter their gain, such as adaptation, attention, arousal, expected or actual reward, emotion, and local metabolic resource availability. Regardless of their origin, fluctuations in these signals can confound or bias the inferences that one derives from spiking responses. Recent work demonstrates that for sensory neurons, these effects can be captured by a modulated Poisson model, whose rate is the product of a stimulus-driven response function and an unknown modulatory signal. Here, we extend this model, by incorporating explicit modulatory elements that are known (specifically, spike-history dependence, as in previous models), and by constraining the remaining latent modulatory signals to be smooth in time. We develop inference procedures for fitting the entire model, including hyperparameters, via evidence optimization, and apply these to simulated data, and to responses of ferret auditory midbrain and cortical neurons to complex sounds. We show that integrating out the latent modulators yields better (or more readily-interpretable) receptive field estimates than a standard Poisson model. Conversely, integrating out the stimulus dependence yields estimates of the slowly-varying latent modulators. |
q-bio/0508011 | Miodrag Krmar | Vladan Pankovic, Rade Glavatovic and Milan Predojevic | Time Reversal of the Increasing Geometrical Progression of the
Population of a Simple Biological Specie | 5 pages, no figures | null | null | PMF 02/08-05 | q-bio.PE | null | In this work we consider time reversal of the increasing geometrical
progression of the population of a simple biological species without any
enemies (predators) in the appropriate environment with unlimited resources
(food, territory, etc.). It is shown that such time reversal corresponds to
appearance of the cannibalism, i.e. self-predaciousness or self-damping
phenomena which can be described by a type of difference Verhulst equation.
| [
{
"created": "Thu, 11 Aug 2005 09:13:28 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Pankovic",
"Vladan",
""
],
[
"Glavatovic",
"Rade",
""
],
[
"Predojevic",
"Milan",
""
]
] | In this work we consider time reversal of the increasing geometrical progression of the population of a simple biological species without any enemies (predators) in the appropriate environment with unlimited resources (food, territory, etc.). It is shown that such time reversal corresponds to appearance of the cannibalism, i.e. self-predaciousness or self-damping phenomena which can be described by a type of difference Verhulst equation. |
1410.6780 | Peter D. Olmsted | Sophia Jordens, Emily E. Riley, Ivan Usov, Lucio Isa, Peter D.
Olmsted, Raffaele Mezzenga | Adsorption at Liquid Interfaces Induces Amyloid Fibril Bending and Ring
Formation | 31 pages, includes main text and supplementary information. Accepted
for publication in ACS Nano; replaced to fix small typos in equation number
cross referencing | ACS Nano 8 (2014) 11071-11079 | 10.1021/nn504249x | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein fibril accumulation at interfaces is an important step in many
physiological processes and neurodegenerative diseases as well as in designing
materials. Here we show, using $\beta$-lactoglobulin fibrils as a model, that
semiflexible fibrils exposed to a surface do not possess the Gaussian
distribution of curvatures characteristic for wormlike chains, but instead
exhibit a spontaneous curvature, which can even lead to ring-like
conformations. The long-lived presence of such rings is confirmed by atomic
force microscopy, cryogenic scanning electron microscopy and passive probe
particle tracking at air- and oil-water interfaces. We reason that this
spontaneous curvature is governed by structural characteristics on the
molecular level and is to be expected when a chiral and polar fibril is placed
in an inhomogeneous environment such as an interface. By testing
$\beta$-lactoglobulin fibrils with varying average thicknesses, we conclude
that fibril thickness plays a determining role in the propensity to form rings.
| [
{
"created": "Fri, 24 Oct 2014 18:57:25 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Oct 2014 22:11:27 GMT",
"version": "v2"
}
] | 2015-05-20 | [
[
"Jordens",
"Sophia",
""
],
[
"Riley",
"Emily E.",
""
],
[
"Usov",
"Ivan",
""
],
[
"Isa",
"Lucio",
""
],
[
"Olmsted",
"Peter D.",
""
],
[
"Mezzenga",
"Raffaele",
""
]
] | Protein fibril accumulation at interfaces is an important step in many physiological processes and neurodegenerative diseases as well as in designing materials. Here we show, using $\beta$-lactoglobulin fibrils as a model, that semiflexible fibrils exposed to a surface do not possess the Gaussian distribution of curvatures characteristic for wormlike chains, but instead exhibit a spontaneous curvature, which can even lead to ring-like conformations. The long-lived presence of such rings is confirmed by atomic force microscopy, cryogenic scanning electron microscopy and passive probe particle tracking at air- and oil-water interfaces. We reason that this spontaneous curvature is governed by structural characteristics on the molecular level and is to be expected when a chiral and polar fibril is placed in an inhomogeneous environment such as an interface. By testing $\beta$-lactoglobulin fibrils with varying average thicknesses, we conclude that fibril thickness plays a determining role in the propensity to form rings. |
1206.0123 | Peter Csermely | Tamas Hegedus, Gergely Gyimesi, Merse E. Gaspar, Kristof Z. Szalay,
Rajeev Gangal and Peter Csermely | Potential application of network descriptions for understanding
conformational changes and protonation states of ABC transporters | 18 pages, 3 Figures and 241 references | Current Pharmaceutical Design, 2013, 19, 4155-4172 | 10.2174/1381612811319230002 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ABC (ATP Binding Cassette) transporter protein superfamily comprises a
large number of ubiquitous and functionally versatile proteins conserved from
archaea to humans. ABC transporters have a key role in many human diseases and
also in the development of multidrug resistance in cancer and in parasites.
Although a dramatic progress has been achieved in ABC protein studies in the
last decades, we are still far from a detailed understanding of their molecular
functions. Several aspects of pharmacological ABC transporter targeting also
remain unclear. Here we summarize the conformational and protonation changes of
ABC transporters and the potential use of this information in pharmacological
design. Network related methods, which recently became useful tools to describe
protein structure and dynamics, have not been applied to study allosteric
coupling in ABC proteins as yet. A detailed description of the strengths and
limitations of these methods is given, and their potential use in describing
ABC transporter dynamics is outlined. Finally, we highlight possible future
aspects of pharmacological utilization of network methods and outline the
future trends of this exciting field.
| [
{
"created": "Fri, 1 Jun 2012 08:31:35 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Sep 2013 05:50:00 GMT",
"version": "v2"
}
] | 2013-09-18 | [
[
"Hegedus",
"Tamas",
""
],
[
"Gyimesi",
"Gergely",
""
],
[
"Gaspar",
"Merse E.",
""
],
[
"Szalay",
"Kristof Z.",
""
],
[
"Gangal",
"Rajeev",
""
],
[
"Csermely",
"Peter",
""
]
] | The ABC (ATP Binding Cassette) transporter protein superfamily comprises a large number of ubiquitous and functionally versatile proteins conserved from archaea to humans. ABC transporters have a key role in many human diseases and also in the development of multidrug resistance in cancer and in parasites. Although a dramatic progress has been achieved in ABC protein studies in the last decades, we are still far from a detailed understanding of their molecular functions. Several aspects of pharmacological ABC transporter targeting also remain unclear. Here we summarize the conformational and protonation changes of ABC transporters and the potential use of this information in pharmacological design. Network related methods, which recently became useful tools to describe protein structure and dynamics, have not been applied to study allosteric coupling in ABC proteins as yet. A detailed description of the strengths and limitations of these methods is given, and their potential use in describing ABC transporter dynamics is outlined. Finally, we highlight possible future aspects of pharmacological utilization of network methods and outline the future trends of this exciting field. |
2201.02340 | Shi Gu | Shikuang Deng, Jingwei Li, B.T. Thomas Yeo, Shi Gu | Control Theory Illustrates the Energy Efficiency in the Dynamic
Reconfiguration of Functional Connectivity | null | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The brain's functional connectivity fluctuates over time instead of remaining
steady in a stationary mode even during the resting state. This fluctuation
establishes the dynamical functional connectivity that transitions in a
non-random order between multiple modes. Yet it remains unexplored how the
transition facilitates the entire brain network as a dynamical system and what
utility this mechanism for dynamic reconfiguration can bring over the widely
used graph theoretical measurements. To address these questions, we propose to
conduct an energetic analysis of functional brain networks using resting-state
fMRI and behavioral measurements from the Human Connectome Project. Through
comparing the state transition energy under distinct adjacent matrices, we
justify that dynamic functional connectivity leads to 60% less energy cost to
support the resting state dynamics than static connectivity when driving the
transition through default mode network. Moreover, we demonstrate that
combining graph theoretical measurements and our energy-based control
measurements as the feature vector can provide complementary prediction power
for the behavioral scores. Our approach integrates statistical inference and
dynamical system inspection towards understanding brain networks.
| [
{
"created": "Fri, 7 Jan 2022 06:30:52 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Mar 2022 08:09:12 GMT",
"version": "v2"
}
] | 2022-03-28 | [
[
"Deng",
"Shikuang",
""
],
[
"Li",
"Jingwei",
""
],
[
"Yeo",
"B. T. Thomas",
""
],
[
"Gu",
"Shi",
""
]
] | The brain's functional connectivity fluctuates over time instead of remaining steady in a stationary mode even during the resting state. This fluctuation establishes the dynamical functional connectivity that transitions in a non-random order between multiple modes. Yet it remains unexplored how the transition facilitates the entire brain network as a dynamical system and what utility this mechanism for dynamic reconfiguration can bring over the widely used graph theoretical measurements. To address these questions, we propose to conduct an energetic analysis of functional brain networks using resting-state fMRI and behavioral measurements from the Human Connectome Project. Through comparing the state transition energy under distinct adjacent matrices, we justify that dynamic functional connectivity leads to 60% less energy cost to support the resting state dynamics than static connectivity when driving the transition through default mode network. Moreover, we demonstrate that combining graph theoretical measurements and our energy-based control measurements as the feature vector can provide complementary prediction power for the behavioral scores. Our approach integrates statistical inference and dynamical system inspection towards understanding brain networks. |
1407.4277 | Naoki Masuda Dr. | Hiroyuki Shimoji, Masato S. Abe, Kazuki Tsuji, Naoki Masuda | Global network structure of dominance hierarchy of ant workers | 5 figures, 2 tables, 4 supplementary figures, 2 supplementary tables | Journal of the Royal Society Interface, 11, 20140599 (2014) | null | null | q-bio.PE cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dominance hierarchy among animals is widespread in various species and
believed to serve to regulate resource allocation within an animal group.
Unlike small groups, however, detection and quantification of linear hierarchy
in large groups of animals are a difficult task. Here, we analyse
aggression-based dominance hierarchies formed by worker ants in Diacamma sp. as
large directed networks. We show that the observed dominance networks are
perfect or approximate directed acyclic graphs, which are consistent with
perfect linear hierarchy. The observed networks are also sparse and random but
significantly different from networks generated through thinning of the perfect
linear tournament (i.e., all individuals are linearly ranked and dominance
relationship exists between every pair of individuals). These results pertain
to global structure of the networks, which contrasts with the previous studies
inspecting frequencies of different types of triads. In addition, the
distribution of the out-degree (i.e., number of workers that the focal worker
attacks), not in-degree (i.e., number of workers that attack the focal worker),
of each observed network is right-skewed. Those having excessively large
out-degrees are located near the top, but not the top, of the hierarchy. We
also discuss evolutionary implications of the discovered properties of
dominance networks.
| [
{
"created": "Wed, 16 Jul 2014 12:25:51 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Aug 2014 04:03:37 GMT",
"version": "v2"
}
] | 2014-08-25 | [
[
"Shimoji",
"Hiroyuki",
""
],
[
"Abe",
"Masato S.",
""
],
[
"Tsuji",
"Kazuki",
""
],
[
"Masuda",
"Naoki",
""
]
] | Dominance hierarchy among animals is widespread in various species and believed to serve to regulate resource allocation within an animal group. Unlike small groups, however, detection and quantification of linear hierarchy in large groups of animals are a difficult task. Here, we analyse aggression-based dominance hierarchies formed by worker ants in Diacamma sp. as large directed networks. We show that the observed dominance networks are perfect or approximate directed acyclic graphs, which are consistent with perfect linear hierarchy. The observed networks are also sparse and random but significantly different from networks generated through thinning of the perfect linear tournament (i.e., all individuals are linearly ranked and dominance relationship exists between every pair of individuals). These results pertain to global structure of the networks, which contrasts with the previous studies inspecting frequencies of different types of triads. In addition, the distribution of the out-degree (i.e., number of workers that the focal worker attacks), not in-degree (i.e., number of workers that attack the focal worker), of each observed network is right-skewed. Those having excessively large out-degrees are located near the top, but not the top, of the hierarchy. We also discuss evolutionary implications of the discovered properties of dominance networks. |
2102.01512 | Takeshi Ishida | Takeshi Ishida | Mimicry mechanism model of octopus epidermis pattern by inverse
operation of Turing reaction model | null | PLOS ONE. August 11, 2021 | 10.1371/journal.pone.0256025 | null | q-bio.QM cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many cephalopods such as octopus and squid change their skin color
purposefully within a very short time. Furthermore, it is widely known that
some octopuses have the ability to change the color and unevenness of the skin
and to mimic the surroundings in short time. However, much research has not
been done on the entire mimicry mechanism in which the octopus recognizes the
surrounding landscape and changes the skin pattern. It seems that there is no
hypothetical model to explain the whole mimicry mechanism yet. In this study,
the mechanism of octopus skin pattern formation was assumed to be based on the
Turing model. Here, the pattern formation by the Turing model was realized by
the equivalent filter calculation model using the cellular automaton, instead
of directly solving the differential equations. It was shown that this model
can create various patterns with two feature parameters. Furthermore, for the
eyes recognition part where two features are extracted from the Turing pattern
image, our study proposed a method that can be calculated back with small
amount of calculation using the characteristics of the cellular Turing pattern
model. These two calculations can be expressed in the same mathematical frame
based on the cellular automaton model using the convolution filter. As a
result, it can be created a model which is capable of extracting features from
patterns and reconstructing patterns in a short time, the model is considered
to be a basic model for considering the mimicry mechanism of octopus. Also, in
terms of application to machine learning, it is considered that it shows the
possibility of leading to a model with a small amount of learning calculation.
| [
{
"created": "Fri, 15 Jan 2021 05:46:23 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Sep 2021 00:37:29 GMT",
"version": "v2"
}
] | 2021-09-03 | [
[
"Ishida",
"Takeshi",
""
]
] | Many cephalopods such as octopus and squid change their skin color purposefully within a very short time. Furthermore, it is widely known that some octopuses have the ability to change the color and unevenness of the skin and to mimic the surroundings in short time. However, much research has not been done on the entire mimicry mechanism in which the octopus recognizes the surrounding landscape and changes the skin pattern. It seems that there is no hypothetical model to explain the whole mimicry mechanism yet. In this study, the mechanism of octopus skin pattern formation was assumed to be based on the Turing model. Here, the pattern formation by the Turing model was realized by the equivalent filter calculation model using the cellular automaton, instead of directly solving the differential equations. It was shown that this model can create various patterns with two feature parameters. Furthermore, for the eyes recognition part where two features are extracted from the Turing pattern image, our study proposed a method that can be calculated back with small amount of calculation using the characteristics of the cellular Turing pattern model. These two calculations can be expressed in the same mathematical frame based on the cellular automaton model using the convolution filter. As a result, it can be created a model which is capable of extracting features from patterns and reconstructing patterns in a short time, the model is considered to be a basic model for considering the mimicry mechanism of octopus. Also, in terms of application to machine learning, it is considered that it shows the possibility of leading to a model with a small amount of learning calculation. |
2406.03456 | Alexander Dack | Alexander Dack, Benjamin Qureshi, Thomas E. Ouldridge, Tomislav Plesa | Recurrent neural chemical reaction networks that approximate arbitrary
dynamics | null | null | null | null | q-bio.MN math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many important phenomena in chemistry and biology are realized via dynamical
features such as multi-stability, oscillations, and chaos. Construction of
novel chemical systems with such finely-tuned dynamics is a challenging problem
central to the growing field of synthetic biology. In this paper, we address
this problem by putting forward a molecular version of a recurrent artificial
neural network, which we call a recurrent neural chemical reaction network
(RNCRN). We prove that the RNCRN, with sufficiently many auxiliary chemical
species and suitable fast reactions, can be systematically trained to achieve
any dynamics. This approximation ability is shown to hold independent of the
initial conditions for the auxiliary species, making the RNCRN more
experimentally feasible. To demonstrate the results, we present a number of
relatively simple RNCRNs trained to display a variety of biologically-important
dynamical features.
| [
{
"created": "Wed, 5 Jun 2024 17:00:16 GMT",
"version": "v1"
}
] | 2024-06-06 | [
[
"Dack",
"Alexander",
""
],
[
"Qureshi",
"Benjamin",
""
],
[
"Ouldridge",
"Thomas E.",
""
],
[
"Plesa",
"Tomislav",
""
]
] | Many important phenomena in chemistry and biology are realized via dynamical features such as multi-stability, oscillations, and chaos. Construction of novel chemical systems with such finely-tuned dynamics is a challenging problem central to the growing field of synthetic biology. In this paper, we address this problem by putting forward a molecular version of a recurrent artificial neural network, which we call a recurrent neural chemical reaction network (RNCRN). We prove that the RNCRN, with sufficiently many auxiliary chemical species and suitable fast reactions, can be systematically trained to achieve any dynamics. This approximation ability is shown to hold independent of the initial conditions for the auxiliary species, making the RNCRN more experimentally feasible. To demonstrate the results, we present a number of relatively simple RNCRNs trained to display a variety of biologically-important dynamical features. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.