id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2405.06729 | Aleix Lafita | Aleix Lafita, Ferran Gonzalez, Mahmoud Hossam, Paul Smyth, Jacob
Deasy, Ari Allyn-Feuer, Daniel Seaton, Stephen Young | Fine-tuning Protein Language Models with Deep Mutational Scanning
improves Variant Effect Prediction | Machine Learning for Genomics Explorations workshop at ICLR 2024 | null | null | null | q-bio.GN cs.LG | http://creativecommons.org/licenses/by/4.0/ | Protein Language Models (PLMs) have emerged as performant and scalable tools
for predicting the functional impact and clinical significance of
protein-coding variants, but they still lag experimental accuracy. Here, we
present a novel fine-tuning approach to improve the performance of PLMs with
experimental maps of variant effects from Deep Mutational Scanning (DMS) assays
using a Normalised Log-odds Ratio (NLR) head. We find consistent improvements
in a held-out protein test set, and on independent DMS and clinical variant
annotation benchmarks from ProteinGym and ClinVar. These findings demonstrate
that DMS is a promising source of sequence diversity and supervised training
data for improving the performance of PLMs for variant effect prediction.
| [
{
"created": "Fri, 10 May 2024 14:50:40 GMT",
"version": "v1"
}
] | 2024-05-14 | [
[
"Lafita",
"Aleix",
""
],
[
"Gonzalez",
"Ferran",
""
],
[
"Hossam",
"Mahmoud",
""
],
[
"Smyth",
"Paul",
""
],
[
"Deasy",
"Jacob",
""
],
[
"Allyn-Feuer",
"Ari",
""
],
[
"Seaton",
"Daniel",
""
],
[
"Young",
"Stephen",
""
]
] | Protein Language Models (PLMs) have emerged as performant and scalable tools for predicting the functional impact and clinical significance of protein-coding variants, but they still lag experimental accuracy. Here, we present a novel fine-tuning approach to improve the performance of PLMs with experimental maps of variant effects from Deep Mutational Scanning (DMS) assays using a Normalised Log-odds Ratio (NLR) head. We find consistent improvements in a held-out protein test set, and on independent DMS and clinical variant annotation benchmarks from ProteinGym and ClinVar. These findings demonstrate that DMS is a promising source of sequence diversity and supervised training data for improving the performance of PLMs for variant effect prediction. |
2111.06920 | Johannes Friedrich | Johannes Friedrich, Siavash Golkar, Shiva Farashahi, Alexander Genkin,
Anirvan M. Sengupta, Dmitri B. Chklovskii | Neural optimal feedback control with local learning rules | Manuscript and supplementary material of NeurIPS 2021 paper | null | null | null | q-bio.NC cs.NE cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major problem in motor control is understanding how the brain plans and
executes proper movements in the face of delayed and noisy stimuli. A prominent
framework for addressing such control problems is Optimal Feedback Control
(OFC). OFC generates control actions that optimize behaviorally relevant
criteria by integrating noisy sensory stimuli and the predictions of an
internal model using the Kalman filter or its extensions. However, a
satisfactory neural model of Kalman filtering and control is lacking because
existing proposals have the following limitations: not considering the delay of
sensory feedback, training in alternating phases, and requiring knowledge of
the noise covariance matrices as well as of the system dynamics. Moreover,
the majority of these studies considered Kalman filtering in isolation, and not
jointly with control. To address these shortcomings, we introduce a novel
online algorithm that combines adaptive Kalman filtering with a model-free
control approach (i.e., policy gradient algorithm). We implement this algorithm
in a biologically plausible neural network with local synaptic plasticity
rules. This network performs system identification and Kalman filtering,
without the need for multiple phases with distinct update rules or the
knowledge of the noise covariances. It can perform state estimation with
delayed sensory feedback, with the help of an internal model. It learns the
control policy without requiring any knowledge of the dynamics, thus avoiding
the need for weight transport. In this way, our implementation of OFC solves
the credit assignment problem needed to produce the appropriate sensory-motor
control in the presence of stimulus delay.
| [
{
"created": "Fri, 12 Nov 2021 20:02:00 GMT",
"version": "v1"
}
] | 2021-11-16 | [
[
"Friedrich",
"Johannes",
""
],
[
"Golkar",
"Siavash",
""
],
[
"Farashahi",
"Shiva",
""
],
[
"Genkin",
"Alexander",
""
],
[
"Sengupta",
"Anirvan M.",
""
],
[
"Chklovskii",
"Dmitri B.",
""
]
] | A major problem in motor control is understanding how the brain plans and executes proper movements in the face of delayed and noisy stimuli. A prominent framework for addressing such control problems is Optimal Feedback Control (OFC). OFC generates control actions that optimize behaviorally relevant criteria by integrating noisy sensory stimuli and the predictions of an internal model using the Kalman filter or its extensions. However, a satisfactory neural model of Kalman filtering and control is lacking because existing proposals have the following limitations: not considering the delay of sensory feedback, training in alternating phases, and requiring knowledge of the noise covariance matrices as well as of the system dynamics. Moreover, the majority of these studies considered Kalman filtering in isolation, and not jointly with control. To address these shortcomings, we introduce a novel online algorithm that combines adaptive Kalman filtering with a model-free control approach (i.e., policy gradient algorithm). We implement this algorithm in a biologically plausible neural network with local synaptic plasticity rules. This network performs system identification and Kalman filtering, without the need for multiple phases with distinct update rules or the knowledge of the noise covariances. It can perform state estimation with delayed sensory feedback, with the help of an internal model. It learns the control policy without requiring any knowledge of the dynamics, thus avoiding the need for weight transport. In this way, our implementation of OFC solves the credit assignment problem needed to produce the appropriate sensory-motor control in the presence of stimulus delay. |
0908.3170 | Ferm\'in Moscoso del Prado PhD | Ferm\'in Moscoso del Prado Mart\'in | The thermodynamics of human reaction times | Submitted manuscript | null | null | null | q-bio.NC cond-mat.dis-nn cond-mat.stat-mech cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | I present a new approach for the interpretation of reaction time (RT) data
from behavioral experiments. From a physical perspective, the entropy of the RT
distribution provides a model-free estimate of the amount of processing
performed by the cognitive system. In this way, the focus is shifted from the
conventional interpretation of individual RTs being either long or short, into
their distribution being more or less complex in terms of entropy. The new
approach enables the estimation of the cognitive processing load without
reference to the informational content of the stimuli themselves, thus
providing a more appropriate estimate of the cognitive impact of different
sources of information that are carried by experimental stimuli or tasks. The
paper introduces the formulation of the theory, followed by an empirical
validation using a database of human RTs in lexical tasks (visual lexical
decision and word naming). The results show that this new interpretation of RTs
is more powerful than the traditional one. The method provides theoretical
estimates of the processing loads elicited by individual stimuli. These loads
sharply distinguish the responses from different tasks. In addition, it
provides upper-bound estimates for the speed at which the system processes
information. Finally, I argue that the theoretical proposal, and the associated
empirical evidence, provide strong arguments for an adaptive system that
systematically adjusts its operational processing speed to the particular
demands of each stimulus. This finding is in contradiction with Hick's law,
which posits a relatively constant processing speed within an experimental
context.
| [
{
"created": "Fri, 21 Aug 2009 17:00:12 GMT",
"version": "v1"
}
] | 2009-08-24 | [
[
"Martín",
"Fermín Moscoso del Prado",
""
]
] | I present a new approach for the interpretation of reaction time (RT) data from behavioral experiments. From a physical perspective, the entropy of the RT distribution provides a model-free estimate of the amount of processing performed by the cognitive system. In this way, the focus is shifted from the conventional interpretation of individual RTs being either long or short, into their distribution being more or less complex in terms of entropy. The new approach enables the estimation of the cognitive processing load without reference to the informational content of the stimuli themselves, thus providing a more appropriate estimate of the cognitive impact of different sources of information that are carried by experimental stimuli or tasks. The paper introduces the formulation of the theory, followed by an empirical validation using a database of human RTs in lexical tasks (visual lexical decision and word naming). The results show that this new interpretation of RTs is more powerful than the traditional one. The method provides theoretical estimates of the processing loads elicited by individual stimuli. These loads sharply distinguish the responses from different tasks. In addition, it provides upper-bound estimates for the speed at which the system processes information. Finally, I argue that the theoretical proposal, and the associated empirical evidence, provide strong arguments for an adaptive system that systematically adjusts its operational processing speed to the particular demands of each stimulus. This finding is in contradiction with Hick's law, which posits a relatively constant processing speed within an experimental context. |
2311.09312 | Gian Marco Visani | Gian Marco Visani, William Galvin, Michael Neal Pun, Armita
Nourmohammad | H-Packer: Holographic Rotationally Equivariant Convolutional Neural
Network for Protein Side-Chain Packing | Accepted as a conference paper at MLCB 2023. 8 pages main body, 20
pages with appendix. 10 figures | null | null | null | q-bio.BM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Accurately modeling protein 3D structure is essential for the design of
functional proteins. An important sub-task of structure modeling is protein
side-chain packing: predicting the conformation of side-chains (rotamers) given
the protein's backbone structure and amino-acid sequence. Conventional
approaches for this task rely on expensive sampling procedures over
hand-crafted energy functions and rotamer libraries. Recently, several deep
learning methods have been developed to tackle the problem in a data-driven
way, albeit with vastly different formulations (from image-to-image translation
to directly predicting atomic coordinates). Here, we frame the problem as a
joint regression over the side-chains' true degrees of freedom: the dihedral
$\chi$ angles. We carefully study possible objective functions for this task,
while accounting for the underlying symmetries of the task. We propose
Holographic Packer (H-Packer), a novel two-stage algorithm for side-chain
packing built on top of two light-weight rotationally equivariant neural
networks. We evaluate our method on CASP13 and CASP14 targets. H-Packer is
computationally efficient and shows favorable performance against conventional
physics-based algorithms and is competitive against alternative deep learning
solutions.
| [
{
"created": "Wed, 15 Nov 2023 19:12:47 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Nov 2023 18:31:07 GMT",
"version": "v2"
}
] | 2023-11-29 | [
[
"Visani",
"Gian Marco",
""
],
[
"Galvin",
"William",
""
],
[
"Pun",
"Michael Neal",
""
],
[
"Nourmohammad",
"Armita",
""
]
] | Accurately modeling protein 3D structure is essential for the design of functional proteins. An important sub-task of structure modeling is protein side-chain packing: predicting the conformation of side-chains (rotamers) given the protein's backbone structure and amino-acid sequence. Conventional approaches for this task rely on expensive sampling procedures over hand-crafted energy functions and rotamer libraries. Recently, several deep learning methods have been developed to tackle the problem in a data-driven way, albeit with vastly different formulations (from image-to-image translation to directly predicting atomic coordinates). Here, we frame the problem as a joint regression over the side-chains' true degrees of freedom: the dihedral $\chi$ angles. We carefully study possible objective functions for this task, while accounting for the underlying symmetries of the task. We propose Holographic Packer (H-Packer), a novel two-stage algorithm for side-chain packing built on top of two light-weight rotationally equivariant neural networks. We evaluate our method on CASP13 and CASP14 targets. H-Packer is computationally efficient and shows favorable performance against conventional physics-based algorithms and is competitive against alternative deep learning solutions. |
0808.3853 | Alain Destexhe | Claude Bedard and Alain Destexhe | Macroscopic models of local field potentials and the apparent 1/f noise
in brain activity | null | Biophysical Journal 96: 2589-2603, 2009. | 10.1016/j.bpj.2008.12.3951 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The power spectrum of local field potentials (LFPs) has been reported to
scale as the inverse of the frequency, but the origin of this "1/f noise" is at
present unclear. Macroscopic measurements in cortical tissue demonstrated that
electric conductivity (as well as permittivity) is frequency dependent, while
other measurements failed to evidence any dependence on frequency. In the
present paper, we propose a model of the genesis of LFPs which accounts for the
above data and contradictions. Starting from first principles (Maxwell
equations), we introduce a macroscopic formalism in which macroscopic
measurements are naturally incorporated, and also examine different physical
causes for the frequency dependence. We suggest that ionic diffusion primes
over electric field effects, and is responsible for the frequency dependence.
This explains the contradictory observations, and also reproduces the 1/f power
spectral structure of LFPs, as well as more complex frequency scaling. Finally,
we suggest a measurement method to reveal the frequency dependence of current
propagation in biological tissue, and which could be used to directly test the
predictions of the present formalism.
| [
{
"created": "Thu, 28 Aug 2008 09:12:40 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Dec 2008 09:53:56 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Bedard",
"Claude",
""
],
[
"Destexhe",
"Alain",
""
]
] | The power spectrum of local field potentials (LFPs) has been reported to scale as the inverse of the frequency, but the origin of this "1/f noise" is at present unclear. Macroscopic measurements in cortical tissue demonstrated that electric conductivity (as well as permittivity) is frequency dependent, while other measurements failed to evidence any dependence on frequency. In the present paper, we propose a model of the genesis of LFPs which accounts for the above data and contradictions. Starting from first principles (Maxwell equations), we introduce a macroscopic formalism in which macroscopic measurements are naturally incorporated, and also examine different physical causes for the frequency dependence. We suggest that ionic diffusion primes over electric field effects, and is responsible for the frequency dependence. This explains the contradictory observations, and also reproduces the 1/f power spectral structure of LFPs, as well as more complex frequency scaling. Finally, we suggest a measurement method to reveal the frequency dependence of current propagation in biological tissue, and which could be used to directly test the predictions of the present formalism. |
2111.04293 | Shubhadeep Sadhukhan | Shubhadeep Sadhukhan, Ashutosh Shukla, Sagar Chakraborty | Subduing always defecting mutants by multiplayer reactive strategies:
Non-reciprocity versus generosity | 10 pages, 5 figures | null | null | null | q-bio.PE nlin.AO physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A completely non-generous and reciprocal population of players can create a
robust cooperating state that cannot be invaded by always defecting free riders
if the interactions among players are repeated for long enough. However, strict
non-generosity and strict reciprocity are idealized concepts, and may not even
be desirable at times. Therefore, how much generosity or non-reciprocity can be
allowed before the population is swamped by the mutants is a natural question.
In this paper, we not only ask this question but also ask how generosity
comparatively fares against non-reciprocity in this context. For mathematical
concreteness, we work within the framework of the multiplayer repeated
prisoner's dilemma game with reactive strategies in finite and infinite
populations, and explore the aforementioned questions through the effects of
the benefit-to-cost ratio, the interaction group size, and the population size.
| [
{
"created": "Mon, 8 Nov 2021 06:34:36 GMT",
"version": "v1"
}
] | 2021-11-09 | [
[
"Sadhukhan",
"Shubhadeep",
""
],
[
"Shukla",
"Ashutosh",
""
],
[
"Chakraborty",
"Sagar",
""
]
] | A completely non-generous and reciprocal population of players can create a robust cooperating state that cannot be invaded by always defecting free riders if the interactions among players are repeated for long enough. However, strict non-generosity and strict reciprocity are idealized concepts, and may not even be desirable at times. Therefore, how much generosity or non-reciprocity can be allowed before the population is swamped by the mutants is a natural question. In this paper, we not only ask this question but also ask how generosity comparatively fares against non-reciprocity in this context. For mathematical concreteness, we work within the framework of the multiplayer repeated prisoner's dilemma game with reactive strategies in finite and infinite populations, and explore the aforementioned questions through the effects of the benefit-to-cost ratio, the interaction group size, and the population size. |
1505.05816 | Peter Ralph | Peter L. Ralph | An empirical approach to demographic inference with genomic data | null | null | null | null | q-bio.PE math.PR stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inference with population genetic data usually treats the population pedigree
as a nuisance parameter, the unobserved product of a past history of random
mating. However, the history of genetic relationships in a given population is
a fixed, unobserved object, and so an alternative approach is to treat this
network of relationships as a complex object we wish to learn about, by
observing how genomes have been noisily passed down through it. This paper
explores this point of view, showing how to translate questions about
population genetic data into calculations with a Poisson process of mutations
on all ancestral genomes. This method is applied to give a robust
interpretation to the $f_4$ statistic used to identify admixture, and to design
a new statistic that measures covariances in mean times to most recent common
ancestor between two pairs of sequences. The method more generally interprets
population genetic statistics in terms of sums of specific functions over
ancestral genomes, thereby providing concrete, broadly interpretable
interpretations for these statistics. This provides a method for describing
demographic history without simplified demographic models. More generally, it
brings into focus the population pedigree, which is averaged over in
model-based demographic inference.
| [
{
"created": "Thu, 21 May 2015 18:13:54 GMT",
"version": "v1"
},
{
"created": "Wed, 11 Jul 2018 23:12:10 GMT",
"version": "v2"
},
{
"created": "Mon, 1 Apr 2019 17:05:36 GMT",
"version": "v3"
}
] | 2019-04-02 | [
[
"Ralph",
"Peter L.",
""
]
] | Inference with population genetic data usually treats the population pedigree as a nuisance parameter, the unobserved product of a past history of random mating. However, the history of genetic relationships in a given population is a fixed, unobserved object, and so an alternative approach is to treat this network of relationships as a complex object we wish to learn about, by observing how genomes have been noisily passed down through it. This paper explores this point of view, showing how to translate questions about population genetic data into calculations with a Poisson process of mutations on all ancestral genomes. This method is applied to give a robust interpretation to the $f_4$ statistic used to identify admixture, and to design a new statistic that measures covariances in mean times to most recent common ancestor between two pairs of sequences. The method more generally interprets population genetic statistics in terms of sums of specific functions over ancestral genomes, thereby providing concrete, broadly interpretable interpretations for these statistics. This provides a method for describing demographic history without simplified demographic models. More generally, it brings into focus the population pedigree, which is averaged over in model-based demographic inference. |
2401.09857 | Agnieszka Pregowska | Zofia Rudnicka, Janusz Szczepanski, Agnieszka Pregowska | Artificial Intelligence-based algorithms in medical image scan
segmentation and intelligent visual-content generation -- a concise overview | null | https://www.mdpi.com/2079-9292/13/4/746 | 10.3390/electronics13040746 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Artificial Intelligence (AI)-based algorithms have revolutionized
medical image segmentation processes. Precise segmentation of organs and
their lesions may contribute to an efficient diagnostics process and a more
effective selection of targeted therapies, as well as increasing the
effectiveness of the training process. In this context, AI may contribute to
the automatization of the image scan segmentation process and increase the
quality of the resulting 3D objects, which may lead to the generation of more
realistic virtual objects. In this paper, we focus on AI-based solutions
applied in medical image scan segmentation and intelligent visual-content
generation, i.e. computer-generated three-dimensional (3D) images in the
context of Extended Reality (XR). We consider different types of neural
networks used, with a special emphasis on the learning rules applied, taking
into account algorithm accuracy and performance, as well as open data
availability. This paper attempts to summarize the current development of
AI-based segmentation methods in medical imaging and intelligent visual-content
generation applied in XR. It also concludes with possible developments
and open challenges of AI applications in Extended Reality-based solutions.
Finally, future lines of research and development directions of Artificial
Intelligence applications, both in medical image segmentation and Extended
Reality-based medical solutions, are discussed.
| [
{
"created": "Thu, 18 Jan 2024 10:12:50 GMT",
"version": "v1"
}
] | 2024-03-21 | [
[
"Rudnicka",
"Zofia",
""
],
[
"Szczepanski",
"Janusz",
""
],
[
"Pregowska",
"Agnieszka",
""
]
] | Recently, Artificial Intelligence (AI)-based algorithms have revolutionized medical image segmentation processes. Precise segmentation of organs and their lesions may contribute to an efficient diagnostics process and a more effective selection of targeted therapies, as well as increasing the effectiveness of the training process. In this context, AI may contribute to the automatization of the image scan segmentation process and increase the quality of the resulting 3D objects, which may lead to the generation of more realistic virtual objects. In this paper, we focus on AI-based solutions applied in medical image scan segmentation and intelligent visual-content generation, i.e. computer-generated three-dimensional (3D) images in the context of Extended Reality (XR). We consider different types of neural networks used, with a special emphasis on the learning rules applied, taking into account algorithm accuracy and performance, as well as open data availability. This paper attempts to summarize the current development of AI-based segmentation methods in medical imaging and intelligent visual-content generation applied in XR. It also concludes with possible developments and open challenges of AI applications in Extended Reality-based solutions. Finally, future lines of research and development directions of Artificial Intelligence applications, both in medical image segmentation and Extended Reality-based medical solutions, are discussed. |
1601.04253 | Yuri A. Dabaghian | Kentaro Hoffman, Andrey Babichev and Yuri Dabaghian | Topological mapping of space in bat hippocampus | 14 pages, 4 figures, 3 supplementary figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The mammalian hippocampus plays a key role in spatial learning and memory, but
the exact nature of the hippocampal representation of space is still being
explored. Recently, there has been a fair amount of success in modeling
hippocampal spatial maps in rats, assuming a topological perspective on spatial
information processing. In this paper, we use the topological model to study
$3D$ learning in bats, which produces several insights into neurophysiological
mechanisms of hippocampal spatial mapping. First, we demonstrate the functional
importance of cell assemblies for producing accurate maps of the $3D$
environments. Second, the model suggests that the readout neurons in these cell
assemblies should function as integrators of synaptic inputs, rather than
as detectors of place cells' coactivity, and allows estimating the integration
time window. Lastly, the model suggests that, in contrast with relatively
slow-moving rats, suppressing $\theta$-precession in bats improves the place
cells' capacity to encode spatial maps, which is consistent with the experimental
observations.
| [
{
"created": "Sun, 17 Jan 2016 05:56:02 GMT",
"version": "v1"
}
] | 2016-01-19 | [
[
"Hoffman",
"Kentaro",
""
],
[
"Babichev",
"Andrey",
""
],
[
"Dabaghian",
"Yuri",
""
]
] | The mammalian hippocampus plays a key role in spatial learning and memory, but the exact nature of the hippocampal representation of space is still being explored. Recently, there has been a fair amount of success in modeling hippocampal spatial maps in rats, assuming a topological perspective on spatial information processing. In this paper, we use the topological model to study $3D$ learning in bats, which produces several insights into neurophysiological mechanisms of hippocampal spatial mapping. First, we demonstrate the functional importance of cell assemblies for producing accurate maps of the $3D$ environments. Second, the model suggests that the readout neurons in these cell assemblies should function as integrators of synaptic inputs, rather than as detectors of place cells' coactivity, and allows estimating the integration time window. Lastly, the model suggests that, in contrast with relatively slow-moving rats, suppressing $\theta$-precession in bats improves the place cells' capacity to encode spatial maps, which is consistent with the experimental observations. |
1507.02812 | Jianxi Luo | Jianxi Luo | Loops and autonomy promote evolvability of ecosystem networks | null | Scientific Reports 4, 6440 (2014) | 10.1038/srep06440 | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The structure of ecological networks, in particular food webs, determines
their ability to evolve further, i.e. evolvability. The knowledge about how
food web evolvability is determined by the structures of diverse ecological
networks can guide human interventions purposefully to either promote or limit
evolvability of ecosystems. However, the focus of prior food web studies was on
stability and robustness; little is known regarding the impact of ecological
network structures on their evolvability. To correlate ecosystem structure and
evolvability, we adopt the NK model originally from evolutionary biology to
generate and assess the ruggedness of fitness landscapes of a wide spectrum of
model food webs with gradual variation in the amount of feeding loops and link
density. The variation in network structures is controlled by linkage rewiring.
Our results show that more feeding loops and lower trophic link density, i.e.
higher autonomy of species, of food webs increase the potential for the
ecosystem to generate heritable variations with improved fitness. Our findings
allow the prediction of the evolvability of actual food webs according to their
network structures, and provide guidance to enhancing or controlling the
evolvability of specific ecosystems.
| [
{
"created": "Fri, 10 Jul 2015 09:09:48 GMT",
"version": "v1"
}
] | 2015-07-13 | [
[
"Luo",
"Jianxi",
""
]
] | The structure of ecological networks, in particular food webs, determines their ability to evolve further, i.e. evolvability. The knowledge about how food web evolvability is determined by the structures of diverse ecological networks can guide human interventions purposefully to either promote or limit evolvability of ecosystems. However, the focus of prior food web studies was on stability and robustness; little is known regarding the impact of ecological network structures on their evolvability. To correlate ecosystem structure and evolvability, we adopt the NK model originally from evolutionary biology to generate and assess the ruggedness of fitness landscapes of a wide spectrum of model food webs with gradual variation in the amount of feeding loops and link density. The variation in network structures is controlled by linkage rewiring. Our results show that more feeding loops and lower trophic link density, i.e. higher autonomy of species, of food webs increase the potential for the ecosystem to generate heritable variations with improved fitness. Our findings allow the prediction of the evolvability of actual food webs according to their network structures, and provide guidance to enhancing or controlling the evolvability of specific ecosystems. |
2007.11911 | Manoj Kumar | Manoj Kumar, Syed Abbas | Age structured SIR model for the spread of infectious diseases through
indirect contacts | 14 pages, 3 figures | null | null | null | q-bio.PE math.DS physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, we discuss an age-structured SIR model in which the disease
spreads not only through direct person-to-person contacts but also through
indirect contacts, e.g. infection due to surface contamination. It is evident
that age also plays a crucial role in SARS virus infections, including
COVID-19. We formulate our model as an abstract semilinear Cauchy problem in
an appropriate Banach space to show the existence of a solution, and we also
show the existence of steady states. We assume in this work that the
population is in a demographic stationary state and show that there is no
disease-free equilibrium point as long as there is transmission of infection
due to indirect contacts in the environment.
| [
{
"created": "Thu, 23 Jul 2020 10:35:08 GMT",
"version": "v1"
}
] | 2020-07-24 | [
[
"Kumar",
"Manoj",
""
],
[
"Abbas",
"Syed",
""
]
] | In this article, we discuss an age-structured SIR model in which disease spreads not only through direct person-to-person contacts but also through indirect contacts, e.g. infection due to surface contamination. It is evident that age also plays a crucial role in SARS virus infection, including COVID-19 infection. We formulate our model as an abstract semilinear Cauchy problem in an appropriate Banach space to show the existence of a solution and also show the existence of steady states. We assume in this work that the population is in a demographic stationary state and show that there is no disease-free equilibrium point as long as there is transmission of infection due to indirect contacts in the environment. |
2005.12441 | Eve Armstrong | Eve Armstrong, Manuela Runge, Jaline Gerardin | Identifying the measurements required to estimate rates of COVID-19
transmission, infection, and detection, using variational data assimilation | 9 pages, 7 figures | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We demonstrate the ability of statistical data assimilation to identify the
measurements required for accurate state and parameter estimation in an
epidemiological model for the novel coronavirus disease COVID-19. Our context
is an effort to inform policy regarding social behavior, to mitigate strain on
hospital capacity. The model unknowns are taken to be: the time-varying
transmission rate, the fraction of exposed cases that require hospitalization,
and the time-varying detection probabilities of new asymptomatic and
symptomatic cases. In simulations, we obtain accurate estimates of undetected
(that is, unmeasured) infectious populations, by measuring the detected cases
together with the recovered and dead - and without assumed knowledge of the
detection rates. Given a noiseless measurement of the recovered population,
excellent estimates of all quantities are obtained using a temporal baseline of
101 days, with the exception of the time-varying transmission rate at times
prior to the implementation of social distancing. With low noise added to the
recovered population, accurate state estimates require a lengthening of the
temporal baseline of measurements. Estimates of all parameters are sensitive to
the contamination, highlighting the need for accurate and uniform methods of
reporting. The aim of this paper is to exemplify the power of SDA to determine
what properties of measurements will yield estimates of unknown parameters to a
desired precision, in a model with the complexity required to capture important
features of the COVID-19 pandemic.
| [
{
"created": "Mon, 25 May 2020 23:29:26 GMT",
"version": "v1"
},
{
"created": "Fri, 31 Jul 2020 18:17:40 GMT",
"version": "v2"
}
] | 2020-08-04 | [
[
"Armstrong",
"Eve",
""
],
[
"Runge",
"Manuela",
""
],
[
"Gerardin",
"Jaline",
""
]
] | We demonstrate the ability of statistical data assimilation to identify the measurements required for accurate state and parameter estimation in an epidemiological model for the novel coronavirus disease COVID-19. Our context is an effort to inform policy regarding social behavior, to mitigate strain on hospital capacity. The model unknowns are taken to be: the time-varying transmission rate, the fraction of exposed cases that require hospitalization, and the time-varying detection probabilities of new asymptomatic and symptomatic cases. In simulations, we obtain accurate estimates of undetected (that is, unmeasured) infectious populations, by measuring the detected cases together with the recovered and dead - and without assumed knowledge of the detection rates. Given a noiseless measurement of the recovered population, excellent estimates of all quantities are obtained using a temporal baseline of 101 days, with the exception of the time-varying transmission rate at times prior to the implementation of social distancing. With low noise added to the recovered population, accurate state estimates require a lengthening of the temporal baseline of measurements. Estimates of all parameters are sensitive to the contamination, highlighting the need for accurate and uniform methods of reporting. The aim of this paper is to exemplify the power of SDA to determine what properties of measurements will yield estimates of unknown parameters to a desired precision, in a model with the complexity required to capture important features of the COVID-19 pandemic. |
2211.14169 | Tim Dockhorn | Karsten Kreis, Tim Dockhorn, Zihao Li, Ellen Zhong | Latent Space Diffusion Models of Cryo-EM Structures | Machine Learning for Structural Biology Workshop, NeurIPS 2022 (Oral) | null | null | null | q-bio.QM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cryo-electron microscopy (cryo-EM) is unique among tools in structural
biology in its ability to image large, dynamic protein complexes. Key to this
ability is image processing algorithms for heterogeneous cryo-EM
reconstruction, including recent deep learning-based approaches. The
state-of-the-art method cryoDRGN uses a Variational Autoencoder (VAE) framework
to learn a continuous distribution of protein structures from single particle
cryo-EM imaging data. While cryoDRGN can model complex structural motions, the
Gaussian prior distribution of the VAE fails to match the aggregate approximate
posterior, which prevents generative sampling of structures especially for
multi-modal distributions (e.g. compositional heterogeneity). Here, we train a
diffusion model as an expressive, learnable prior in the cryoDRGN framework.
Our approach learns a high-quality generative model over molecular
conformations directly from cryo-EM imaging data. We show the ability to sample
from the model on two synthetic and two real datasets, where samples accurately
follow the data distribution unlike samples from the VAE prior distribution. We
also demonstrate how the diffusion model prior can be leveraged for fast latent
space traversal and interpolation between states of interest. By learning an
accurate model of the data distribution, our method unlocks tools in generative
modeling, sampling, and distribution analysis for heterogeneous cryo-EM
ensembles.
| [
{
"created": "Fri, 25 Nov 2022 15:17:10 GMT",
"version": "v1"
}
] | 2022-11-28 | [
[
"Kreis",
"Karsten",
""
],
[
"Dockhorn",
"Tim",
""
],
[
"Li",
"Zihao",
""
],
[
"Zhong",
"Ellen",
""
]
] | Cryo-electron microscopy (cryo-EM) is unique among tools in structural biology in its ability to image large, dynamic protein complexes. Key to this ability is image processing algorithms for heterogeneous cryo-EM reconstruction, including recent deep learning-based approaches. The state-of-the-art method cryoDRGN uses a Variational Autoencoder (VAE) framework to learn a continuous distribution of protein structures from single particle cryo-EM imaging data. While cryoDRGN can model complex structural motions, the Gaussian prior distribution of the VAE fails to match the aggregate approximate posterior, which prevents generative sampling of structures especially for multi-modal distributions (e.g. compositional heterogeneity). Here, we train a diffusion model as an expressive, learnable prior in the cryoDRGN framework. Our approach learns a high-quality generative model over molecular conformations directly from cryo-EM imaging data. We show the ability to sample from the model on two synthetic and two real datasets, where samples accurately follow the data distribution unlike samples from the VAE prior distribution. We also demonstrate how the diffusion model prior can be leveraged for fast latent space traversal and interpolation between states of interest. By learning an accurate model of the data distribution, our method unlocks tools in generative modeling, sampling, and distribution analysis for heterogeneous cryo-EM ensembles. |
1602.06404 | Da Zhou Dr. | Bin Wu, Shanjun Mao, Jiazeng Wang, Da Zhou | Control of epidemics via social partnership adjustment | 29 pages, 8 figures | Phys. Rev. E 94, 062314 (2016) | 10.1103/PhysRevE.94.062314 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epidemic control is of great importance for human society. Adjusting
interacting partners is an effective individualized control strategy.
Intuitively, it is done either by shortening the interaction time between
susceptible and infected individuals or by increasing the opportunities for
contact between susceptible individuals. Here, we provide a comparative study
on these two control strategies by establishing an epidemic model with
non-uniform stochastic interactions. It seems that the two strategies should be
similar, since shortening the interaction time between susceptible and infected
individuals somehow increases the chances for contact between susceptible
individuals. However, analytical results indicate that the effectiveness of the
former strategy sensitively depends on the infectious intensity and the
combinations of different interaction rates, whereas the latter one is quite
robust and efficient. Simulations are shown in comparison with our analytical
predictions. Our work may shed light on the strategic choice of disease
control.
| [
{
"created": "Sat, 20 Feb 2016 13:03:20 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Nov 2016 11:30:39 GMT",
"version": "v2"
},
{
"created": "Sat, 24 Dec 2016 07:05:37 GMT",
"version": "v3"
}
] | 2016-12-28 | [
[
"Wu",
"Bin",
""
],
[
"Mao",
"Shanjun",
""
],
[
"Wang",
"Jiazeng",
""
],
[
"Zhou",
"Da",
""
]
] | Epidemic control is of great importance for human society. Adjusting interacting partners is an effective individualized control strategy. Intuitively, it is done either by shortening the interaction time between susceptible and infected individuals or by increasing the opportunities for contact between susceptible individuals. Here, we provide a comparative study on these two control strategies by establishing an epidemic model with non-uniform stochastic interactions. It seems that the two strategies should be similar, since shortening the interaction time between susceptible and infected individuals somehow increases the chances for contact between susceptible individuals. However, analytical results indicate that the effectiveness of the former strategy sensitively depends on the infectious intensity and the combinations of different interaction rates, whereas the latter one is quite robust and efficient. Simulations are shown in comparison with our analytical predictions. Our work may shed light on the strategic choice of disease control. |
1808.01963 | Romain M. Yvinec | Mohammed Akli Ayoub, Romain Yvinec, Pascale Cr\'epieux and Anne Poupon | Computational modeling approaches in gonadotropin signaling | null | Theriogenology, 86(1):22-31 (2016) | 10.1016/j.theriogenology.2016.04.015 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Follicle-stimulating hormone (FSH) and luteinizing hormone (LH) play
essential roles in animal reproduction. They exert their function through
binding to their cognate receptors, which belong to the large family of G
protein-coupled receptors (GPCRs). This recognition at the plasma membrane
triggers a plethora of cellular events, whose processing and integration
ultimately lead to an adapted biological response. Understanding the nature and
the kinetics of these events is essential for innovative approaches in drug
discovery. The study and manipulation of such complex systems requires the use
of computational modeling approaches combined with robust in vitro functional
assays for calibration and validation. Modeling brings a detailed understanding
of the system and can also be used to understand why existing drugs do not work
as well as expected, and how to design more efficient ones.
| [
{
"created": "Tue, 31 Jul 2018 13:29:50 GMT",
"version": "v1"
}
] | 2018-08-07 | [
[
"Ayoub",
"Mohammed Akli",
""
],
[
"Yvinec",
"Romain",
""
],
[
"Crépieux",
"Pascale",
""
],
[
"Poupon",
"Anne",
""
]
] | Follicle-stimulating hormone (FSH) and luteinizing hormone (LH) play essential roles in animal reproduction. They exert their function through binding to their cognate receptors, which belong to the large family of G protein-coupled receptors (GPCRs). This recognition at the plasma membrane triggers a plethora of cellular events, whose processing and integration ultimately lead to an adapted biological response. Understanding the nature and the kinetics of these events is essential for innovative approaches in drug discovery. The study and manipulation of such complex systems requires the use of computational modeling approaches combined with robust in vitro functional assays for calibration and validation. Modeling brings a detailed understanding of the system and can also be used to understand why existing drugs do not work as well as expected, and how to design more efficient ones. |
1104.2515 | Alain Barrat | L. Isella, M. Romano, A. Barrat, C. Cattuto, V. Colizza, W. Van den
Broeck, F. Gesualdo, E. Pandolfi, L. Rav\`a, C. Rizzo, A.E. Tozzi | Close encounters in a pediatric ward: measuring face-to-face proximity
and mixing patterns with wearable sensors | null | PLoS ONE 6(2): e17144 (2011) | 10.1371/journal.pone.0017144 | null | q-bio.QM cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nosocomial infections place a substantial burden on health care systems and
represent a major issue in current public health, requiring notable efforts
for their prevention. Understanding the dynamics of infection transmission in a
hospital setting is essential for tailoring interventions and predicting the
spread among individuals. Mathematical models need to be informed with accurate
data on contacts among individuals. We used wearable active Radio-Frequency
Identification Devices to detect face-to-face contacts among individuals with a
spatial resolution of about 1.5 meters, and a time resolution of 20 seconds.
The study was conducted in a general pediatrics hospital ward, during a
one-week period, and included 119 participants. Nearly 16,000 contacts were
recorded during the study, with a median of approximately 20 contacts per
participant per day. Overall, 25% of the contacts involved a ward assistant,
23% a nurse, 22% a patient, 22% a caregiver, and 8% a physician. The majority
of contacts were of brief duration, but long and frequent contacts especially
between patients and caregivers were also found. In the setting under study,
caregivers do not represent a significant potential for infection spread to a
large number of individuals, as their interactions mainly involve the
corresponding patient. Nurses would deserve priority in prevention strategies
due to their central role in the potential propagation paths of infections. Our
study shows the feasibility of accurate and reproducible measures of the
pattern of contacts in a hospital setting. The results are particularly useful
for the study of the spread of respiratory infections, for monitoring critical
patterns, and for setting up tailored prevention strategies. Proximity-sensing
technology should be considered as a valuable tool for measuring such patterns
and evaluating nosocomial prevention strategies in specific settings.
| [
{
"created": "Wed, 13 Apr 2011 14:42:48 GMT",
"version": "v1"
}
] | 2011-04-14 | [
[
"Isella",
"L.",
""
],
[
"Romano",
"M.",
""
],
[
"Barrat",
"A.",
""
],
[
"Cattuto",
"C.",
""
],
[
"Colizza",
"V.",
""
],
[
"Broeck",
"W. Van den",
""
],
[
"Gesualdo",
"F.",
""
],
[
"Pandolfi",
"E.",
""
],
[
"Ravà",
"L.",
""
],
[
"Rizzo",
"C.",
""
],
[
"Tozzi",
"A. E.",
""
]
] | Nosocomial infections place a substantial burden on health care systems and represent a major issue in current public health, requiring notable efforts for their prevention. Understanding the dynamics of infection transmission in a hospital setting is essential for tailoring interventions and predicting the spread among individuals. Mathematical models need to be informed with accurate data on contacts among individuals. We used wearable active Radio-Frequency Identification Devices to detect face-to-face contacts among individuals with a spatial resolution of about 1.5 meters, and a time resolution of 20 seconds. The study was conducted in a general pediatrics hospital ward, during a one-week period, and included 119 participants. Nearly 16,000 contacts were recorded during the study, with a median of approximately 20 contacts per participant per day. Overall, 25% of the contacts involved a ward assistant, 23% a nurse, 22% a patient, 22% a caregiver, and 8% a physician. The majority of contacts were of brief duration, but long and frequent contacts especially between patients and caregivers were also found. In the setting under study, caregivers do not represent a significant potential for infection spread to a large number of individuals, as their interactions mainly involve the corresponding patient. Nurses would deserve priority in prevention strategies due to their central role in the potential propagation paths of infections. Our study shows the feasibility of accurate and reproducible measures of the pattern of contacts in a hospital setting. The results are particularly useful for the study of the spread of respiratory infections, for monitoring critical patterns, and for setting up tailored prevention strategies. Proximity-sensing technology should be considered as a valuable tool for measuring such patterns and evaluating nosocomial prevention strategies in specific settings. |
1912.02138 | Jes\'us Fern\'andez-S\'anchez | Marta Casanellas, Jes\'us Fern\'andez-S\'anchez, Marina
Garrote-L\'opez | Distance to the stochastic part of phylogenetic varieties | 33 pages; 11 figures; to appear in Journal of Symbolic Computation | null | 10.1016/j.jsc.2020.09.003 | null | q-bio.PE math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modelling the substitution of nucleotides along a phylogenetic tree is
usually done by a hidden Markov process. This allows one to define a
distribution
of characters at the leaves of the trees and one might be able to obtain
polynomial relationships among the probabilities of different characters. The
study of these polynomials and the geometry of the algebraic varieties defined
by them can be used to reconstruct phylogenetic trees. However, not all points
in these algebraic varieties have biological meaning. In this paper, we explore
the extent to which adding semi-algebraic conditions arising from the
restriction to parameters with statistical meaning can improve existing methods
of phylogenetic reconstruction. To this end, our aim is to compute the distance
of data points to algebraic varieties and to the stochastic part of these
varieties. Computing these distances involves optimization by nonlinear
programming algorithms. We use analytical methods to find some of these
distances for quartet trees evolving under the Kimura 3-parameter or the
Jukes-Cantor models. Numerical algebraic geometry and computational algebra
also play a fundamental role in this paper.
| [
{
"created": "Wed, 4 Dec 2019 17:41:11 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Oct 2020 06:57:25 GMT",
"version": "v2"
}
] | 2020-10-12 | [
[
"Casanellas",
"Marta",
""
],
[
"Fernández-Sánchez",
"Jesús",
""
],
[
"Garrote-López",
"Marina",
""
]
] | Modelling the substitution of nucleotides along a phylogenetic tree is usually done by a hidden Markov process. This allows one to define a distribution of characters at the leaves of the trees and one might be able to obtain polynomial relationships among the probabilities of different characters. The study of these polynomials and the geometry of the algebraic varieties defined by them can be used to reconstruct phylogenetic trees. However, not all points in these algebraic varieties have biological meaning. In this paper, we explore the extent to which adding semi-algebraic conditions arising from the restriction to parameters with statistical meaning can improve existing methods of phylogenetic reconstruction. To this end, our aim is to compute the distance of data points to algebraic varieties and to the stochastic part of these varieties. Computing these distances involves optimization by nonlinear programming algorithms. We use analytical methods to find some of these distances for quartet trees evolving under the Kimura 3-parameter or the Jukes-Cantor models. Numerical algebraic geometry and computational algebra also play a fundamental role in this paper. |
q-bio/0609051 | Dietrich Stauffer | Krzysztof Malarz and Dietrich Stauffer | Search for bottleneck effects in Penna ageing and Schulze language model | RevTeX4, 2 pages, 2 figures with 3 eps files | Adv. Complex Syst. 11 (2008) 165 | 10.1142/S0219525908001489 | null | q-bio.PE physics.soc-ph | null | No influence was seen when, in two models with memory effects, the
populations were drastically decreased after equilibrium was established and
then allowed to increase again.
| [
{
"created": "Thu, 28 Sep 2006 13:37:03 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Sep 2006 20:42:16 GMT",
"version": "v2"
}
] | 2008-03-16 | [
[
"Malarz",
"Krzysztof",
""
],
[
"Stauffer",
"Dietrich",
""
]
] | No influence was seen when, in two models with memory effects, the populations were drastically decreased after equilibrium was established and then allowed to increase again. |
1802.08683 | Henry Van Den Bedem | Dominik Budday, Sigrid Leyendecker and Henry van den Bedem | Kinematic Flexibility Analysis: Hydrogen Bonding Patterns Impart a
Spatial Hierarchy of Protein Motion | null | null | null | null | q-bio.BM cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Elastic network models (ENM) and constraint-based, topological rigidity
analysis are two distinct, coarse-grained approaches to study conformational
flexibility of macromolecules. In the two decades since their introduction,
both have contributed significantly to insights into protein molecular
mechanisms and function. However, despite a shared purpose of these approaches,
the topological nature of rigidity analysis, and thereby the absence of motion
modes, has impeded a direct comparison. Here, we present an alternative,
kinematic approach to rigidity analysis, which circumvents these drawbacks. We
introduce a novel protein hydrogen bond network spectral decomposition, which
provides an orthonormal basis for collective motions modulated by non-covalent
interactions, analogous to the eigenspectrum of normal modes, and decomposes
proteins into rigid clusters identical to those from topological rigidity. Our
kinematic flexibility analysis bridges topological rigidity theory and ENM, and
enables a detailed analysis of motion modes obtained from both approaches. Our
analysis reveals that collectivity of protein motions, reported by the Shannon
entropy, is significantly lower for rigidity theory versus normal mode
approaches. Strikingly, kinematic flexibility analysis suggests that the
hydrogen bonding network encodes a protein-fold specific, spatial hierarchy of
motions, which goes nearly undetected in ENM. This hierarchy reveals distinct
motion regimes that rationalize protein stiffness changes observed from
experiment and molecular dynamics simulations. A formal expression for changes
in free energy derived from the spectral decomposition indicates that motions
across nearly 40% of modes obey enthalpy-entropy compensation. Taken together,
our analysis suggests that hydrogen bond networks have evolved to modulate
protein structure and dynamics.
| [
{
"created": "Fri, 23 Feb 2018 06:08:13 GMT",
"version": "v1"
}
] | 2018-02-27 | [
[
"Budday",
"Dominik",
""
],
[
"Leyendecker",
"Sigrid",
""
],
[
"Bedem",
"Henry van den",
""
]
] | Elastic network models (ENM) and constraint-based, topological rigidity analysis are two distinct, coarse-grained approaches to study conformational flexibility of macromolecules. In the two decades since their introduction, both have contributed significantly to insights into protein molecular mechanisms and function. However, despite a shared purpose of these approaches, the topological nature of rigidity analysis, and thereby the absence of motion modes, has impeded a direct comparison. Here, we present an alternative, kinematic approach to rigidity analysis, which circumvents these drawbacks. We introduce a novel protein hydrogen bond network spectral decomposition, which provides an orthonormal basis for collective motions modulated by non-covalent interactions, analogous to the eigenspectrum of normal modes, and decomposes proteins into rigid clusters identical to those from topological rigidity. Our kinematic flexibility analysis bridges topological rigidity theory and ENM, and enables a detailed analysis of motion modes obtained from both approaches. Our analysis reveals that collectivity of protein motions, reported by the Shannon entropy, is significantly lower for rigidity theory versus normal mode approaches. Strikingly, kinematic flexibility analysis suggests that the hydrogen bonding network encodes a protein-fold specific, spatial hierarchy of motions, which goes nearly undetected in ENM. This hierarchy reveals distinct motion regimes that rationalize protein stiffness changes observed from experiment and molecular dynamics simulations. A formal expression for changes in free energy derived from the spectral decomposition indicates that motions across nearly 40% of modes obey enthalpy-entropy compensation. Taken together, our analysis suggests that hydrogen bond networks have evolved to modulate protein structure and dynamics. |
1802.10451 | Julien Modolo | Julien Modolo, Mahmoud Hassan, Alexandre Legros | Reconstruction of brain networks involved in magnetophosphene perception
using dense electroencephalography | 6 pages, 1 figure | BioEM2018 conference proceedings | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Characterizing functional brain networks in humans during magnetophosphene
perception. Dense electroencephalography (EEG, 128 channels) was performed in
N=3 volunteers during high-level (50 mT) magnetic field (MF) exposure.
Functional brain networks were reconstructed, at the cortical level from scalp
recordings, using the EEG source connectivity method. Magnetophosphene
perception appears to consistently activate the right inferior
occipito-temporal pathway. This study provides the very first neuroimaging
results characterizing magnetophosphene perception in humans. The use of
dense-EEG source connectivity is a promising approach in the field of
bioelectromagnetics.
| [
{
"created": "Wed, 28 Feb 2018 14:52:07 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Mar 2018 08:39:41 GMT",
"version": "v2"
}
] | 2018-03-02 | [
[
"Modolo",
"Julien",
""
],
[
"Hassan",
"Mahmoud",
""
],
[
"Legros",
"Alexandre",
""
]
] | Characterizing functional brain networks in humans during magnetophosphene perception. Dense electroencephalography (EEG, 128 channels) was performed in N=3 volunteers during high-level (50 mT) magnetic field (MF) exposure. Functional brain networks were reconstructed, at the cortical level from scalp recordings, using the EEG source connectivity method. Magnetophosphene perception appears to consistently activate the right inferior occipito-temporal pathway. This study provides the very first neuroimaging results characterizing magnetophosphene perception in humans. The use of dense-EEG source connectivity is a promising approach in the field of bioelectromagnetics. |
2204.12861 | Hendrik Richter | Hendrik Richter | Spectral dynamics of guided edge removals and identifying transient
amplifiers for death-Birth updating | null | null | null | null | q-bio.PE cs.SI math.CO | http://creativecommons.org/licenses/by/4.0/ | The paper deals with two interrelated topics, identifying transient
amplifiers in an iterative process and analyzing the process by its spectral
dynamics, which is the change in the graph spectra by edge manipulations.
Transient amplifiers are networks representing population structures which
shift the balance between natural selection and random drift. Thus, amplifiers
are highly relevant for understanding the relationships between spatial
structures and evolutionary dynamics. We study an iterative procedure to
identify transient amplifiers for death-Birth updating. The algorithm starts
with a regular input graph and iteratively removes edges until desired
structures are achieved. Thus, a sequence of candidate graphs is obtained. The
edge removals are guided by quantities derived from the sequence of candidate
graphs. Moreover, we are interested in the Laplacian spectra of the candidate
graphs and analyze the iterative process by its spectral dynamics. The results
show that although transient amplifiers for death-Birth updating are rare, a
substantial number of them can be obtained by the proposed procedure. The
graphs identified share structural properties and have some similarity to
dumbbell and barbell graphs. Also, the spectral dynamics possesses
characteristic features useful for deducing links between structural and
spectral properties and for distinguishing transient amplifiers among
evolutionary graphs in general.
| [
{
"created": "Wed, 27 Apr 2022 11:50:12 GMT",
"version": "v1"
}
] | 2022-04-28 | [
[
"Richter",
"Hendrik",
""
]
] | The paper deals with two interrelated topics, identifying transient amplifiers in an iterative process and analyzing the process by its spectral dynamics, which is the change in the graph spectra by edge manipulations. Transient amplifiers are networks representing population structures which shift the balance between natural selection and random drift. Thus, amplifiers are highly relevant for understanding the relationships between spatial structures and evolutionary dynamics. We study an iterative procedure to identify transient amplifiers for death-Birth updating. The algorithm starts with a regular input graph and iteratively removes edges until desired structures are achieved. Thus, a sequence of candidate graphs is obtained. The edge removals are guided by quantities derived from the sequence of candidate graphs. Moreover, we are interested in the Laplacian spectra of the candidate graphs and analyze the iterative process by its spectral dynamics. The results show that although transient amplifiers for death-Birth updating are rare, a substantial number of them can be obtained by the proposed procedure. The graphs identified share structural properties and have some similarity to dumbbell and barbell graphs. Also, the spectral dynamics possesses characteristic features useful for deducing links between structural and spectral properties and for distinguishing transient amplifiers among evolutionary graphs in general. |
1709.07319 | Michael Hayashi | Michael A.L. Hayashi and Marisa C. Eisenberg | Changing Burial Practices Explain Temporal Trends in the 2014 Ebola
Outbreak | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: The 2014 Ebola outbreak in West Africa was the largest on record,
resulting in over 25,000 total infections and 15,000 total deaths. Mathematical
modeling can be used to investigate the mechanisms driving transmission during
this outbreak -- in particular, burial practices appear to have been major
source of infections. Methodology/Principal Findings: We developed a
multi-stage model of Ebola virus transmission linked to a game-theoretic model
of population burial practice selection. We fit our model to cumulative
incidence and mortality data from Guinea, Liberia, and Sierra Leone from
January 2014 to March, 2016. The inclusion of behavior change substantially
improved best fit estimates and final size prediction compared to a reduced
model with fixed burials. Best fit trajectories suggest that the majority of
sanitary burial adoption occurred between July, 2014 and October, 2014.
However, these simulations also indicated that continued sanitary burial
practices waned following the resolution of the outbreak.
Conclusions/Significance: Surveillance data from the 2014 outbreak appears to
have a signal of changes in the dominant burial practices in all three
countries. Increased adoption of sanitary burials likely attenuated
transmission, but these changes occurred too late to prevent the explosive
growth of the outbreak during its early phase. For future outbreaks, explicitly
modeling behavior change and collecting data on transmission-related behaviors
may improve intervention planning and outbreak response.
| [
{
"created": "Wed, 20 Sep 2017 17:24:05 GMT",
"version": "v1"
}
] | 2017-09-22 | [
[
"Hayashi",
"Michael A. L.",
""
],
[
"Eisenberg",
"Marisa C.",
""
]
] | Background: The 2014 Ebola outbreak in West Africa was the largest on record, resulting in over 25,000 total infections and 15,000 total deaths. Mathematical modeling can be used to investigate the mechanisms driving transmission during this outbreak -- in particular, burial practices appear to have been a major source of infections. Methodology/Principal Findings: We developed a multi-stage model of Ebola virus transmission linked to a game-theoretic model of population burial practice selection. We fit our model to cumulative incidence and mortality data from Guinea, Liberia, and Sierra Leone from January 2014 to March, 2016. The inclusion of behavior change substantially improved best fit estimates and final size prediction compared to a reduced model with fixed burials. Best fit trajectories suggest that the majority of sanitary burial adoption occurred between July, 2014 and October, 2014. However, these simulations also indicated that continued sanitary burial practices waned following the resolution of the outbreak. Conclusions/Significance: Surveillance data from the 2014 outbreak appears to have a signal of changes in the dominant burial practices in all three countries. Increased adoption of sanitary burials likely attenuated transmission, but these changes occurred too late to prevent the explosive growth of the outbreak during its early phase. For future outbreaks, explicitly modeling behavior change and collecting data on transmission-related behaviors may improve intervention planning and outbreak response. |
2301.07171 | Kaichao Wu | Kaichao Wu, Beth Jelfs, Katrina Neville, John Q. Fang | fMRI-based Static and Dynamic Functional Connectivity Analysis for
Post-stroke Motor Dysfunction Patient: A Review | Review paper | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Functional magnetic resonance imaging (fMRI) has been widely utilized to
study the motor deficits and rehabilitation following stroke. In particular,
functional connectivity (FC) analyses with fMRI at rest can be employed to
reveal the neural connectivity rationale behind this post-stroke motor function
impairment and recovery. However, the methods and findings have not been
summarized in a review focusing on post-stroke functional connectivity
analysis. In this context, we broadly review the static functional connectivity
network analysis (SFC) and dynamic functional connectivity network analysis
(DFC) for post-stroke motor dysfunction patients, aiming to provide method
guides and the latest findings regarding post-stroke motor function recovery.
Specifically, a brief overview of the SFC and DFC methods for fMRI analysis is
provided, along with the preprocessing and denoising procedures that go into
these methods. Following that, the current status of research in functional
connectivity networks for post-stroke patients under these two views was
synthesized individually. Results show that SFC is the most frequent
post-stroke functional connectivity analysis method. The SFC findings
demonstrate that the stroke lesion reduces FC between motor areas, and the FC
increase positively correlates with functional recovery. Meanwhile, the current
DFC analysis in post-stroke has just been uncovered as the tip of the iceberg
of its prospect, and its exceptionally rapidly progressing development can be
expected.
| [
{
"created": "Thu, 15 Dec 2022 04:16:07 GMT",
"version": "v1"
}
] | 2023-01-19 | [
[
"Wu",
"Kaichao",
""
],
[
"Jelfs",
"Beth",
""
],
[
"Neville",
"Katrina",
""
],
[
"Fang",
"John Q.",
""
]
] | Functional magnetic resonance imaging (fMRI) has been widely utilized to study the motor deficits and rehabilitation following stroke. In particular, functional connectivity (FC) analyses with fMRI at rest can be employed to reveal the neural connectivity rationale behind this post-stroke motor function impairment and recovery. However, the methods and findings have not been summarized in a review focusing on post-stroke functional connectivity analysis. In this context, we broadly review the static functional connectivity network analysis (SFC) and dynamic functional connectivity network analysis (DFC) for post-stroke motor dysfunction patients, aiming to provide method guides and the latest findings regarding post-stroke motor function recovery. Specifically, a brief overview of the SFC and DFC methods for fMRI analysis is provided, along with the preprocessing and denoising procedures that go into these methods. Following that, the current status of research in functional connectivity networks for post-stroke patients under these two views was synthesized individually. Results show that SFC is the most frequent post-stroke functional connectivity analysis method. The SFC findings demonstrate that the stroke lesion reduces FC between motor areas, and the FC increase positively correlates with functional recovery. Meanwhile, the current DFC analysis in post-stroke has just been uncovered as the tip of the iceberg of its prospect, and its exceptionally rapidly progressing development can be expected. |
1901.07461 | Farshid Alambeigi | Mahsan Bakhtiarinejad, Farshid Alambeigi, Alireza Chamani, Mathias
Unberath, Harpal Khanuja, Mehran Armand | A Biomechanical Study on the Use of Curved Drilling Technique for
Treatment of Osteonecrosis of Femoral Head | Accepted for 2018 MICCAI Workshop on Computational Biomechanics for
Medicine XIII | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Osteonecrosis occurs due to the loss of blood supply to the bone, leading to
spontaneous death of the trabecular bone. Delayed treatment of the involved
patients results in collapse of the femoral head, which leads to a need for
total hip arthroplasty surgery. Core decompression, as the most popular
technique for treatment of the osteonecrosis, includes removal of the lesion
area by drilling a straight tunnel to the lesion, debriding the dead bone and
replacing it with bone substitutes. However, there are two drawbacks for this
treatment method. First, due to the rigidity of the instruments currently used
during core decompression, lesions cannot be completely removed and/or
excessive healthy bone may also be removed with the lesion. Second, the use of
bone substitutes, despite their biocompatibility and osteoconductivity, may not
provide sufficient mechanical strength and support for the bone. To address
these shortcomings, a novel robot-assisted curved core decompression (CCD)
technique is introduced to provide surgeons with direct access to the lesions
causing minimal damage to the healthy bone. In this study, with the aid of
finite element (FE) simulations, we investigate biomechanical performance of
core decompression using the curved drilling technique in the presence of
normal gait loading. In this regard, we compare the result of the CCD using
bone substitutes and flexible implants with other conventional core
decompression techniques. The study findings show that the maximum principal
stress occurring at the superior domain of the neck is smaller in the CCD
techniques (i.e. 52.847 MPa) compared to the other core decompression methods.
| [
{
"created": "Mon, 14 Jan 2019 05:15:35 GMT",
"version": "v1"
}
] | 2019-01-23 | [
[
"Bakhtiarinejad",
"Mahsan",
""
],
[
"Alambeigi",
"Farshid",
""
],
[
"Chamani",
"Alireza",
""
],
[
"Unberath",
"Mathias",
""
],
[
"Khanuja",
"Harpal",
""
],
[
"Armand",
"Mehran",
""
]
] | Osteonecrosis occurs due to the loss of blood supply to the bone, leading to spontaneous death of the trabecular bone. Delayed treatment of the involved patients results in collapse of the femoral head, which leads to a need for total hip arthroplasty surgery. Core decompression, as the most popular technique for treatment of the osteonecrosis, includes removal of the lesion area by drilling a straight tunnel to the lesion, debriding the dead bone and replacing it with bone substitutes. However, there are two drawbacks for this treatment method. First, due to the rigidity of the instruments currently used during core decompression, lesions cannot be completely removed and/or excessive healthy bone may also be removed with the lesion. Second, the use of bone substitutes, despite their biocompatibility and osteoconductivity, may not provide sufficient mechanical strength and support for the bone. To address these shortcomings, a novel robot-assisted curved core decompression (CCD) technique is introduced to provide surgeons with direct access to the lesions causing minimal damage to the healthy bone. In this study, with the aid of finite element (FE) simulations, we investigate biomechanical performance of core decompression using the curved drilling technique in the presence of normal gait loading. In this regard, we compare the result of the CCD using bone substitutes and flexible implants with other conventional core decompression techniques. The study findings show that the maximum principal stress occurring at the superior domain of the neck is smaller in the CCD techniques (i.e. 52.847 MPa) compared to the other core decompression methods. |
2311.00975 | William Poole | William Poole, Thomas E. Ouldridge, and Manoj Gopalkrishnan | Autonomous Learning of Generative Models with Chemical Reaction Network
Ensembles | null | null | null | null | q-bio.MN cs.ET cs.LG cs.NE cs.SY eess.SY physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Can a micron sized sack of interacting molecules autonomously learn an
internal model of a complex and fluctuating environment? We draw insights from
control theory, machine learning theory, chemical reaction network theory, and
statistical physics to develop a general architecture whereby a broad class of
chemical systems can autonomously learn complex distributions. Our construction
takes the form of a chemical implementation of machine learning's optimization
workhorse: gradient descent on the relative entropy cost function. We show how
this method can be applied to optimize any detailed balanced chemical reaction
network and that the construction is capable of using hidden units to learn
complex distributions. This result is then recast as a form of integral
feedback control. Finally, due to our use of an explicit physical model of
learning, we are able to derive thermodynamic costs and trade-offs associated
to this process.
| [
{
"created": "Thu, 2 Nov 2023 03:46:23 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Nov 2023 19:07:59 GMT",
"version": "v2"
}
] | 2023-11-08 | [
[
"Poole",
"William",
""
],
[
"Ouldridge",
"Thomas E.",
""
],
[
"Gopalkrishnan",
"Manoj",
""
]
] | Can a micron sized sack of interacting molecules autonomously learn an internal model of a complex and fluctuating environment? We draw insights from control theory, machine learning theory, chemical reaction network theory, and statistical physics to develop a general architecture whereby a broad class of chemical systems can autonomously learn complex distributions. Our construction takes the form of a chemical implementation of machine learning's optimization workhorse: gradient descent on the relative entropy cost function. We show how this method can be applied to optimize any detailed balanced chemical reaction network and that the construction is capable of using hidden units to learn complex distributions. This result is then recast as a form of integral feedback control. Finally, due to our use of an explicit physical model of learning, we are able to derive thermodynamic costs and trade-offs associated to this process. |
2312.03773 | Li Pan | Haoyue Wang, Li Pan, Bo Yang, Junqiang Jiang and Wenbin Li | A multi-layer refined network model for the identification of essential
proteins | null | null | null | null | q-bio.MN cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The identification of essential proteins in protein-protein interaction
networks (PINs) can help to discover drug targets and prevent disease. In order
to improve the accuracy of the identification of essential proteins,
researchers attempted to obtain a refined PIN by combining multiple biological
information to filter out some unreliable interactions in the PIN.
Unfortunately, such approaches drastically reduce the number of nodes in the
PIN after multiple refinements and result in a sparser PIN. It makes a
considerable portion of essential proteins unidentifiable. In this paper, we
propose a multi-layer refined network (MR-PIN) that addresses this problem.
Firstly, four refined networks are constructed by respectively integrating
different biological information into the static PIN to form a multi-layer
heterogeneous network. Then scores of proteins in each network layer are
calculated by the existing node ranking method, and the importance score of a
protein in the MR-PIN is evaluated in terms of the geometric mean of its scores
in all layers. Finally, all nodes are sorted by their importance scores to
determine their essentiality. To evaluate the effectiveness of the multi-layer
refined network model, we apply 16 node ranking methods on the MR-PIN, and
compare the results with those on the SPIN, DPIN and RDPIN. Then the predictive
performances of these ranking methods are validated in terms of the
identification number of essential protein at top100 - top600, sensitivity,
specificity, positive predictive value, negative predictive value, F-measure,
accuracy, Jackknife, ROCAUC and PRAUC. The experimental results show that the
MR-PIN is superior to the existing refined PINs in the identification accuracy
of essential proteins.
| [
{
"created": "Wed, 6 Dec 2023 00:11:08 GMT",
"version": "v1"
}
] | 2023-12-08 | [
[
"Wang",
"Haoyue",
""
],
[
"Pan",
"Li",
""
],
[
"Yang",
"Bo",
""
],
[
"Jiang",
"Junqiang",
""
],
[
"Li",
"Wenbin",
""
]
] | The identification of essential proteins in protein-protein interaction networks (PINs) can help to discover drug targets and prevent disease. In order to improve the accuracy of the identification of essential proteins, researchers attempted to obtain a refined PIN by combining multiple biological information to filter out some unreliable interactions in the PIN. Unfortunately, such approaches drastically reduce the number of nodes in the PIN after multiple refinements and result in a sparser PIN. It makes a considerable portion of essential proteins unidentifiable. In this paper, we propose a multi-layer refined network (MR-PIN) that addresses this problem. Firstly, four refined networks are constructed by respectively integrating different biological information into the static PIN to form a multi-layer heterogeneous network. Then scores of proteins in each network layer are calculated by the existing node ranking method, and the importance score of a protein in the MR-PIN is evaluated in terms of the geometric mean of its scores in all layers. Finally, all nodes are sorted by their importance scores to determine their essentiality. To evaluate the effectiveness of the multi-layer refined network model, we apply 16 node ranking methods on the MR-PIN, and compare the results with those on the SPIN, DPIN and RDPIN. Then the predictive performances of these ranking methods are validated in terms of the identification number of essential protein at top100 - top600, sensitivity, specificity, positive predictive value, negative predictive value, F-measure, accuracy, Jackknife, ROCAUC and PRAUC. The experimental results show that the MR-PIN is superior to the existing refined PINs in the identification accuracy of essential proteins. |
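The scoring rule stated in the record above (geometric mean of a protein's per-layer scores, then ranking by the combined score) can be sketched in a few lines. This is a minimal illustration, not code from the paper; the function names and toy score values are our assumptions:

```python
import math

def multilayer_score(layer_scores):
    """Combine one protein's ranking scores from all network layers
    via their geometric mean (the MR-PIN importance score)."""
    return math.prod(layer_scores) ** (1.0 / len(layer_scores))

def rank_proteins(scores_by_protein):
    """Sort proteins by combined importance score, highest first,
    to order them by predicted essentiality."""
    return sorted(scores_by_protein,
                  key=lambda p: multilayer_score(scores_by_protein[p]),
                  reverse=True)
```

The geometric mean rewards proteins that score consistently across all four refined layers, so a protein missing or weak in one layer is penalized more than under an arithmetic mean.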
2210.03745 | Dawid Rymarczyk | Dawid Rymarczyk, Daniel Dobrowolski, Tomasz Danel | ProGReST: Prototypical Graph Regression Soft Trees for Molecular
Property Prediction | Accepted to SDM2023 | null | null | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this work, we propose the novel Prototypical Graph Regression
Self-explainable Trees (ProGReST) model, which combines prototype learning,
soft decision trees, and Graph Neural Networks. In contrast to other works, our
model can be used to address various challenging tasks, including compound
property prediction. In ProGReST, the rationale is obtained along with
prediction due to the model's built-in interpretability. Additionally, we
introduce a new graph prototype projection to accelerate model training.
Finally, we evaluate ProGReST on a wide range of chemical datasets for
molecular property prediction and perform in-depth analysis with chemical
experts to evaluate obtained interpretations. Our method achieves competitive
results against state-of-the-art methods.
| [
{
"created": "Fri, 7 Oct 2022 10:21:24 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Dec 2022 11:21:02 GMT",
"version": "v2"
}
] | 2022-12-29 | [
[
"Rymarczyk",
"Dawid",
""
],
[
"Dobrowolski",
"Daniel",
""
],
[
"Danel",
"Tomasz",
""
]
] | In this work, we propose the novel Prototypical Graph Regression Self-explainable Trees (ProGReST) model, which combines prototype learning, soft decision trees, and Graph Neural Networks. In contrast to other works, our model can be used to address various challenging tasks, including compound property prediction. In ProGReST, the rationale is obtained along with prediction due to the model's built-in interpretability. Additionally, we introduce a new graph prototype projection to accelerate model training. Finally, we evaluate ProGReST on a wide range of chemical datasets for molecular property prediction and perform in-depth analysis with chemical experts to evaluate obtained interpretations. Our method achieves competitive results against state-of-the-art methods. |
1311.1345 | Marc-Oliver Gewaltig | Marc-Oliver Gewaltig | Self-sustained activity, bursts, and variability in recurrent networks | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is consensus in the current literature that stable states of
asynchronous irregular spiking activity require (i) large networks of 10 000 or
more neurons and (ii) external background activity or pacemaker neurons. Yet
already in 1963, Griffith showed that networks of simple threshold elements can
be persistently active at intermediate rates. Here, we extend Griffith's work
and demonstrate that sparse networks of integrate-and-fire neurons assume
stable states of self-sustained asynchronous and irregular firing without
external input or pacemaker neurons. These states can be robustly induced by a
brief pulse to a small fraction of the neurons, or by a short period of
irregular input, and last for several minutes. Self-sustained activity states
emerge when a small fraction of the synapses is strong enough to significantly
influence the firing probability of a neuron, consistent with the recently
proposed long-tailed distribution of synaptic weights. During self-sustained
activity, each neuron exhibits highly irregular firing patterns, similar to
experimentally observed activity. Moreover, the interspike interval
distribution reveals that neurons switch between discrete states of high and
low firing rates. We find that self-sustained activity states can exist even in
small networks of only a thousand neurons. We investigated networks up to 100
000 neurons. Finally, we discuss the implications of self-sustained activity
for learning, memory and signal propagation.
| [
{
"created": "Wed, 6 Nov 2013 10:58:04 GMT",
"version": "v1"
}
] | 2013-11-07 | [
[
"Gewaltig",
"Marc-Oliver",
""
]
] | There is consensus in the current literature that stable states of asynchronous irregular spiking activity require (i) large networks of 10 000 or more neurons and (ii) external background activity or pacemaker neurons. Yet already in 1963, Griffith showed that networks of simple threshold elements can be persistently active at intermediate rates. Here, we extend Griffith's work and demonstrate that sparse networks of integrate-and-fire neurons assume stable states of self-sustained asynchronous and irregular firing without external input or pacemaker neurons. These states can be robustly induced by a brief pulse to a small fraction of the neurons, or by a short period of irregular input, and last for several minutes. Self-sustained activity states emerge when a small fraction of the synapses is strong enough to significantly influence the firing probability of a neuron, consistent with the recently proposed long-tailed distribution of synaptic weights. During self-sustained activity, each neuron exhibits highly irregular firing patterns, similar to experimentally observed activity. Moreover, the interspike interval distribution reveals that neurons switch between discrete states of high and low firing rates. We find that self-sustained activity states can exist even in small networks of only a thousand neurons. We investigated networks up to 100 000 neurons. Finally, we discuss the implications of self-sustained activity for learning, memory and signal propagation. |
q-bio/0702030 | Tristan Ursell | Tristan Ursell, Rob Phillips, Jane' Kondev, Dan Reeves, Paul A.
Wiggins | The Role of Lipid Bilayer Mechanics in Mechanosensation | 38 pages, 14 figures, 81 references, accepted for publication in
"Mechanosensitivity in Cells and Tissues", Springer (2007) | null | null | null | q-bio.SC q-bio.QM | null | Mechanosensation is a key part of the sensory repertoire of a vast array of
different cells and organisms. The molecular dissection of the origins of
mechanosensation is rapidly advancing as a result of both structural and
functional studies. One intriguing mode of mechanosensation results from
tension in the membrane of the cell (or vesicle) of interest. The aim of this
review is to catalogue recent work that uses a mix of continuum and statistical
mechanics to explore the role of the lipid bilayer in the function of
mechanosensitive channels that respond to membrane tension. The role of bilayer
deformation will be explored in the context of the well known mechanosensitive
channel MscL. Additionally, we make suggestions for bridging gaps between our
current theoretical understanding and common experimental techniques.
| [
{
"created": "Tue, 13 Feb 2007 23:22:11 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Ursell",
"Tristan",
""
],
[
"Phillips",
"Rob",
""
],
[
"Kondev",
"Jane'",
""
],
[
"Reeves",
"Dan",
""
],
[
"Wiggins",
"Paul A.",
""
]
] | Mechanosensation is a key part of the sensory repertoire of a vast array of different cells and organisms. The molecular dissection of the origins of mechanosensation is rapidly advancing as a result of both structural and functional studies. One intriguing mode of mechanosensation results from tension in the membrane of the cell (or vesicle) of interest. The aim of this review is to catalogue recent work that uses a mix of continuum and statistical mechanics to explore the role of the lipid bilayer in the function of mechanosensitive channels that respond to membrane tension. The role of bilayer deformation will be explored in the context of the well known mechanosensitive channel MscL. Additionally, we make suggestions for bridging gaps between our current theoretical understanding and common experimental techniques. |
0805.1640 | Luciano da Fontoura Costa | Sebastian Ahnert and Luciano da Fontoura Costa | Connectivity and Dynamics of Neuronal Networks as Defined by the Shape
of Individual Neurons | 11 pages, 9 figures. A working manuscript | null | 10.1088/1367-2630/11/10/103053 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuronal networks constitute a special class of dynamical systems, as they
are formed by individual geometrical components, namely the neurons. In the
existing literature, relatively little attention has been given to the
influence of neuron shape on the overall connectivity and dynamics of the
emerging networks. The current work addresses this issue by considering
simplified neuronal shapes consisting of circular regions (soma/axons) with
spokes (dendrites). Networks are grown by placing these patterns randomly in
the 2D plane and establishing connections whenever a piece of dendrite falls
inside an axon. Several topological and dynamical properties of the resulting
graph are measured, including the degree distribution, clustering coefficients,
symmetry of connections, size of the largest connected component, as well as
three hierarchical measurements of the local topology. By varying the number of
processes of the individual basic patterns, we can quantify relationships
between the individual neuronal shape and the topological and dynamical
features of the networks. Integrate-and-fire dynamics on these networks is also
investigated with respect to transient activation from a source node,
indicating that long-range connections play an important role in the
propagation of avalanches.
| [
{
"created": "Mon, 12 May 2008 14:02:45 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Ahnert",
"Sebastian",
""
],
[
"Costa",
"Luciano da Fontoura",
""
]
] | Neuronal networks constitute a special class of dynamical systems, as they are formed by individual geometrical components, namely the neurons. In the existing literature, relatively little attention has been given to the influence of neuron shape on the overall connectivity and dynamics of the emerging networks. The current work addresses this issue by considering simplified neuronal shapes consisting of circular regions (soma/axons) with spokes (dendrites). Networks are grown by placing these patterns randomly in the 2D plane and establishing connections whenever a piece of dendrite falls inside an axon. Several topological and dynamical properties of the resulting graph are measured, including the degree distribution, clustering coefficients, symmetry of connections, size of the largest connected component, as well as three hierarchical measurements of the local topology. By varying the number of processes of the individual basic patterns, we can quantify relationships between the individual neuronal shape and the topological and dynamical features of the networks. Integrate-and-fire dynamics on these networks is also investigated with respect to transient activation from a source node, indicating that long-range connections play an important role in the propagation of avalanches. |
2102.08992 | Aubain Nzokem PhD | A. H. Nzokem | SIS Epidemic Model: Birth-and-Death Markov Chain Approach | 14 pages, 5 figures | International Journal of Statistics and Probability; Vol. 10, No.
4; July 2021 | 10.5539/ijsp.v10n4p10 | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We are interested in describing the infected size of the SIS Epidemic model
using Birth-Death Markov process. The Susceptible-Infected-Susceptible (SIS)
model is defined within a population of constant size $M$; the size is kept
constant by replacing each death with a newborn healthy individual. The life
span of each individual in the population is modelled by an exponential
distribution with parameter $\alpha$; and the disease spread within the
population is modelled by a Poisson process with a rate $\lambda_{I}$.
$\lambda_{I}=\beta I(1-\frac{I}{M}) $ is similar to the instantaneous rate in
the logistic population growth model. The analysis is focused on the disease
outbreak, where the reproduction number $R=\frac{\beta}{\alpha} $ is greater
than one. As methodology, we use both numerical and analytical approaches. The
analysis relies on the stationary distribution for Birth and Death Markov
process. The numerical approach creates sample path simulations in order to show
the infected size dynamics, and the relationship between infected size and $R$.
As $M$ becomes large, some stable statistical characteristics of the infected
size distribution can be deduced. And the infected size is shown analytically
to follow a normal distribution with mean $(1-\frac{1}{R}) M$ and Variance
$\frac{M}{R} $.
| [
{
"created": "Wed, 17 Feb 2021 19:37:30 GMT",
"version": "v1"
}
] | 2021-06-01 | [
[
"Nzokem",
"A. H.",
""
]
] | We are interested in describing the infected size of the SIS Epidemic model using Birth-Death Markov process. The Susceptible-Infected-Susceptible (SIS) model is defined within a population of constant size $M$; the size is kept constant by replacing each death with a newborn healthy individual. The life span of each individual in the population is modelled by an exponential distribution with parameter $\alpha$; and the disease spread within the population is modelled by a Poisson process with a rate $\lambda_{I}$. $\lambda_{I}=\beta I(1-\frac{I}{M}) $ is similar to the instantaneous rate in the logistic population growth model. The analysis is focused on the disease outbreak, where the reproduction number $R=\frac{\beta}{\alpha} $ is greater than one. As methodology, we use both numerical and analytical approaches. The analysis relies on the stationary distribution for Birth and Death Markov process. The numerical approach creates sample path simulations in order to show the infected size dynamics, and the relationship between infected size and $R$. As $M$ becomes large, some stable statistical characteristics of the infected size distribution can be deduced. And the infected size is shown analytically to follow a normal distribution with mean $(1-\frac{1}{R}) M$ and Variance $\frac{M}{R} $. |
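The birth-and-death chain described in this record is easy to simulate: the infected count $I$ jumps up at rate $\lambda_I = \beta I(1 - I/M)$ and down at rate $\alpha I$. The sketch below simulates the embedded jump chain of that process; the function name, parameters, and initial condition are our assumptions, not the paper's code. For $R = \beta/\alpha > 1$ the trajectory should fluctuate around the stated mean $(1 - 1/R)M$:

```python
import random

def simulate_sis(M, beta, alpha, steps, seed=0):
    """Simulate the embedded jump chain of the SIS birth-death
    process on the state space {0, ..., M}."""
    rng = random.Random(seed)
    I = M // 10  # initial number of infected (illustrative assumption)
    samples = []
    for _ in range(steps):
        birth = beta * I * (1 - I / M)  # infection rate lambda_I
        death = alpha * I               # death-and-replacement rate
        total = birth + death
        if total == 0:                  # absorbed at I = 0
            break
        # Next event is an infection with probability birth/total,
        # otherwise an infected individual dies and is replaced.
        if rng.random() < birth / total:
            I += 1
        else:
            I -= 1
        samples.append(I)
    return samples

# With M = 1000, beta = 2, alpha = 1 (so R = 2), the long-run average
# should sit near (1 - 1/R) * M = 500, with variance near M/R = 500.
```

After discarding an initial burn-in, the empirical mean and spread of the samples can be compared against the normal approximation derived in the abstract.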
2407.11720 | Steven Kelk | M. Frohn, N. Holtgrefe, L. van Iersel, M. Jones, and S. Kelk | Invariants for level-1 phylogenetic networks under the random walk
4-state Markov model | Submitted to journal. 24 pages | null | null | null | q-bio.PE math.AG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phylogenetic networks can represent evolutionary events that cannot be
described by phylogenetic trees, such as hybridization, introgression, and
lateral gene transfer. Studying phylogenetic networks under a statistical model
of DNA sequence evolution can aid the inference of phylogenetic networks. Most
notably Markov models like the Jukes-Cantor or Kimura-3 model can be employed
to infer a phylogenetic network using phylogenetic invariants. In this article
we determine all quadratic invariants for sunlet networks under the random walk
4-state Markov model, which includes the aforementioned models. Taking toric
fiber products of trees and sunlet networks, we obtain a new class of
invariants for level-1 phylogenetic networks under the same model. Furthermore,
we apply our results to the identifiability problem of a network parameter. In
particular, we prove that our new class of invariants of the studied model is
not sufficient to derive identifiability of quarnets (4-leaf networks).
Moreover, we provide an efficient method that is faster and more reliable than
the state-of-the-art in finding a significant number of invariants for many
level-1 phylogenetic networks.
| [
{
"created": "Tue, 16 Jul 2024 13:39:22 GMT",
"version": "v1"
}
] | 2024-07-17 | [
[
"Frohn",
"M.",
""
],
[
"Holtgrefe",
"N.",
""
],
[
"van Iersel",
"L.",
""
],
[
"Jones",
"M.",
""
],
[
"Kelk",
"S.",
""
]
] | Phylogenetic networks can represent evolutionary events that cannot be described by phylogenetic trees, such as hybridization, introgression, and lateral gene transfer. Studying phylogenetic networks under a statistical model of DNA sequence evolution can aid the inference of phylogenetic networks. Most notably, Markov models like the Jukes-Cantor or Kimura-3 model can be employed to infer a phylogenetic network using phylogenetic invariants. In this article we determine all quadratic invariants for sunlet networks under the random walk 4-state Markov model, which includes the aforementioned models. Taking toric fiber products of trees and sunlet networks, we obtain a new class of invariants for level-1 phylogenetic networks under the same model. Furthermore, we apply our results to the identifiability problem of a network parameter. In particular, we prove that our new class of invariants of the studied model is not sufficient to derive identifiability of quarnets (4-leaf networks). Moreover, we provide an efficient method that is faster and more reliable than the state-of-the-art in finding a significant number of invariants for many level-1 phylogenetic networks. |
2111.08742 | Francisco J. Cao Garcia | Rodrigo Crespo-Miguel, Javier Jarillo, Francisco J. Cao-Garc\'ia | Scaling of population resilience with dispersal length and habitat size | 24 pages | null | 10.1088/1742-5468/ac4982 | null | q-bio.PE cond-mat.stat-mech | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Environmental fluctuations can create population-depleted areas and even
extinct areas for the population. This effect is more severe in the presence of
the Allee effect (decreasing growth rate at low population densities).
Dispersal inside the habitat provides a rescue effect on population-depleted
areas, enhancing the population resilience to environmental fluctuations.
Habitat reduction decreases the effectiveness of the dispersal rescue
mechanism. We report here how the population resilience to environmental
fluctuations decreases when the dispersal length or the habitat size are
reduced. The resilience reduction is characterized by a decrease of the
extinction threshold for environmental fluctuations. The extinction threshold
is shown to scale with the ratio between the dispersal length and the scale of
environmental synchrony, i.e., it is the dispersal connection between
non-environmentally-correlated regions that provides resilience to
environmental fluctuations. Habitat reduction also decreases the resilience to
environmental fluctuations, when the habitat size is similar to or smaller than
the characteristic dispersal distances. The power laws of these scaling
behaviors are characterized here. Alternative scaling functions with spatial
scales of population synchrony are found to fit the simulations worse. These
results support the dispersal length as the critical scale for extinction
induced by habitat reduction.
| [
{
"created": "Tue, 16 Nov 2021 19:29:14 GMT",
"version": "v1"
}
] | 2022-02-23 | [
[
"Crespo-Miguel",
"Rodrigo",
""
],
[
"Jarillo",
"Javier",
""
],
[
"Cao-García",
"Francisco J.",
""
]
] | Environmental fluctuations can create population-depleted areas and even extinct areas for the population. This effect is more severe in the presence of the Allee effect (decreasing growth rate at low population densities). Dispersal inside the habitat provides a rescue effect on population-depleted areas, enhancing the population resilience to environmental fluctuations. Habitat reduction decreases the effectiveness of the dispersal rescue mechanism. We report here how the population resilience to environmental fluctuations decreases when the dispersal length or the habitat size are reduced. The resilience reduction is characterized by a decrease of the extinction threshold for environmental fluctuations. The extinction threshold is shown to scale with the ratio between the dispersal length and the scale of environmental synchrony, i.e., it is the dispersal connection between non-environmentally-correlated regions that provides resilience to environmental fluctuations. Habitat reduction also decreases the resilience to environmental fluctuations, when the habitat size is similar to or smaller than the characteristic dispersal distances. The power laws of these scaling behaviors are characterized here. Alternative scaling functions with spatial scales of population synchrony are found to fit the simulations worse. These results support the dispersal length as the critical scale for extinction induced by habitat reduction. |
1306.5048 | Ben Nolting | Ben C. Nolting, Travis M. Hinkelman, Chad E. Brassil, Brigitte
Tenhumberg | Composite random search strategies based on non-directional sensory cues | null | null | 10.1016/j.ecocom.2015.03.002 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many foraging animals find food using composite random search strategies,
which consist of intensive and extensive search modes. Models of composite
search can generate predictions about how optimal foragers should behave in
each search mode, and how they should determine when to switch between search
modes. Most of these models assume that foragers use resource encounters to
decide when to switch between search modes. Empirical observations indicate
that a variety of organisms use non-directional sensory cues to identify areas
that warrant intensive search. These cues are not precise enough to allow a
forager to directly orient itself to a resource, but can be used as a criterion
to determine the appropriate search mode. As a potential example, a forager
might use olfactory information, which could help it determine if an area is
worth searching carefully. We developed a model of composite search based on
non-directional sensory cues. With simulations, we compared the search
efficiencies of composite foragers that use resource encounters as their
mode-switching criterion with those that use non-directional sensory cues. We
identified optimal search patterns and mode-switching criteria on a variety of
resource distributions, characterized by different levels of resource
aggregation and density. On all resource distributions, foraging strategies
based on the non-directional sensory criterion were more efficient than those
based on the resource encounter criterion. Strategies based on the
non-directional sensory criterion were also more robust to changes in resource
distribution. Our results suggest that current assumptions about the role of
resource encounters in models of optimal composite search should be
re-examined. The search strategies predicted by our model can help bridge the
gap between random search theory and traditional patch-use foraging theory.
| [
{
"created": "Fri, 21 Jun 2013 05:21:41 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Feb 2015 21:16:16 GMT",
"version": "v2"
}
] | 2017-09-26 | [
[
"Nolting",
"Ben C.",
""
],
[
"Hinkelman",
"Travis M.",
""
],
[
"Brassil",
"Chad E.",
""
],
[
"Tenhumberg",
"Brigitte",
""
]
] | Many foraging animals find food using composite random search strategies, which consist of intensive and extensive search modes. Models of composite search can generate predictions about how optimal foragers should behave in each search mode, and how they should determine when to switch between search modes. Most of these models assume that foragers use resource encounters to decide when to switch between search modes. Empirical observations indicate that a variety of organisms use non-directional sensory cues to identify areas that warrant intensive search. These cues are not precise enough to allow a forager to directly orient itself to a resource, but can be used as a criterion to determine the appropriate search mode. As a potential example, a forager might use olfactory information, which could help it determine if an area is worth searching carefully. We developed a model of composite search based on non-directional sensory cues. With simulations, we compared the search efficiencies of composite foragers that use resource encounters as their mode-switching criterion with those that use non-directional sensory cues. We identified optimal search patterns and mode-switching criteria on a variety of resource distributions, characterized by different levels of resource aggregation and density. On all resource distributions, foraging strategies based on the non-directional sensory criterion were more efficient than those based on the resource encounter criterion. Strategies based on the non-directional sensory criterion were also more robust to changes in resource distribution. Our results suggest that current assumptions about the role of resource encounters in models of optimal composite search should be re-examined. The search strategies predicted by our model can help bridge the gap between random search theory and traditional patch-use foraging theory. |
1806.10161 | Yarden Katz | Yarden Katz, Michael Springer, Walter Fontana | Embodying probabilistic inference in biochemical circuits | 11 figures | null | null | null | q-bio.MN q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Probabilistic inference provides a language for describing how organisms may
learn from and adapt to their environment. The computations needed to implement
probabilistic inference often require specific representations, akin to having
the suitable data structures for implementing certain algorithms in computer
programming. Yet it is unclear how such representations can be instantiated in
the stochastic, parallel-running biochemical machinery found in cells (such as
single-celled organisms). Here, we show how representations for supporting
inference in Markov models can be embodied in cellular circuits, by combining a
concentration-dependent scheme for encoding probabilities with a mechanism for
directional counting. We show how the logic of protein production and
degradation constrains the computation we set out to implement. We argue that
this process by which an abstract computation is shaped by its biochemical
realization strikes a compromise between "rationalistic" information-processing
perspectives and alternative approaches that emphasize embodiment.
| [
{
"created": "Tue, 26 Jun 2018 18:28:30 GMT",
"version": "v1"
}
] | 2018-06-28 | [
[
"Katz",
"Yarden",
""
],
[
"Springer",
"Michael",
""
],
[
"Fontana",
"Walter",
""
]
] | Probabilistic inference provides a language for describing how organisms may learn from and adapt to their environment. The computations needed to implement probabilistic inference often require specific representations, akin to having the suitable data structures for implementing certain algorithms in computer programming. Yet it is unclear how such representations can be instantiated in the stochastic, parallel-running biochemical machinery found in cells (such as single-celled organisms). Here, we show how representations for supporting inference in Markov models can be embodied in cellular circuits, by combining a concentration-dependent scheme for encoding probabilities with a mechanism for directional counting. We show how the logic of protein production and degradation constrains the computation we set out to implement. We argue that this process by which an abstract computation is shaped by its biochemical realization strikes a compromise between "rationalistic" information-processing perspectives and alternative approaches that emphasize embodiment. |
1305.2249 | David Basanta | David Basanta and Alexander R. A. Anderson | Exploiting ecological principles to better understand cancer progression
and treatment | 6 figures, 1 example | null | null | null | q-bio.TO q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A small but growing number of people are finding interesting parallels
between ecosystems as studied by ecologists (think of a savanna or the Amazon
rain forest or a coral reef) and tumours [1-3]. The idea of viewing cancer from an
ecological perspective has many implications but fundamentally, it means that
we should not see cancer just as a group of mutated cells. A more useful
definition of cancer is to consider it a disruption in the complex balance of
many interacting cellular and microenvironmental elements in a specific organ.
This perspective means that organs undergoing carcinogenesis should be seen as
sophisticated ecosystems in homeostasis that cancer cells can disrupt. It also
makes cancer seem even more complex but may ultimately provide insights that
make it more treatable. Here we discuss how ecological principles can be used
to better understand cancer progression and treatment, using several
mathematical and computational models to illustrate our argument.
| [
{
"created": "Fri, 10 May 2013 02:36:26 GMT",
"version": "v1"
}
] | 2013-05-13 | [
[
"Basanta",
"David",
""
],
[
"Anderson",
"Alexander R. A.",
""
]
] | A small but growing number of people are finding interesting parallels between ecosystems as studied by ecologists (think of a savanna or the Amazon rain forest or a coral reef) and tumours [1-3]. The idea of viewing cancer from an ecological perspective has many implications but fundamentally, it means that we should not see cancer just as a group of mutated cells. A more useful definition of cancer is to consider it a disruption in the complex balance of many interacting cellular and microenvironmental elements in a specific organ. This perspective means that organs undergoing carcinogenesis should be seen as sophisticated ecosystems in homeostasis that cancer cells can disrupt. It also makes cancer seem even more complex but may ultimately provide insights that make it more treatable. Here we discuss how ecological principles can be used to better understand cancer progression and treatment, using several mathematical and computational models to illustrate our argument. |
2406.00873 | Pedro Ballester | Qianrong Guo, Saiveth Hernandez-Hernandez, and Pedro J Ballester | Scaffold Splits Overestimate Virtual Screening Performance | null | null | null | null | q-bio.QM cs.AI cs.CE cs.LG q-bio.BM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Virtual Screening (VS) of vast compound libraries guided by Artificial
Intelligence (AI) models is a highly productive approach to early drug
discovery. Data splitting is crucial for better benchmarking of such AI models.
Traditional random data splits produce similar molecules between training and
test sets, conflicting with the reality of VS libraries which mostly contain
structurally distinct compounds. Scaffold split, grouping molecules by shared
core structure, is widely considered to reflect this real-world scenario.
However, here we show that the scaffold split also overestimates VS
performance. The reason is that molecules with different chemical scaffolds are
often similar, which hence introduces unrealistically high similarities between
training molecules and test molecules following a scaffold split. Our study
examined three representative AI models on 60 NCI-60 datasets, each with
approximately 30,000 to 50,000 molecules tested on a different cancer cell
line. Each dataset was split with three methods: scaffold, Butina clustering
and the more accurate Uniform Manifold Approximation and Projection (UMAP)
clustering. Regardless of the model, performance is much worse with UMAP
splits, based on the results of the 2100 models trained and evaluated for each
algorithm and split. These robust results demonstrate the need for more
realistic data splits to tune, compare, and select models for VS. For the same
reason, avoiding the scaffold split is also recommended for other molecular
property prediction problems. The code to reproduce these results is available
at https://github.com/ScaffoldSplitsOverestimateVS
| [
{
"created": "Sun, 2 Jun 2024 21:40:13 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Jun 2024 12:12:23 GMT",
"version": "v2"
}
] | 2024-07-02 | [
[
"Guo",
"Qianrong",
""
],
[
"Hernandez-Hernandez",
"Saiveth",
""
],
[
"Ballester",
"Pedro J",
""
]
] | Virtual Screening (VS) of vast compound libraries guided by Artificial Intelligence (AI) models is a highly productive approach to early drug discovery. Data splitting is crucial for better benchmarking of such AI models. Traditional random data splits produce similar molecules between training and test sets, conflicting with the reality of VS libraries which mostly contain structurally distinct compounds. Scaffold split, grouping molecules by shared core structure, is widely considered to reflect this real-world scenario. However, here we show that the scaffold split also overestimates VS performance. The reason is that molecules with different chemical scaffolds are often similar, which hence introduces unrealistically high similarities between training molecules and test molecules following a scaffold split. Our study examined three representative AI models on 60 NCI-60 datasets, each with approximately 30,000 to 50,000 molecules tested on a different cancer cell line. Each dataset was split with three methods: scaffold, Butina clustering and the more accurate Uniform Manifold Approximation and Projection (UMAP) clustering. Regardless of the model, performance is much worse with UMAP splits, based on the results of the 2100 models trained and evaluated for each algorithm and split. These robust results demonstrate the need for more realistic data splits to tune, compare, and select models for VS. For the same reason, avoiding the scaffold split is also recommended for other molecular property prediction problems. The code to reproduce these results is available at https://github.com/ScaffoldSplitsOverestimateVS |
2402.13260 | Gerardo F. Goya | Gerardo F. Goya, Vittoria Raffa | Magnetic Nanoparticles for Neural Engineering | 35 pages; 6 figures; Chapter 22 of Magnetic Nanoparticles: From
Fabrication to Clinical Applications | Thanh, Nguyen TK (ed.). Clinical applications of magnetic
nanoparticles: From Fabrication to Clinical Applications. CRC press, 2018 | 10.1201/b11760 | null | q-bio.NC physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Magnetic nanoparticles (MNPs) are the foundation of several new strategies
for neural repair and neurological therapies. The fact that a remote force can
act on MNPs at the cytoplasmic space constitutes the essence of many new
neurotherapeutic concepts. MNPs with a predesigned physicochemical
characteristic can interact with external magnetic fields to apply mechanical
forces in definite areas of the cell to modulate cellular behaviour. Magnetic
actuation to direct the outgrowth of neurons after nerve injury has already
demonstrated the therapeutic potential for neural repair. When these magnetic
cores are functionalized with molecules such as nerve growth factors or
neuroprotective molecules, multifunctional devices can be developed. This
chapter will review some of these new nanotechnology-based solutions for
neurological diseases, specifically those based on the use of engineered MNPs
used for neuroprotection and neuroregeneration. These include the use of MNPs
as magnetic actuators to guide neural cells, modulate intracellular transport
and stimulate axonal growth after nerve injury.
| [
{
"created": "Mon, 5 Feb 2024 08:38:41 GMT",
"version": "v1"
}
] | 2024-02-22 | [
[
"Goya",
"Gerardo F.",
""
],
[
"Raffa",
"Vittoria",
""
]
] | Magnetic nanoparticles (MNPs) are the foundation of several new strategies for neural repair and neurological therapies. The fact that a remote force can act on MNPs at the cytoplasmic space constitutes the essence of many new neurotherapeutic concepts. MNPs with a predesigned physicochemical characteristic can interact with external magnetic fields to apply mechanical forces in definite areas of the cell to modulate cellular behaviour. Magnetic actuation to direct the outgrowth of neurons after nerve injury has already demonstrated the therapeutic potential for neural repair. When these magnetic cores are functionalized with molecules such as nerve growth factors or neuroprotective molecules, multifunctional devices can be developed. This chapter will review some of these new nanotechnology-based solutions for neurological diseases, specifically those based on the use of engineered MNPs used for neuroprotection and neuroregeneration. These include the use of MNPs as magnetic actuators to guide neural cells, modulate intracellular transport and stimulate axonal growth after nerve injury. |
1011.2079 | Jose A. Cuesta | Jose A. Cuesta | Huge progeny production during the transient of a quasi-species model of
viral infection, reproduction and mutation | 9 pages, no figures, uses Elsevier elsarticle class | Mathematical and Computer Modelling 54, 1676-1681 (2011) | 10.1016/j.mcm.2010.11.055 | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Eigen's quasi-species model describes viruses as ensembles of different
mutants of a high fitness "master" genotype. Mutants are assumed to have lower
fitness than the master type, yet they coexist with it forming the
quasi-species. When the mutation rate is sufficiently high, the master type no
longer survives and gets replaced by a wide range of mutant types, thus
destroying the quasi-species. It is the so-called "error catastrophe". But
natural selection acts on phenotypes, not genotypes, and huge amounts of
genotypes yield the same phenotype. An important consequence of this is the
appearance of beneficial mutations which increase the fitness of mutants. A
model has been recently proposed to describe quasi-species in the presence of
beneficial mutations. This model lacks the error catastrophe of Eigen's model
and predicts a steady state in which the viral population grows exponentially.
Extinction can only occur if the infectivity of the quasi-species is so low
that this exponential is negative. In this work I investigate the transient of
this model when infection is started from a small amount of low fitness
virions. I prove that, beyond an initial regime where viral population
decreases (and can go extinct), the growth of the population is
super-exponential. Hence this population quickly becomes so huge that selection
due to lack of host cells to be infected begins to act before the steady state
is reached. This result suggests that viral infection may become widespread
before the
virus has developed its optimal form.
| [
{
"created": "Tue, 9 Nov 2010 13:37:24 GMT",
"version": "v1"
}
] | 2012-02-02 | [
[
"Cuesta",
"Jose A.",
""
]
] | Eigen's quasi-species model describes viruses as ensembles of different mutants of a high fitness "master" genotype. Mutants are assumed to have lower fitness than the master type, yet they coexist with it forming the quasi-species. When the mutation rate is sufficiently high, the master type no longer survives and gets replaced by a wide range of mutant types, thus destroying the quasi-species. It is the so-called "error catastrophe". But natural selection acts on phenotypes, not genotypes, and huge amounts of genotypes yield the same phenotype. An important consequence of this is the appearance of beneficial mutations which increase the fitness of mutants. A model has been recently proposed to describe quasi-species in the presence of beneficial mutations. This model lacks the error catastrophe of Eigen's model and predicts a steady state in which the viral population grows exponentially. Extinction can only occur if the infectivity of the quasi-species is so low that this exponential is negative. In this work I investigate the transient of this model when infection is started from a small amount of low fitness virions. I prove that, beyond an initial regime where viral population decreases (and can go extinct), the growth of the population is super-exponential. Hence this population quickly becomes so huge that selection due to lack of host cells to be infected begins to act before the steady state is reached. This result suggests that viral infection may become widespread before the virus has developed its optimal form. |
q-bio/0506034 | Francis Edward Su | Claus-Jochen Haake, Akemi Kashiwada, Francis Edward Su | The Shapley Value of Phylogenetic Trees | References added, and a section (calculating the Shapley value of a
tree game from its subtrees) was removed for length reasons (request of
referee) and may appear in another paper. 16 pages; related work at
http://www.math.hmc.edu/~su/papers.html. Journal of Mathematical Biology, to
appear. The original article is available at http://www.springerlink.com | J. Mathematical Biology 56 (2008), 479--497 | 10.1007/s00285-007-0126-2 | null | q-bio.QM cs.GT math.CO q-bio.PE | null | Every weighted tree corresponds naturally to a cooperative game that we call
a "tree game"; it assigns to each subset of leaves the sum of the weights of
the minimal subtree spanned by those leaves. In the context of phylogenetic
trees, the leaves are species and this assignment captures the diversity
present in the coalition of species considered. We consider the Shapley value
of tree games and suggest a biological interpretation. We determine the linear
transformation M that shows the dependence of the Shapley value on the edge
weights of the tree, and we also compute a null space basis of M. Both depend
on the "split counts" of the tree. Finally, we characterize the Shapley value
on tree games by four axioms, a counterpart to Shapley's original theorem on
the larger class of cooperative games.
| [
{
"created": "Wed, 22 Jun 2005 15:47:33 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Aug 2007 00:34:31 GMT",
"version": "v2"
}
] | 2009-09-02 | [
[
"Haake",
"Claus-Jochen",
""
],
[
"Kashiwada",
"Akemi",
""
],
[
"Su",
"Francis Edward",
""
]
] | Every weighted tree corresponds naturally to a cooperative game that we call a "tree game"; it assigns to each subset of leaves the sum of the weights of the minimal subtree spanned by those leaves. In the context of phylogenetic trees, the leaves are species and this assignment captures the diversity present in the coalition of species considered. We consider the Shapley value of tree games and suggest a biological interpretation. We determine the linear transformation M that shows the dependence of the Shapley value on the edge weights of the tree, and we also compute a null space basis of M. Both depend on the "split counts" of the tree. Finally, we characterize the Shapley value on tree games by four axioms, a counterpart to Shapley's original theorem on the larger class of cooperative games. |
1510.01195 | Richard A Neher | Richard A. Neher, Trevor Bedford, Rodney S. Daniels, Colin A. Russell,
and Boris I. Shraiman | Prediction, dynamics, and visualization of antigenic phenotypes of
seasonal influenza viruses | visualization available at http://HI.nextflu.org | null | 10.1073/pnas.1525578113 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human seasonal influenza viruses evolve rapidly, enabling the virus
population to evade immunity and re-infect previously infected individuals.
Antigenic properties are largely determined by the surface glycoprotein
hemagglutinin (HA) and amino acid substitutions at exposed epitope sites in HA
mediate loss of recognition by antibodies. Here, we show that antigenic
differences measured through serological assay data are well described by a sum
of antigenic changes along the path connecting viruses in a phylogenetic tree.
This mapping onto the tree allows prediction of antigenicity from HA sequence
data alone. The mapping can further be used to make predictions about the
makeup of the future seasonal influenza virus population, and we compare
predictions between models with serological and sequence data. To make timely
model output readily available, we developed a web browser based application
that visualizes antigenic data on a continuously updated phylogeny.
| [
{
"created": "Mon, 5 Oct 2015 15:50:28 GMT",
"version": "v1"
}
] | 2016-04-27 | [
[
"Neher",
"Richard A.",
""
],
[
"Bedford",
"Trevor",
""
],
[
"Daniels",
"Rodney S.",
""
],
[
"Russell",
"Colin A.",
""
],
[
"Shraiman",
"Boris I.",
""
]
] | Human seasonal influenza viruses evolve rapidly, enabling the virus population to evade immunity and re-infect previously infected individuals. Antigenic properties are largely determined by the surface glycoprotein hemagglutinin (HA) and amino acid substitutions at exposed epitope sites in HA mediate loss of recognition by antibodies. Here, we show that antigenic differences measured through serological assay data are well described by a sum of antigenic changes along the path connecting viruses in a phylogenetic tree. This mapping onto the tree allows prediction of antigenicity from HA sequence data alone. The mapping can further be used to make predictions about the makeup of the future seasonal influenza virus population, and we compare predictions between models with serological and sequence data. To make timely model output readily available, we developed a web browser based application that visualizes antigenic data on a continuously updated phylogeny. |
q-bio/0309015 | Michael Desai | Michael M. Desai, David R. Nelson | A Quasispecies on a Moving Oasis | 12 pages, 5 figures | null | null | null | q-bio.PE cond-mat.soft | null | A population evolving in an inhomogeneous environment will adapt differently
to different regions. We study the conditions under which such a population can
maintain adaptations to a particular region when that region is not stationary,
but can move. In particular, we study a quasispecies living near a favorable
patch ("oasis") in the middle of a large "desert." The population has two
genetic states, one of which conveys a relative advantage while in the
oasis at the cost of a disadvantage in the desert. We consider the population
dynamics when the oasis is moving, or equivalently some form of "wind" is
blowing the population away from the oasis. We find that the ratio of the two
types of individuals exhibits sharp transitions at particular oasis velocities.
We calculate an extinction velocity, and a switching velocity above which the
dominance switches from the oasis-adapted genotype to the desert-adapted one.
This switching velocity is analogous to the quasispecies mutational error
threshold. Above this velocity, the population cannot maintain adaptations to
the properties of the oasis.
| [
{
"created": "Fri, 26 Sep 2003 17:23:31 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Desai",
"Michael M.",
""
],
[
"Nelson",
"David R.",
""
]
] | A population evolving in an inhomogeneous environment will adapt differently to different regions. We study the conditions under which such a population can maintain adaptations to a particular region when that region is not stationary, but can move. In particular, we study a quasispecies living near a favorable patch ("oasis") in the middle of a large "desert." The population has two genetic states, one of which conveys a relative advantage while in the oasis at the cost of a disadvantage in the desert. We consider the population dynamics when the oasis is moving, or equivalently some form of "wind" is blowing the population away from the oasis. We find that the ratio of the two types of individuals exhibits sharp transitions at particular oasis velocities. We calculate an extinction velocity, and a switching velocity above which the dominance switches from the oasis-adapted genotype to the desert-adapted one. This switching velocity is analogous to the quasispecies mutational error threshold. Above this velocity, the population cannot maintain adaptations to the properties of the oasis. |
2205.07258 | Shaoli Wang | Shaoli Wang, Tengfei Wang, Ya-nen Qi, Fei Xu | Backward bifurcation, basic reinfection number and robustness of a SEIRE
epidemic model with reinfection | 19 pages, 7 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-sa/4.0/ | Recent evidence shows that individuals who recovered from COVID-19 can be
reinfected. However, this phenomenon has rarely been studied using mathematical
models. In this paper, we propose a SEIRE epidemic model to describe the spread
of the epidemic with reinfection. We obtain the important thresholds $R_0$ (the
basic reproduction number) and $R_c$ (a threshold less than one). Our
investigations show that when $R_0 > 1$, the system has an endemic equilibrium,
which is globally asymptotically stable. When $R_c < R_0 < 1$, the epidemic
system exhibits bistable dynamics. That is, the system has backward bifurcation
and the disease cannot be eradicated. In order to eradicate the disease, we
must ensure that the basic reproduction number $R_0$ is less than $R_c$. The
basic reinfection number is obtained to measure the reinfection force, which
turns out to be a new tipping point for disease dynamics. We also give
a definition of robustness, a new concept to measure the difficulty of completely
eliminating the disease for a bistable epidemic system. Numerical simulations
are carried out to verify the conclusions.
| [
{
"created": "Sun, 15 May 2022 10:56:00 GMT",
"version": "v1"
}
] | 2022-05-17 | [
[
"Wang",
"Shaoli",
""
],
[
"Wang",
"Tengfei",
""
],
[
"Qi",
"Ya-nen",
""
],
[
"Xu",
"Fei",
""
]
] | Recent evidence shows that individuals who recovered from COVID-19 can be reinfected. However, this phenomenon has rarely been studied using mathematical models. In this paper, we propose a SEIRE epidemic model to describe the spread of the epidemic with reinfection. We obtain the important thresholds $R_0$ (the basic reproduction number) and $R_c$ (a threshold less than one). Our investigations show that when $R_0 > 1$, the system has an endemic equilibrium, which is globally asymptotically stable. When $R_c < R_0 < 1$, the epidemic system exhibits bistable dynamics. That is, the system has backward bifurcation and the disease cannot be eradicated. In order to eradicate the disease, we must ensure that the basic reproduction number $R_0$ is less than $R_c$. The basic reinfection number is obtained to measure the reinfection force, which turns out to be a new tipping point for disease dynamics. We also give a definition of robustness, a new concept to measure the difficulty of completely eliminating the disease for a bistable epidemic system. Numerical simulations are carried out to verify the conclusions. |
2208.11963 | Mantegazza Massimo | Massimo Mantegazza (IPMC), Vania Broccoli | SCN1A/NaV1.1 channelopathies: Mechanisms in expression systems,
animal models, and human iPSC models | null | Epilepsia, Wiley, 2019, 60 (S3) | 10.1111/epi.14700 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pathogenic SCN1A/NaV1.1 mutations cause well defined epilepsies, including
Genetic Epilepsy with Febrile Seizures Plus (GEFS+) and the severe epileptic
encephalopathy Dravet syndrome. In addition, they cause a severe form of
migraine with aura, Familial Hemiplegic Migraine. Moreover, SCN1A/NaV1.1
variants have been inferred as risk factors in other types of epilepsy. We
review here the advancements obtained studying pathological mechanisms of
SCN1A/NaV1.1 mutations with experimental systems. We present results gained
with in vitro expression systems, gene targeted animal models and the iPSC
technology, highlighting advantages, limits and pitfalls for each of these
systems.
Overall, the results obtained in the last two decades confirm that the
initial pathological mechanism of epileptogenic SCN1A/NaV1.1 mutations is
loss-of-function of NaV1.1 leading to hypoexcitability of at least some types
of GABAergic neurons (including cortical and hippocampal parvalbumin- and
somatostatin-positive ones). Conversely, more limited results point to NaV1.1
gain-of-function for FHM mutations. Behind these relatively simple pathological
mechanisms, an unexpected complexity has been observed, in part generated by
technical issues in experimental studies and in part related to intrinsically
complex pathophysiological responses and remodeling, which yet remain to be
fully disentangled.
| [
{
"created": "Thu, 25 Aug 2022 09:40:54 GMT",
"version": "v1"
}
] | 2022-08-26 | [
[
"Mantegazza",
"Massimo",
"",
"IPMC"
],
[
"Broccoli",
"Vania",
""
]
] | Pathogenic SCN1A/NaV1.1 mutations cause well defined epilepsies, including Genetic Epilepsy with Febrile Seizures Plus (GEFS+) and the severe epileptic encephalopathy Dravet syndrome. In addition, they cause a severe form of migraine with aura, Familial Hemiplegic Migraine. Moreover, SCN1A/NaV1.1 variants have been inferred as risk factors in other types of epilepsy. We review here the advancements obtained studying pathological mechanisms of SCN1A/NaV1.1 mutations with experimental systems. We present results gained with in vitro expression systems, gene targeted animal models and the iPSC technology, highlighting advantages, limits and pitfalls for each of these systems. Overall, the results obtained in the last two decades confirm that the initial pathological mechanism of epileptogenic SCN1A/NaV1.1 mutations is loss-of-function of NaV1.1 leading to hypoexcitability of at least some types of GABAergic neurons (including cortical and hippocampal parvalbumin- and somatostatin-positive ones). Conversely, more limited results point to NaV1.1 gain-of-function for FHM mutations. Behind these relatively simple pathological mechanisms, an unexpected complexity has been observed, in part generated by technical issues in experimental studies and in part related to intrinsically complex pathophysiological responses and remodeling, which yet remain to be fully disentangled. |
0712.3773 | William Hlavacek | Jin Yang, Michael I. Monine, James R. Faeder and William S. Hlavacek | Kinetic Monte Carlo Method for Rule-based Modeling of Biochemical
Networks | 18 pages, 5 figures | Phys. Rev. E, 78:31910, 2008 | 10.1103/PhysRevE.78.031910 | LA-UR-07-8103 | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a kinetic Monte Carlo method for simulating chemical
transformations specified by reaction rules, which can be viewed as generators
of chemical reactions, or equivalently, definitions of reaction classes. A rule
identifies the molecular components involved in a transformation, how these
components change, conditions that affect whether a transformation occurs, and
a rate law. The computational cost of the method, unlike conventional
simulation approaches, is independent of the number of possible reactions,
which need not be specified in advance or explicitly generated in a simulation.
To demonstrate the method, we apply it to study the kinetics of multivalent
ligand-receptor interactions. We expect the method will be useful for studying
cellular signaling systems and other physical systems involving aggregation
phenomena.
| [
{
"created": "Fri, 21 Dec 2007 18:46:39 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Apr 2008 18:10:31 GMT",
"version": "v2"
},
{
"created": "Sun, 29 Jun 2008 03:49:51 GMT",
"version": "v3"
},
{
"created": "Fri, 22 Aug 2008 05:51:33 GMT",
"version": "v4"
}
] | 2010-07-09 | [
[
"Yang",
"Jin",
""
],
[
"Monine",
"Michael I.",
""
],
[
"Faeder",
"James R.",
""
],
[
"Hlavacek",
"William S.",
""
]
] | We present a kinetic Monte Carlo method for simulating chemical transformations specified by reaction rules, which can be viewed as generators of chemical reactions, or equivalently, definitions of reaction classes. A rule identifies the molecular components involved in a transformation, how these components change, conditions that affect whether a transformation occurs, and a rate law. The computational cost of the method, unlike conventional simulation approaches, is independent of the number of possible reactions, which need not be specified in advance or explicitly generated in a simulation. To demonstrate the method, we apply it to study the kinetics of multivalent ligand-receptor interactions. We expect the method will be useful for studying cellular signaling systems and other physical systems involving aggregation phenomena. |
1511.02545 | Anna McGann | Christopher N Angstmann, Bruce I Henry, Anna V McGann | A Fractional-Order Infectivity SIR Model | 16 pages, no figures | Physica A: Statistical Mechanics and its Applications, Volume 452,
15 June 2016, Pages 86-93 | 10.1016/j.physa.2016.02.029 | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fractional-order SIR models have become increasingly popular in the
literature in recent years; however, unlike the standard SIR model, they often
lack a derivation from an underlying stochastic process. Here we derive a
fractional-order infectivity SIR model from a stochastic process that
incorporates a time-since-infection dependence on the infectivity of
individuals. The fractional derivative appears in the generalised master
equations of a continuous time random walk through SIR compartments, with a
power-law function in the infectivity. We show that this model can also be
formulated as an infection-age structured Kermack-McKendrick
integro-differential SIR model. Under the appropriate limit the fractional
infectivity model reduces to the standard ordinary differential equation SIR
model.
| [
{
"created": "Mon, 9 Nov 2015 01:05:22 GMT",
"version": "v1"
}
] | 2016-03-31 | [
[
"Angstmann",
"Christopher N",
""
],
[
"Henry",
"Bruce I",
""
],
[
"McGann",
"Anna V",
""
]
] | Fractional-order SIR models have become increasingly popular in the literature in recent years; however, unlike the standard SIR model, they often lack a derivation from an underlying stochastic process. Here we derive a fractional-order infectivity SIR model from a stochastic process that incorporates a time-since-infection dependence on the infectivity of individuals. The fractional derivative appears in the generalised master equations of a continuous time random walk through SIR compartments, with a power-law function in the infectivity. We show that this model can also be formulated as an infection-age structured Kermack-McKendrick integro-differential SIR model. Under the appropriate limit the fractional infectivity model reduces to the standard ordinary differential equation SIR model. |
2310.03411 | Rolf Bader | Rolf Bader | Modeling Temporal Lobe Epilepsy during Music Large-Scale Form Perception
using the Impulse Pattern Formulation (IPF) Brain Model | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Musical large-scale form is investigated using an Electronic Dance Music
(EDM) piece fed into a Finite-Difference Time Domain (FDTD) physical model of
the cochlea, which in turn feeds into an Impulse-Pattern Formulation (IPF) brain
model. In previous studies, experimental EEG data showed an enhanced
correlation between brain synchronization and the musical piece's amplitude and
fractal correlation dimension in good agreement with a FitzHugh-Nagumo
oscillator model\cite{Sawicki2022}. Still, this model cannot display temporal
developments of large-scale forms. The IPF Brain model also shows a high
correlation between cochlear input and brain synchronization at the gamma band
range around 50 Hz, but also a strong negative correlation for low frequencies,
associated with musical rhythm, during time frames of low cochlear input
amplitude. Such high synchronization corresponds to temporal lobe epilepsy,
often associated with creativity or spirituality. Therefore, the IPF Brain
model suggests those conscious states to happen in times of low external input
at low frequencies where isochronous musical rhythms are present.
| [
{
"created": "Thu, 5 Oct 2023 09:29:49 GMT",
"version": "v1"
}
] | 2023-10-06 | [
[
"Bader",
"Rolf",
""
]
] | Musical large-scale form is investigated using an Electronic Dance Music (EDM) piece fed into a Finite-Difference Time Domain (FDTD) physical model of the cochlea, which in turn feeds into an Impulse-Pattern Formulation (IPF) brain model. In previous studies, experimental EEG data showed an enhanced correlation between brain synchronization and the musical piece's amplitude and fractal correlation dimension in good agreement with a FitzHugh-Nagumo oscillator model\cite{Sawicki2022}. Still, this model cannot display temporal developments of large-scale forms. The IPF Brain model also shows a high correlation between cochlear input and brain synchronization at the gamma band range around 50 Hz, but also a strong negative correlation for low frequencies, associated with musical rhythm, during time frames of low cochlear input amplitude. Such high synchronization corresponds to temporal lobe epilepsy, often associated with creativity or spirituality. Therefore, the IPF Brain model suggests that those conscious states happen in times of low external input at low frequencies where isochronous musical rhythms are present. |
2001.07810 | Marco Ramaioli | Marco Marconati, Marco Ramaioli | The role of extensional rheology in the oral phase of swallowing: an in
vitro study | null | null | null | null | q-bio.TO cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Swallowing disorders deteriorate significantly the quality of life and can be
life-threatening. Texture modification using shear thinning food thickeners has
proven effective in the management of dysphagia. Some studies have recently
considered the positive role of cohesiveness, but there is still an
insufficient understanding of the effect of the rheological properties of the
liquid bolus on the dynamics of bolus transport, particularly when elasticity
and extensional properties are combined with a shear thinning behaviour. This
study combines steady shear, SAOS and capillary breakage extensional rheometry
with an in vitro method to characterize the oral transport of elastic liquids.
Bolus velocity and bolus length were measured from in vitro experiments using
image analysis and related to the shear and extensional properties. A theory
describing the bolus dynamics shows that the elastic and extensional properties
do not influence significantly the oral transit dynamics. Conversely, in vitro
results suggest that the extensional properties can affect the transition from
the oral to the pharyngeal phase of swallowing, where thin, viscoelastic
liquids lead to a fast transit, lower oral post-swallow residues and more
compact bolus with a smoother surface, which may suggest a lower risk of
fragmentation. This mechanistic explanation suggests that the benefit of the
extensional properties of thin viscoelastic liquids in the management of
dysphagia should be further evaluated in clinical trials.
| [
{
"created": "Tue, 21 Jan 2020 23:29:13 GMT",
"version": "v1"
}
] | 2020-01-23 | [
[
"Marconati",
"Marco",
""
],
[
"Ramaioli",
"Marco",
""
]
] | Swallowing disorders deteriorate significantly the quality of life and can be life-threatening. Texture modification using shear thinning food thickeners has proven effective in the management of dysphagia. Some studies have recently considered the positive role of cohesiveness, but there is still an insufficient understanding of the effect of the rheological properties of the liquid bolus on the dynamics of bolus transport, particularly when elasticity and extensional properties are combined with a shear thinning behaviour. This study combines steady shear, SAOS and capillary breakage extensional rheometry with an in vitro method to characterize the oral transport of elastic liquids. Bolus velocity and bolus length were measured from in vitro experiments using image analysis and related to the shear and extensional properties. A theory describing the bolus dynamics shows that the elastic and extensional properties do not influence significantly the oral transit dynamics. Conversely, in vitro results suggest that the extensional properties can affect the transition from the oral to the pharyngeal phase of swallowing, where thin, viscoelastic liquids lead to a fast transit, lower oral post-swallow residues and more compact bolus with a smoother surface, which may suggest a lower risk of fragmentation. This mechanistic explanation suggests that the benefit of the extensional properties of thin viscoelastic liquids in the management of dysphagia should be further evaluated in clinical trials. |
2311.13874 | Giovanni Pezzulo | Francesco Mannella and Giovanni Pezzulo | Transitive inference as probabilistic preference learning | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Transitive Inference (TI) is a cognitive task that assesses an organism's
ability to infer novel relations between items based on previously acquired
knowledge. TI is known for exhibiting various behavioral and neural signatures,
such as the Serial Position Effect (SPE), Symbolic Distance Effect (SDE), and
the brain's capacity to maintain and merge separate ranking models. We propose
a novel framework that casts TI as a probabilistic preference learning task,
using one-parameter Mallows models. We present a series of simulations that
highlight the effectiveness of our novel approach. We show that the Mallows
ranking model natively reproduces SDE and SPE. Furthermore, extending the model
using Bayesian selection showcases its capacity to generate and merge ranking
hypotheses as pairs with connecting symbols are encountered. Finally, we employ
neural networks to replicate Mallows models, demonstrating how this framework
aligns with observed prefrontal neural activity during TI. Our innovative
approach sheds new light on the nature of TI, emphasizing the potential of
probabilistic preference learning for unraveling its underlying neural
mechanisms.
| [
{
"created": "Thu, 23 Nov 2023 09:40:56 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Jul 2024 08:45:21 GMT",
"version": "v2"
}
] | 2024-07-09 | [
[
"Mannella",
"Francesco",
""
],
[
"Pezzulo",
"Giovanni",
""
]
] | Transitive Inference (TI) is a cognitive task that assesses an organism's ability to infer novel relations between items based on previously acquired knowledge. TI is known for exhibiting various behavioral and neural signatures, such as the Serial Position Effect (SPE), Symbolic Distance Effect (SDE), and the brain's capacity to maintain and merge separate ranking models. We propose a novel framework that casts TI as a probabilistic preference learning task, using one-parameter Mallows models. We present a series of simulations that highlight the effectiveness of our novel approach. We show that the Mallows ranking model natively reproduces SDE and SPE. Furthermore, extending the model using Bayesian selection showcases its capacity to generate and merge ranking hypotheses as pairs with connecting symbols are encountered. Finally, we employ neural networks to replicate Mallows models, demonstrating how this framework aligns with observed prefrontal neural activity during TI. Our innovative approach sheds new light on the nature of TI, emphasizing the potential of probabilistic preference learning for unraveling its underlying neural mechanisms. |
1304.6796 | Adnan Sljoka | Adnan Sljoka and Derek Wilson | Probing Protein Ensemble Rigidity and Hydrogen-Deuterium exchange | 26 pages, 12 figures, Supplementary Information available at
http://www.math.yorku.ca/~adnanslj/SljokaWilsonSupplementary.pdf | null | 10.1088/1478-3975/10/5/056013 | null | q-bio.BM physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein rigidity and flexibility can be analyzed accurately and efficiently
using the program FIRST. Previous studies using FIRST were designed to analyze
the rigidity and flexibility of proteins using a single static (snapshot)
structure. It is however well known that proteins can undergo spontaneous
sub-molecular unfolding and refolding, or conformational dynamics, even under
conditions that strongly favour a well-defined native structure. These (local)
unfolding events result in a large number of conformers that differ from each
other very slightly. In this context, proteins are better represented as a
thermodynamic ensemble of `native-like' structures, and not just as a single
static low-energy structure.
Working with this notion, we introduce a novel FIRST-based approach for
predicting rigidity/flexibility of the protein ensemble by (i) averaging the
hydrogen bonding strengths from the entire ensemble and (ii) by refining the
mathematical model of hydrogen bonds. Furthermore, we combine our
FIRST-ensemble rigidity predictions with the ensemble solvent accessibility
data of the backbone amides and propose a novel computational method which uses
both rigidity and solvent accessibility for predicting hydrogen-deuterium
exchange (HDX). To validate our predictions, we report a novel site specific
HDX experiment which characterizes the native structural ensemble of
Acylphosphatase from hyperthermophile Sulfolobus solfataricus (Sso AcP).
The sub-structural conformational dynamics that is observed by HDX data is
closely matched with the FIRST-ensemble rigidity predictions, which could not
be attained using the traditional single `snapshot' rigidity analysis.
Moreover, the computational predictions of regions that are protected from HDX
and those that undergo exchange are in very good agreement with the
experimental HDX profile of Sso AcP.
| [
{
"created": "Thu, 25 Apr 2013 04:05:39 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Apr 2013 17:20:20 GMT",
"version": "v2"
},
{
"created": "Wed, 8 May 2013 03:28:28 GMT",
"version": "v3"
}
] | 2015-06-15 | [
[
"Sljoka",
"Adnan",
""
],
[
"Wilson",
"Derek",
""
]
] | Protein rigidity and flexibility can be analyzed accurately and efficiently using the program FIRST. Previous studies using FIRST were designed to analyze the rigidity and flexibility of proteins using a single static (snapshot) structure. It is however well known that proteins can undergo spontaneous sub-molecular unfolding and refolding, or conformational dynamics, even under conditions that strongly favour a well-defined native structure. These (local) unfolding events result in a large number of conformers that differ from each other very slightly. In this context, proteins are better represented as a thermodynamic ensemble of `native-like' structures, and not just as a single static low-energy structure. Working with this notion, we introduce a novel FIRST-based approach for predicting rigidity/flexibility of the protein ensemble by (i) averaging the hydrogen bonding strengths from the entire ensemble and (ii) by refining the mathematical model of hydrogen bonds. Furthermore, we combine our FIRST-ensemble rigidity predictions with the ensemble solvent accessibility data of the backbone amides and propose a novel computational method which uses both rigidity and solvent accessibility for predicting hydrogen-deuterium exchange (HDX). To validate our predictions, we report a novel site specific HDX experiment which characterizes the native structural ensemble of Acylphosphatase from hyperthermophile Sulfolobus solfataricus (Sso AcP). The sub-structural conformational dynamics that is observed by HDX data is closely matched with the FIRST-ensemble rigidity predictions, which could not be attained using the traditional single `snapshot' rigidity analysis. Moreover, the computational predictions of regions that are protected from HDX and those that undergo exchange are in very good agreement with the experimental HDX profile of Sso AcP. |
2111.01969 | Yingying Wu | Yingying Wu, Shusheng Xu, Shing-Tung Yau, Yi Wu | PhyloTransformer: A Discriminative Model for Mutation Prediction Based
on a Multi-head Self-attention Mechanism | null | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has caused an
ongoing pandemic infecting 219 million people as of 10/19/21, with a 3.6%
mortality rate. Natural selection can generate favorable mutations with
improved fitness advantages; however, the identified coronaviruses may be the
tip of the iceberg, and potentially more fatal variants of concern (VOCs) may
emerge over time. Understanding the patterns of emerging VOCs and forecasting
mutations that may lead to gain of function or immune escape is urgently
required. Here we developed PhyloTransformer, a Transformer-based
discriminative model that engages a multi-head self-attention mechanism to
model genetic mutations that may lead to viral reproductive advantage. In order
to identify complex dependencies between the elements of each input sequence,
PhyloTransformer utilizes advanced modeling techniques, including a novel Fast
Attention Via positive Orthogonal Random features approach (FAVOR+) from
Performer, and the Masked Language Model (MLM) from Bidirectional Encoder
Representations from Transformers (BERT). PhyloTransformer was trained with
1,765,297 genetic sequences retrieved from the Global Initiative for Sharing
All Influenza Data (GISAID) database. Firstly, we compared the prediction
accuracy of novel mutations and novel combinations using extensive baseline
models; we found that PhyloTransformer outperformed every baseline method with
statistical significance. Secondly, we examined predictions of mutations in
each nucleotide of the receptor binding motif (RBM), and we found our
predictions were precise and accurate. Thirdly, we predicted modifications of
N-glycosylation sites to identify mutations associated with altered
glycosylation that may be favored during viral evolution. We anticipate that
PhyloTransformer may guide proactive vaccine design for effective targeting of
future SARS-CoV-2 variants.
| [
{
"created": "Wed, 3 Nov 2021 01:30:57 GMT",
"version": "v1"
}
] | 2021-11-04 | [
[
"Wu",
"Yingying",
""
],
[
"Xu",
"Shusheng",
""
],
[
"Yau",
"Shing-Tung",
""
],
[
"Wu",
"Yi",
""
]
] | Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has caused an ongoing pandemic infecting 219 million people as of 10/19/21, with a 3.6% mortality rate. Natural selection can generate favorable mutations with improved fitness advantages; however, the identified coronaviruses may be the tip of the iceberg, and potentially more fatal variants of concern (VOCs) may emerge over time. Understanding the patterns of emerging VOCs and forecasting mutations that may lead to gain of function or immune escape is urgently required. Here we developed PhyloTransformer, a Transformer-based discriminative model that engages a multi-head self-attention mechanism to model genetic mutations that may lead to viral reproductive advantage. In order to identify complex dependencies between the elements of each input sequence, PhyloTransformer utilizes advanced modeling techniques, including a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+) from Performer, and the Masked Language Model (MLM) from Bidirectional Encoder Representations from Transformers (BERT). PhyloTransformer was trained with 1,765,297 genetic sequences retrieved from the Global Initiative for Sharing All Influenza Data (GISAID) database. Firstly, we compared the prediction accuracy of novel mutations and novel combinations using extensive baseline models; we found that PhyloTransformer outperformed every baseline method with statistical significance. Secondly, we examined predictions of mutations in each nucleotide of the receptor binding motif (RBM), and we found our predictions were precise and accurate. Thirdly, we predicted modifications of N-glycosylation sites to identify mutations associated with altered glycosylation that may be favored during viral evolution. We anticipate that PhyloTransformer may guide proactive vaccine design for effective targeting of future SARS-CoV-2 variants. |
2302.05316 | Forrest Sheldon | Forrest Sheldon | Characterizing contaminant noise in barcoded perturbation experiments | 12 pages including supplemental material, 4 main text figures, 2
figures in supplemental material | null | null | null | q-bio.QM stat.AP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Bursting cells lead to ambient RNA that contaminates sequencing data. This
process is especially problematic in perturbation experiments where
transcription factors are implanted into cells to determine their effects. The
presence of contaminants makes it difficult to determine whether a factor is
truly expressed in the cell. This paper studies the properties of contaminant
noise from an analytical perspective, showing that the cell bursting process
constrains the form of the noise distribution across factors. These constraints
can be leveraged to improve decontamination by removing counts that are more
likely the result of noise than expression. In two biological replicates of a
perturbation experiment, run across two sequencing protocols, decontaminated
counts agree with bulk genomic measurements of the transduction rate and are
automatically corrected for differences in sequencing.
| [
{
"created": "Fri, 10 Feb 2023 15:24:45 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Mar 2023 09:08:52 GMT",
"version": "v2"
},
{
"created": "Wed, 20 Sep 2023 08:53:20 GMT",
"version": "v3"
}
] | 2023-09-21 | [
[
"Sheldon",
"Forrest",
""
]
] | Bursting cells lead to ambient RNA that contaminates sequencing data. This process is especially problematic in perturbation experiments where transcription factors are implanted into cells to determine their effects. The presence of contaminants makes it difficult to determine whether a factor is truly expressed in the cell. This paper studies the properties of contaminant noise from an analytical perspective, showing that the cell bursting process constrains the form of the noise distribution across factors. These constraints can be leveraged to improve decontamination by removing counts that are more likely the result of noise than expression. In two biological replicates of a perturbation experiment, run across two sequencing protocols, decontaminated counts agree with bulk genomic measurements of the transduction rate and are automatically corrected for differences in sequencing. |
2004.07119 | Xiaojie Guo | Xiaojie Guo, Yuanqi Du, Sivani Tadepalli, Liang Zhao, and Amarda Shehu | Generating Tertiary Protein Structures via an Interpretative Variational
Autoencoder | null | null | null | null | q-bio.BM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Much scientific enquiry across disciplines is founded upon a mechanistic
treatment of dynamic systems that ties form to function. A highly visible
instance of this is in molecular biology, where an important goal is to
determine functionally-relevant forms/structures that a protein molecule
employs to interact with molecular partners in the living cell. This goal is
typically pursued under the umbrella of stochastic optimization with algorithms
that optimize a scoring function. Research repeatedly shows that current
scoring functions, though steadily improving, correlate weakly with molecular
activity. Inspired by recent momentum in generative deep learning, this paper
proposes and evaluates an alternative approach to generating
functionally-relevant three-dimensional structures of a protein. Though
typically deep generative models struggle with highly-structured data, the work
presented here circumvents this challenge via graph-generative models. A
comprehensive evaluation of several deep architectures shows the promise of
generative models in directly revealing the latent space for sampling novel
tertiary structures, as well as in highlighting axes/factors that carry
structural meaning and open the black box often associated with deep models.
The work presented here is a first step towards interpretative, deep generative
models becoming viable and informative complementary approaches to protein
structure prediction.
| [
{
"created": "Wed, 8 Apr 2020 17:40:21 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Jun 2021 06:02:16 GMT",
"version": "v2"
}
] | 2021-06-17 | [
[
"Guo",
"Xiaojie",
""
],
[
"Du",
"Yuanqi",
""
],
[
"Tadepalli",
"Sivani",
""
],
[
"Zhao",
"Liang",
""
],
[
"Shehu",
"Amarda",
""
]
] | Much scientific enquiry across disciplines is founded upon a mechanistic treatment of dynamic systems that ties form to function. A highly visible instance of this is in molecular biology, where an important goal is to determine functionally-relevant forms/structures that a protein molecule employs to interact with molecular partners in the living cell. This goal is typically pursued under the umbrella of stochastic optimization with algorithms that optimize a scoring function. Research repeatedly shows that current scoring functions, though steadily improving, correlate weakly with molecular activity. Inspired by recent momentum in generative deep learning, this paper proposes and evaluates an alternative approach to generating functionally-relevant three-dimensional structures of a protein. Though typically deep generative models struggle with highly-structured data, the work presented here circumvents this challenge via graph-generative models. A comprehensive evaluation of several deep architectures shows the promise of generative models in directly revealing the latent space for sampling novel tertiary structures, as well as in highlighting axes/factors that carry structural meaning and open the black box often associated with deep models. The work presented here is a first step towards interpretative, deep generative models becoming viable and informative complementary approaches to protein structure prediction. |
2006.06215 | Zhanhui Wang | Kai Wen, Yuchen Bai, Yujie Wei, Chenglong Li, Suxia Zhang, Jianzhong
Shen, Zhanhui Wang | Influence of Small Molecule Property on Antibody Response | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Antibodies with high titer and affinity to small molecules are critical in the
field for the development of vaccines against drugs of abuse, antidotes to
toxins and immunoassays for compounds. However, little is known regarding how
the properties of small molecules influence the antibody response, or which
chemical descriptors could indicate its degree. Based on our previous study, we
designed and synthesized two groups of small molecules, called haptens, with
varied hydrophobicities to investigate the relationship between properties of
small molecules and antibody response in terms of titer and affinity. We found
that the magnitude of the antibody response is positively correlated with the
degree of molecular hydrophobicity and related chemical descriptors. This study
provides insight into the immunological characteristics of small molecules
themselves and useful clues to produce high quality antibodies against small
molecules.
| [
{
"created": "Thu, 11 Jun 2020 06:05:12 GMT",
"version": "v1"
}
] | 2020-06-12 | [
[
"Wen",
"Kai",
""
],
[
"Bai",
"Yuchen",
""
],
[
"Wei",
"Yujie",
""
],
[
"Li",
"Chenglong",
""
],
[
"Zhang",
"Suxia",
""
],
[
"Shen",
"Jianzhong",
""
],
[
"Wang",
"Zhanhui",
""
]
] | Antibodies with high titer and affinity to small molecules are critical in the field for the development of vaccines against drugs of abuse, antidotes to toxins and immunoassays for compounds. However, little is known regarding how the properties of small molecules influence the antibody response, or which chemical descriptors could indicate its degree. Based on our previous study, we designed and synthesized two groups of small molecules, called haptens, with varied hydrophobicities to investigate the relationship between properties of small molecules and antibody response in terms of titer and affinity. We found that the magnitude of the antibody response is positively correlated with the degree of molecular hydrophobicity and related chemical descriptors. This study provides insight into the immunological characteristics of small molecules themselves and useful clues to produce high quality antibodies against small molecules. |
2201.02461 | Christoph Daube | Christoph Daube, Joachim Gross and Robin A. A. Ince | A whitening approach for Transfer Entropy permits the application to
narrow-band signals | null | null | null | null | q-bio.NC q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Transfer Entropy, a generalisation of Granger Causality, promises to measure
"information transfer" from a source to a target signal by ignoring
self-predictability of a target signal when quantifying the source-target
relationship. A simple example for signals with such self-predictability are
narrowband signals. These are both thought to be intrinsically generated by the
brain as well as commonly dealt with in analyses of brain signals, where
band-pass filters are used to separate responses from noise. However, the use
of Transfer Entropy is usually discouraged in such cases. We simulate
simplistic examples where we confirm the failure of classic implementations of
Transfer Entropy when applied to narrow-band signals, as made evident by a
flawed recovery of effect sizes and interaction delays. We propose an
alternative approach based on a whitening of the input signals before computing
a bivariate measure of directional time-lagged dependency. This approach solves
the problems found in the simple simulated systems. Finally, we explore the
behaviour of our measure when applied to delta and theta response components in
Magnetoencephalography (MEG) responses to continuous speech. The small effects
that our measure attributes to a directed interaction from the stimulus to the
neuronal responses are stronger in the theta than in the delta band. This
suggests that the delta band reflects a more predictive coupling, while the
theta band is more strongly involved in bottom-up, reactive processing. Taken
together, we hope to increase the interest in directed perspectives on
frequency-specific dependencies.
| [
{
"created": "Fri, 7 Jan 2022 14:19:00 GMT",
"version": "v1"
},
{
"created": "Fri, 20 May 2022 09:24:51 GMT",
"version": "v2"
}
] | 2022-05-23 | [
[
"Daube",
"Christoph",
""
],
[
"Gross",
"Joachim",
""
],
[
"Ince",
"Robin A. A.",
""
]
] | Transfer Entropy, a generalisation of Granger Causality, promises to measure "information transfer" from a source to a target signal by ignoring self-predictability of a target signal when quantifying the source-target relationship. A simple example for signals with such self-predictability are narrowband signals. These are both thought to be intrinsically generated by the brain as well as commonly dealt with in analyses of brain signals, where band-pass filters are used to separate responses from noise. However, the use of Transfer Entropy is usually discouraged in such cases. We simulate simplistic examples where we confirm the failure of classic implementations of Transfer Entropy when applied to narrow-band signals, as made evident by a flawed recovery of effect sizes and interaction delays. We propose an alternative approach based on a whitening of the input signals before computing a bivariate measure of directional time-lagged dependency. This approach solves the problems found in the simple simulated systems. Finally, we explore the behaviour of our measure when applied to delta and theta response components in Magnetoencephalography (MEG) responses to continuous speech. The small effects that our measure attributes to a directed interaction from the stimulus to the neuronal responses are stronger in the theta than in the delta band. This suggests that the delta band reflects a more predictive coupling, while the theta band is more strongly involved in bottom-up, reactive processing. Taken together, we hope to increase the interest in directed perspectives on frequency-specific dependencies. |
2307.06344 | QieHe Sun | Qiehe Sun, Jiawen Li, Jin Xu, Junru Cheng, Tian Guan, Yonghong He | The Whole Pathological Slide Classification via Weakly Supervised
Learning | null | null | null | null | q-bio.QM cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to its superior efficiency in utilizing annotations and addressing
gigapixel-sized images, multiple instance learning (MIL) has shown great
promise as a framework for whole slide image (WSI) classification in digital
pathology diagnosis. However, existing methods tend to focus on advanced
aggregators with different structures, often overlooking the intrinsic features
of H\&E pathological slides. To address this limitation, we introduced two
pathological priors: nuclear heterogeneity of diseased cells and spatial
correlation of pathological tiles. Leveraging the former, we proposed a data
augmentation method that utilizes stain separation during extractor training
via a contrastive learning strategy to obtain instance-level representations.
We then described the spatial relationships between the tiles using an
adjacency matrix. By integrating these two views, we designed a multi-instance
framework for analyzing H\&E-stained tissue images based on pathological
inductive bias, encompassing feature extraction, filtering, and aggregation.
Extensive experiments on the Camelyon16 breast dataset and TCGA-NSCLC Lung
dataset demonstrate that our proposed framework can effectively handle tasks
related to cancer detection and differentiation of subtypes, outperforming
state-of-the-art medical image classification methods based on MIL. The code
will be released later.
| [
{
"created": "Wed, 12 Jul 2023 16:14:23 GMT",
"version": "v1"
}
] | 2023-07-14 | [
[
"Sun",
"Qiehe",
""
],
[
"Li",
"Jiawen",
""
],
[
"Xu",
"Jin",
""
],
[
"Cheng",
"Junru",
""
],
[
"Guan",
"Tian",
""
],
[
"He",
"Yonghong",
""
]
] | Due to its superior efficiency in utilizing annotations and addressing gigapixel-sized images, multiple instance learning (MIL) has shown great promise as a framework for whole slide image (WSI) classification in digital pathology diagnosis. However, existing methods tend to focus on advanced aggregators with different structures, often overlooking the intrinsic features of H\&E pathological slides. To address this limitation, we introduced two pathological priors: nuclear heterogeneity of diseased cells and spatial correlation of pathological tiles. Leveraging the former, we proposed a data augmentation method that utilizes stain separation during extractor training via a contrastive learning strategy to obtain instance-level representations. We then described the spatial relationships between the tiles using an adjacency matrix. By integrating these two views, we designed a multi-instance framework for analyzing H\&E-stained tissue images based on pathological inductive bias, encompassing feature extraction, filtering, and aggregation. Extensive experiments on the Camelyon16 breast dataset and TCGA-NSCLC Lung dataset demonstrate that our proposed framework can effectively handle tasks related to cancer detection and differentiation of subtypes, outperforming state-of-the-art medical image classification methods based on MIL. The code will be released later. |
q-bio/0509016 | Jose Vilar | Jose M. G. Vilar, Ronald Jansen, and Chris Sander | Signal processing in the TGF-beta superfamily ligand-receptor network | 33 pages, 7 figures | PLoS Comput Biol. 2006 Jan;2(1):e3. Epub 2006 Jan 27. | 10.1371/journal.pcbi.0020003 | null | q-bio.MN nlin.AO physics.bio-ph q-bio.SC | null | The TGF-beta pathway plays a central role in tissue homeostasis and
morphogenesis. It transduces a variety of extracellular signals into
intracellular transcriptional responses that control a plethora of cellular
processes, including cell growth, apoptosis, and differentiation. We use
computational modeling to show that coupling of signaling with receptor
trafficking results in a highly versatile signal-processing unit, able to sense
by itself absolute levels of ligand, temporal changes in ligand concentration,
and ratios of multiple ligands. This coupling controls whether the response of
the receptor module is transient or permanent and whether or not different
signaling channels behave independently of each other. Our computational
approach unifies seemingly disparate experimental observations and suggests
specific changes in receptor trafficking patterns that can lead to phenotypes
that favor tumor progression.
| [
{
"created": "Wed, 14 Sep 2005 18:54:33 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Vilar",
"Jose M. G.",
""
],
[
"Jansen",
"Ronald",
""
],
[
"Sander",
"Chris",
""
]
] | The TGF-beta pathway plays a central role in tissue homeostasis and morphogenesis. It transduces a variety of extracellular signals into intracellular transcriptional responses that control a plethora of cellular processes, including cell growth, apoptosis, and differentiation. We use computational modeling to show that coupling of signaling with receptor trafficking results in a highly versatile signal-processing unit, able to sense by itself absolute levels of ligand, temporal changes in ligand concentration, and ratios of multiple ligands. This coupling controls whether the response of the receptor module is transient or permanent and whether or not different signaling channels behave independently of each other. Our computational approach unifies seemingly disparate experimental observations and suggests specific changes in receptor trafficking patterns that can lead to phenotypes that favor tumor progression. |
1209.3083 | Christoph Haselwandter | Christoph A. Haselwandter and Rob Phillips | Directional interactions and cooperativity between mechanosensitive
membrane proteins | null | EPL 101, 68002 (2013) | 10.1209/0295-5075/101/68002 | null | q-bio.BM cond-mat.soft physics.bio-ph q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While modern structural biology has provided us with a rich and diverse
picture of membrane proteins, the biological function of membrane proteins is
often influenced by the mechanical properties of the surrounding lipid bilayer.
Here we explore the relation between the shape of membrane proteins and the
cooperative function of membrane proteins induced by membrane-mediated elastic
interactions. For the experimental model system of mechanosensitive ion
channels we find that the sign and strength of elastic interactions depend on
the protein shape, yielding distinct cooperative gating curves for distinct
protein orientations. Our approach predicts how directional elastic
interactions affect the molecular structure, organization, and biological
function of proteins in crowded membranes.
| [
{
"created": "Fri, 14 Sep 2012 03:38:22 GMT",
"version": "v1"
},
{
"created": "Fri, 24 May 2013 02:42:58 GMT",
"version": "v2"
}
] | 2013-05-27 | [
[
"Haselwandter",
"Christoph A.",
""
],
[
"Phillips",
"Rob",
""
]
] | While modern structural biology has provided us with a rich and diverse picture of membrane proteins, the biological function of membrane proteins is often influenced by the mechanical properties of the surrounding lipid bilayer. Here we explore the relation between the shape of membrane proteins and the cooperative function of membrane proteins induced by membrane-mediated elastic interactions. For the experimental model system of mechanosensitive ion channels we find that the sign and strength of elastic interactions depend on the protein shape, yielding distinct cooperative gating curves for distinct protein orientations. Our approach predicts how directional elastic interactions affect the molecular structure, organization, and biological function of proteins in crowded membranes. |
2111.04547 | Randall Beer | Randall D. Beer | The Global Structure of Codimension-2 Local Bifurcations in
Continuous-Time Recurrent Neural Networks | null | Biological Cybernetics (2022) 116:501-515 | 10.1007/s00422-022-00938-5 | null | q-bio.NC cs.NE math.DS nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | If we are ever to move beyond the study of isolated special cases in
theoretical neuroscience, we need to develop more general theories of neural
circuits over a given neural model. The present paper considers this challenge
in the context of continuous-time recurrent neural networks (CTRNNs), a simple
but dynamically-universal model that has been widely utilized in both
computational neuroscience and neural networks. Here we extend previous work on
the parameter space structure of codimension-1 local bifurcations in CTRNNs to
include codimension-2 local bifurcation manifolds. Specifically, we derive the
necessary conditions for all generic local codimension-2 bifurcations for
general CTRNNs, specialize these conditions to circuits containing from one to
four neurons, illustrate in full detail the application of these conditions to
example circuits, derive closed-form expressions for these bifurcation
manifolds where possible, and demonstrate how this analysis allows us to find
and trace several global codimension-1 bifurcation manifolds that originate
from the codimension-2 bifurcations.
| [
{
"created": "Mon, 8 Nov 2021 14:56:18 GMT",
"version": "v1"
}
] | 2022-10-17 | [
[
"Beer",
"Randall D.",
""
]
] | If we are ever to move beyond the study of isolated special cases in theoretical neuroscience, we need to develop more general theories of neural circuits over a given neural model. The present paper considers this challenge in the context of continuous-time recurrent neural networks (CTRNNs), a simple but dynamically-universal model that has been widely utilized in both computational neuroscience and neural networks. Here we extend previous work on the parameter space structure of codimension-1 local bifurcations in CTRNNs to include codimension-2 local bifurcation manifolds. Specifically, we derive the necessary conditions for all generic local codimension-2 bifurcations for general CTRNNs, specialize these conditions to circuits containing from one to four neurons, illustrate in full detail the application of these conditions to example circuits, derive closed-form expressions for these bifurcation manifolds where possible, and demonstrate how this analysis allows us to find and trace several global codimension-1 bifurcation manifolds that originate from the codimension-2 bifurcations. |
1910.10210 | Stephen Eglen | Patrick W. Keeley, Stephen J. Eglen, Benjamin E. Reese | From Random to Regular: Variation in the Patterning of Retinal Mosaics | 11 Figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | The various types of retinal neurons are each positioned at their respective
depths within the retina where they are believed to be assembled as orderly
mosaics, in which like-type neurons minimize proximity to one another. Two
common statistical analyses for assessing the spatial properties of retinal
mosaics include the nearest neighbor analysis, from which an index of their
"regularity" is commonly calculated, and the density recovery profile derived
from auto-correlation analysis, revealing the presence of an exclusion zone
indicative of anti-clustering. While each of the spatial statistics derived
from these analyses, the regularity index and the effective radius, can be
useful in characterizing such properties of orderly retinal mosaics, they are
rarely sufficient for conveying the natural variation in the self-spacing
behavior of different types of retinal neurons and the extent to which that
behavior generates uniform intercellular spacing across the mosaic. We consider
the strengths and limitations of different spatial statistical analyses for
assessing the patterning in retinal mosaics, highlighting a number of
misconceptions and their frequent misuse. Rather than being diagnostic criteria
for determining simply whether a population is "regular", they should be
treated as descriptive statistics that convey variation in the factors that
influence neuronal positioning. We subsequently apply multiple spatial
statistics to the analysis of eight different mosaics in the mouse retina,
demonstrating conspicuous variability in the degree of patterning present, from
essentially random to notably regular. This variability in patterning has both
a developmental as well as a functional significance, reflecting the rules
governing the positioning of different types of neurons as the architecture of
the retina is assembled (abstract truncated).
| [
{
"created": "Tue, 22 Oct 2019 19:50:40 GMT",
"version": "v1"
}
] | 2019-10-24 | [
[
"Keeley",
"Patrick W.",
""
],
[
"Eglen",
"Stephen J.",
""
],
[
"Reese",
"Benjamin E.",
""
]
] | The various types of retinal neurons are each positioned at their respective depths within the retina where they are believed to be assembled as orderly mosaics, in which like-type neurons minimize proximity to one another. Two common statistical analyses for assessing the spatial properties of retinal mosaics include the nearest neighbor analysis, from which an index of their "regularity" is commonly calculated, and the density recovery profile derived from auto-correlation analysis, revealing the presence of an exclusion zone indicative of anti-clustering. While each of the spatial statistics derived from these analyses, the regularity index and the effective radius, can be useful in characterizing such properties of orderly retinal mosaics, they are rarely sufficient for conveying the natural variation in the self-spacing behavior of different types of retinal neurons and the extent to which that behavior generates uniform intercellular spacing across the mosaic. We consider the strengths and limitations of different spatial statistical analyses for assessing the patterning in retinal mosaics, highlighting a number of misconceptions and their frequent misuse. Rather than being diagnostic criteria for determining simply whether a population is "regular", they should be treated as descriptive statistics that convey variation in the factors that influence neuronal positioning. We subsequently apply multiple spatial statistics to the analysis of eight different mosaics in the mouse retina, demonstrating conspicuous variability in the degree of patterning present, from essentially random to notably regular. This variability in patterning has both a developmental as well as a functional significance, reflecting the rules governing the positioning of different types of neurons as the architecture of the retina is assembled (abstract truncated). |
1705.06147 | Roberto Cavoretto | Roberto Cavoretto, Alessandra De Rossi, Roberta Freda, Hanli Qiao,
Ezio Venturino | Numerical Methods for Pulmonary Image Registration | null | null | null | null | q-bio.QM math.NA physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the complexity and invisibility of human organs, diagnosticians need to
analyze medical images to determine where the lesion region is and which kind
of disease it is, in order to make precise diagnoses. For satisfying clinical
purposes through the analysis of medical images, registration plays an
essential role. For instance, in Image-Guided Interventions (IGI) and
computer-aided surgeries, patient anatomy is registered to preoperative images
to guide surgeons in completing procedures. Medical image registration is also
very useful in surgical planning, monitoring disease progression and for atlas
construction. Given this significance, the theories, methods, and
implementation of image registration constitute fundamental knowledge in
educational training for
medical specialists. In this chapter, we focus on image registration of a
specific human organ, i.e. the lung, which is prone to be lesioned. For
pulmonary image registration, the improvement of the accuracy and how to obtain
it in order to achieve clinical purposes represents an important problem which
should seriously be addressed. In this chapter, we provide a survey which
focuses on the role of image registration in educational training together with
the state-of-the-art of pulmonary image registration. In the first part, we
describe clinical applications of image registration introducing artificial
organs in Simulation-based Education. In the second part, we summarize the
common methods used in pulmonary image registration and analyze popular papers
to obtain a survey of pulmonary image registration.
| [
{
"created": "Tue, 16 May 2017 06:42:08 GMT",
"version": "v1"
}
] | 2017-05-18 | [
[
"Cavoretto",
"Roberto",
""
],
[
"De Rossi",
"Alessandra",
""
],
[
"Freda",
"Roberta",
""
],
[
"Qiao",
"Hanli",
""
],
[
"Venturino",
"Ezio",
""
]
] | Due to the complexity and invisibility of human organs, diagnosticians need to analyze medical images to determine where the lesion region is and which kind of disease it is, in order to make precise diagnoses. For satisfying clinical purposes through the analysis of medical images, registration plays an essential role. For instance, in Image-Guided Interventions (IGI) and computer-aided surgeries, patient anatomy is registered to preoperative images to guide surgeons in completing procedures. Medical image registration is also very useful in surgical planning, monitoring disease progression and for atlas construction. Given this significance, the theories, methods, and implementation of image registration constitute fundamental knowledge in educational training for medical specialists. In this chapter, we focus on image registration of a specific human organ, i.e. the lung, which is prone to be lesioned. For pulmonary image registration, the improvement of the accuracy and how to obtain it in order to achieve clinical purposes represents an important problem which should seriously be addressed. In this chapter, we provide a survey which focuses on the role of image registration in educational training together with the state-of-the-art of pulmonary image registration. In the first part, we describe clinical applications of image registration introducing artificial organs in Simulation-based Education. In the second part, we summarize the common methods used in pulmonary image registration and analyze popular papers to obtain a survey of pulmonary image registration. |
1105.1217 | Pascal Grange | Pascal Grange and Partha P. Mitra | Marker Genes for Anatomical Regions in the Brain: Insights from the
Allen Gene Expression Atlas | 26 pages, LaTeX | null | null | null | q-bio.QM q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantitative criteria are proposed to identify genes (and sets of genes)
whose expression marks a specific brain region (or a set of brain regions).
Gene-expression energies, obtained for thousands of mouse genes by numerization
of in-situ hybridization images in the Allen Gene Expression Atlas, are used to
test these methods in the mouse brain. Individual genes are ranked using
integrals of their expression energies across brain regions. The ranking is
generalized to sets of genes and the problem of optimal markers of a classical
region receives a linear-algebraic solution. Moreover, the goodness of the
fitting of the expression profile of a gene to the profile of a brain region is
closely related to the co-expression of genes. The geometric interpretation of
this fact leads to a quantitative criterion to detect markers of pairs of brain
regions. Local properties of the gene-expression profiles are also used to
detect genes that separate a given brain region from its environment.
| [
{
"created": "Fri, 6 May 2011 02:56:45 GMT",
"version": "v1"
}
] | 2011-05-09 | [
[
"Grange",
"Pascal",
""
],
[
"Mitra",
"Partha P.",
""
]
] | Quantitative criteria are proposed to identify genes (and sets of genes) whose expression marks a specific brain region (or a set of brain regions). Gene-expression energies, obtained for thousands of mouse genes by numerization of in-situ hybridization images in the Allen Gene Expression Atlas, are used to test these methods in the mouse brain. Individual genes are ranked using integrals of their expression energies across brain regions. The ranking is generalized to sets of genes and the problem of optimal markers of a classical region receives a linear-algebraic solution. Moreover, the goodness of the fitting of the expression profile of a gene to the profile of a brain region is closely related to the co-expression of genes. The geometric interpretation of this fact leads to a quantitative criterion to detect markers of pairs of brain regions. Local properties of the gene-expression profiles are also used to detect genes that separate a given brain region from its environment. |
0902.3133 | Claude Pasquier | Claude Pasquier, Vasilis Promponas, Giorgos Palaios, Ioannis
Hamodrakas, Stavros Hamodrakas | A novel method for predicting transmembrane segments in proteins based
on a statistical analysis of the SwissProt database: the PRED-TMR algorithm | null | Protein Engineering Design & Selection, 1999, 12 (5), pp.381-5 | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel method that predicts transmembrane domains in proteins
using solely information contained in the sequence itself. The PRED-TMR
algorithm described refines a standard hydrophobicity analysis with a
detection of potential termini ('edges', starts and ends) of transmembrane
regions. This allows one both to discard highly hydrophobic regions not
delimited by clear start and end configurations and to confirm putative
transmembrane segments not distinguishable by their hydrophobic composition.
The accuracy obtained on a test set of 101 non-homologous transmembrane
proteins with reliable topologies compares well with that of other popular
existing methods. Only a slight decrease in prediction accuracy was observed
when the algorithm was applied to all transmembrane proteins of the SwissProt
database (release 35). A WWW server running the PRED-TMR algorithm is available
at http://o2.db.uoa.gr/PRED-TMR/
| [
{
"created": "Wed, 18 Feb 2009 13:23:03 GMT",
"version": "v1"
},
{
"created": "Mon, 9 May 2016 07:39:16 GMT",
"version": "v2"
}
] | 2016-05-10 | [
[
"Pasquier",
"Claude",
""
],
[
"Promponas",
"Vasilis",
""
],
[
"Palaios",
"Giorgos",
""
],
[
"Hamodrakas",
"Ioannis",
""
],
[
"Hamodrakas",
"Stavros",
""
]
] | We present a novel method that predicts transmembrane domains in proteins using solely information contained in the sequence itself. The PRED-TMR algorithm described refines a standard hydrophobicity analysis with a detection of potential termini ('edges', starts and ends) of transmembrane regions. This allows one both to discard highly hydrophobic regions not delimited by clear start and end configurations and to confirm putative transmembrane segments not distinguishable by their hydrophobic composition. The accuracy obtained on a test set of 101 non-homologous transmembrane proteins with reliable topologies compares well with that of other popular existing methods. Only a slight decrease in prediction accuracy was observed when the algorithm was applied to all transmembrane proteins of the SwissProt database (release 35). A WWW server running the PRED-TMR algorithm is available at http://o2.db.uoa.gr/PRED-TMR/ |
2005.04069 | Ziyuan Zhao | Jiapan Gu, Ziyuan Zhao, Zeng Zeng, Yuzhe Wang, Zhengyiren Qiu,
Bharadwaj Veeravalli, Brian Kim Poh Goh, Glenn Kunnath Bonney, Krishnakumar
Madhavan, Chan Wan Ying, Lim Kheng Choon, Thng Choon Hua, Pierce KH Chow | Multi-Phase Cross-modal Learning for Noninvasive Gene Mutation
Prediction in Hepatocellular Carcinoma | Accepted version to be published in the 42nd IEEE Annual
International Conference of the IEEE Engineering in Medicine and Biology
Society, EMBC 2020, Montreal, Canada | 2020 42nd Annual International Conference of the IEEE Engineering
in Medicine & Biology Society (EMBC) | 10.1109/EMBC44109.2020.9176677 | null | q-bio.QM cs.CV eess.IV q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hepatocellular carcinoma (HCC) is the most common type of primary liver
cancer and the fourth most common cause of cancer-related death worldwide.
Understanding the underlying gene mutations in HCC provides great prognostic
value for treatment planning and targeted therapy. Radiogenomics has revealed
an association between non-invasive imaging features and molecular genomics.
However, imaging feature identification is laborious and error-prone. In this
paper, we propose an end-to-end deep learning framework for mutation prediction
in APOB, COL11A1 and ATRX genes using multiphasic CT scans. Considering
intra-tumour heterogeneity (ITH) in HCC, multi-region sampling technology is
implemented to generate the dataset for experiments. Experimental results
demonstrate the effectiveness of the proposed model.
| [
{
"created": "Fri, 8 May 2020 14:36:59 GMT",
"version": "v1"
}
] | 2022-03-24 | [
[
"Gu",
"Jiapan",
""
],
[
"Zhao",
"Ziyuan",
""
],
[
"Zeng",
"Zeng",
""
],
[
"Wang",
"Yuzhe",
""
],
[
"Qiu",
"Zhengyiren",
""
],
[
"Veeravalli",
"Bharadwaj",
""
],
[
"Goh",
"Brian Kim Poh",
""
],
[
"Bonney",
"Glenn Kunnath",
""
],
[
"Madhavan",
"Krishnakumar",
""
],
[
"Ying",
"Chan Wan",
""
],
[
"Choon",
"Lim Kheng",
""
],
[
"Hua",
"Thng Choon",
""
],
[
"Chow",
"Pierce KH",
""
]
] | Hepatocellular carcinoma (HCC) is the most common type of primary liver cancer and the fourth most common cause of cancer-related death worldwide. Understanding the underlying gene mutations in HCC provides great prognostic value for treatment planning and targeted therapy. Radiogenomics has revealed an association between non-invasive imaging features and molecular genomics. However, imaging feature identification is laborious and error-prone. In this paper, we propose an end-to-end deep learning framework for mutation prediction in APOB, COL11A1 and ATRX genes using multiphasic CT scans. Considering intra-tumour heterogeneity (ITH) in HCC, multi-region sampling technology is implemented to generate the dataset for experiments. Experimental results demonstrate the effectiveness of the proposed model. |
2011.05294 | Mike Steel Prof. | Mike Steel | Modelling aspects of consciousness: a topological perspective | 13 pages, 1 figure, (version 5 is a slightly revised version of
version 4, which was a moderately revised version of 3 (which was identical
to versions 1 and 2, except that the figure was reduced in size to allow
easier downloads)) | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Attention Schema Theory (AST) is a recent proposal to provide a scientific
explanation for the basis of subjective awareness. In AST, the brain constructs
a representation of attention taking place in its own (and others') mind (`the
attention schema'). Moreover, this representation is incomplete for efficiency
reasons. This inherent incompleteness of the attention schema results in the
inability of humans to understand how their own subjective awareness arises
(related to the so-called `hard problem' of consciousness). Given this theory,
the present paper asks whether a mind (either human or machine-based) that
incorporates attention, and that contains a representation of its own
attention, can ever have a complete representation. Using a simple yet general
model and a mathematical argument based on classical topology, we show that a
complete representation of attention is not possible, since it cannot
faithfully represent streams of attention. In this way, the study supports one
of the core aspects of AST, that the brain's representation of its own
attention is necessarily incomplete.
| [
{
"created": "Tue, 10 Nov 2020 18:32:00 GMT",
"version": "v1"
},
{
"created": "Wed, 11 Nov 2020 04:08:38 GMT",
"version": "v2"
},
{
"created": "Sun, 22 Nov 2020 03:42:26 GMT",
"version": "v3"
},
{
"created": "Fri, 9 Apr 2021 05:34:35 GMT",
"version": "v4"
},
{
"created": "Tue, 27 Apr 2021 02:40:44 GMT",
"version": "v5"
}
] | 2021-04-28 | [
[
"Steel",
"Mike",
""
]
] | Attention Schema Theory (AST) is a recent proposal to provide a scientific explanation for the basis of subjective awareness. In AST, the brain constructs a representation of attention taking place in its own (and others') mind (`the attention schema'). Moreover, this representation is incomplete for efficiency reasons. This inherent incompleteness of the attention schema results in the inability of humans to understand how their own subjective awareness arises (related to the so-called `hard problem' of consciousness). Given this theory, the present paper asks whether a mind (either human or machine-based) that incorporates attention, and that contains a representation of its own attention, can ever have a complete representation. Using a simple yet general model and a mathematical argument based on classical topology, we show that a complete representation of attention is not possible, since it cannot faithfully represent streams of attention. In this way, the study supports one of the core aspects of AST, that the brain's representation of its own attention is necessarily incomplete. |
2408.03240 | Gabriel Palma | Jo\~ao C\'esar Reis Alves, Gabriel Rodrigues Palma, Idemauro Antonio
Rodrigues de Lara | Beta regression mixed model applied to sensory analysis | 13 pages | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Sensory analysis is an important area that the food industry can use to
innovate and improve its products. This study involves a sample of individuals
who can be trained or not to assess a product using a hedonic scale or notes,
where the experimental design is a balanced incomplete block design. In this
context, integrating sensory analysis with effective statistical methods, which
consider the nature of the response variables, is essential to answer the aim
of the experimental study. Some techniques are available to analyse sensory
data, such as response surface models or categorical models. This article
proposes using beta regression as an alternative to the proportional odds
model, addressing some convergence problems, especially regarding the number of
parameters. Moreover, the beta distribution is flexible for heteroscedastic
and asymmetric data. To this end, we conducted simulation studies that showed
agreement rates in product selection using both models. Also, we presented a
motivational study that was developed to select prebiotic drinks based on
cashew nuts added to grape juice. In this application, the beta regression
mixed model results corroborated with the selected formulations using the
proportional mixed model.
| [
{
"created": "Tue, 6 Aug 2024 15:01:56 GMT",
"version": "v1"
}
] | 2024-08-07 | [
[
"Alves",
"João César Reis",
""
],
[
"Palma",
"Gabriel Rodrigues",
""
],
[
"de Lara",
"Idemauro Antonio Rodrigues",
""
]
] | Sensory analysis is an important area that the food industry can use to innovate and improve its products. This study involves a sample of individuals who can be trained or not to assess a product using a hedonic scale or notes, where the experimental design is a balanced incomplete block design. In this context, integrating sensory analysis with effective statistical methods, which consider the nature of the response variables, is essential to answer the aim of the experimental study. Some techniques are available to analyse sensory data, such as response surface models or categorical models. This article proposes using beta regression as an alternative to the proportional odds model, addressing some convergence problems, especially regarding the number of parameters. Moreover, the beta distribution is flexible for heteroscedastic and asymmetric data. To this end, we conducted simulation studies that showed agreement rates in product selection using both models. Also, we presented a motivational study that was developed to select prebiotic drinks based on cashew nuts added to grape juice. In this application, the beta regression mixed model results corroborated with the selected formulations using the proportional mixed model. |
1406.2244 | Jae Kyoung Kim | Jae Kyoung Kim, Kre\v{s}imir Josi\'c, and Matthew R. Bennett | The validity of quasi steady-state approximations in discrete stochastic
simulations | 21 pages, 4 figures | null | 10.1016/j.bpj.2014.06.012 | null | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In biochemical networks, reactions often occur on disparate timescales and
can be characterized as either "fast" or "slow." The quasi-steady state
approximation (QSSA) utilizes timescale separation to project models of
biochemical networks onto lower-dimensional slow manifolds. As a result, fast
elementary reactions are not modeled explicitly, and their effect is captured
by non-elementary reaction rate functions (e.g. Hill functions). The accuracy
of the QSSA applied to deterministic systems depends on how well timescales are
separated. Recently, it has been proposed to use the non-elementary rate
functions obtained via the deterministic QSSA to define propensity functions in
stochastic simulations of biochemical networks. In this approach, termed the
stochastic QSSA, fast reactions that are part of non-elementary reactions are
not simulated, greatly reducing computation time. However, it is unclear when
the stochastic QSSA provides an accurate approximation of the original
stochastic simulation. We show that, unlike the deterministic QSSA, the
validity of the stochastic QSSA does not follow from timescale separation
alone, but also depends on the sensitivity of the non-elementary reaction rate
functions to changes in the slow species. The stochastic QSSA becomes more
accurate when this sensitivity is small. Different types of QSSAs result in
non-elementary functions with different sensitivities, and the total QSSA
results in less sensitive functions than the standard or the pre-factor QSSA.
We prove that, as a result, the stochastic QSSA becomes more accurate when
non-elementary reaction functions are obtained using the total QSSA. Our work
provides a novel condition for the validity of the QSSA in stochastic
simulations of biochemical reaction networks with disparate timescales.
| [
{
"created": "Mon, 9 Jun 2014 17:16:12 GMT",
"version": "v1"
}
] | 2015-06-19 | [
[
"Kim",
"Jae Kyoung",
""
],
[
"Josić",
"Krešimir",
""
],
[
"Bennett",
"Matthew R.",
""
]
] | In biochemical networks, reactions often occur on disparate timescales and can be characterized as either "fast" or "slow." The quasi-steady state approximation (QSSA) utilizes timescale separation to project models of biochemical networks onto lower-dimensional slow manifolds. As a result, fast elementary reactions are not modeled explicitly, and their effect is captured by non-elementary reaction rate functions (e.g. Hill functions). The accuracy of the QSSA applied to deterministic systems depends on how well timescales are separated. Recently, it has been proposed to use the non-elementary rate functions obtained via the deterministic QSSA to define propensity functions in stochastic simulations of biochemical networks. In this approach, termed the stochastic QSSA, fast reactions that are part of non-elementary reactions are not simulated, greatly reducing computation time. However, it is unclear when the stochastic QSSA provides an accurate approximation of the original stochastic simulation. We show that, unlike the deterministic QSSA, the validity of the stochastic QSSA does not follow from timescale separation alone, but also depends on the sensitivity of the non-elementary reaction rate functions to changes in the slow species. The stochastic QSSA becomes more accurate when this sensitivity is small. Different types of QSSAs result in non-elementary functions with different sensitivities, and the total QSSA results in less sensitive functions than the standard or the pre-factor QSSA. We prove that, as a result, the stochastic QSSA becomes more accurate when non-elementary reaction functions are obtained using the total QSSA. Our work provides a novel condition for the validity of the QSSA in stochastic simulations of biochemical reaction networks with disparate timescales. |
2207.11182 | Youssef Kora | Salma Salhi, Youssef Kora, Gisu Ham, Hadi Zadeh Haghighi, and
Christoph Simon | Network analysis of the human structural connectome including the
brainstem | 25 pages, 7 figures | null | 10.1371/journal.pone.0272688 | null | q-bio.NC cond-mat.dis-nn physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The underlying anatomical structure is fundamental to the study of brain
networks, but the role of the brainstem from a structural perspective is not very
well understood. We conduct a computational and graph-theoretical study of the
human structural connectome incorporating a variety of subcortical structures
including the brainstem. Our computational scheme involves the use of Python
DIPY and Nibabel libraries to develop structural connectomes using 100 healthy
adult subjects. We then compute degree, eigenvector, and betweenness
centralities to identify several highly connected structures and find that the
brainstem ranks highest across all examined metrics, a result that holds even
when the connectivity matrix is normalized by volume. We also investigated some
global topological features in the connectomes, such as the balance of
integration and segregation, and found that the domination of the brainstem
generally causes networks to become less integrated and segregated. Our results
highlight the importance of including the brainstem in structural network
analyses.
| [
{
"created": "Fri, 22 Jul 2022 16:43:08 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Jan 2023 03:28:57 GMT",
"version": "v2"
}
] | 2023-04-26 | [
[
"Salhi",
"Salma",
""
],
[
"Kora",
"Youssef",
""
],
[
"Ham",
"Gisu",
""
],
[
"Haghighi",
"Hadi Zadeh",
""
],
[
"Simon",
"Christoph",
""
]
] | The underlying anatomical structure is fundamental to the study of brain networks, but the role of the brainstem from a structural perspective is not very well understood. We conduct a computational and graph-theoretical study of the human structural connectome incorporating a variety of subcortical structures including the brainstem. Our computational scheme involves the use of Python DIPY and Nibabel libraries to develop structural connectomes using 100 healthy adult subjects. We then compute degree, eigenvector, and betweenness centralities to identify several highly connected structures and find that the brainstem ranks highest across all examined metrics, a result that holds even when the connectivity matrix is normalized by volume. We also investigated some global topological features in the connectomes, such as the balance of integration and segregation, and found that the domination of the brainstem generally causes networks to become less integrated and segregated. Our results highlight the importance of including the brainstem in structural network analyses. |
1712.05906 | Seung Ki Baek | Taekho You, Minji Kwon, Hang-Hyun Jo, Woo-Sung Jung, and Seung Ki Baek | Chaos and unpredictability in evolution of cooperation in continuous
time | 9 pages; 3 figures | null | 10.1103/PhysRevE.96.062310 | null | q-bio.PE nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cooperators benefit others by paying costs. Evolution of cooperation
crucially depends on the cost-benefit ratio of cooperation, denoted as $c$. In
this work, we investigate the infinitely repeated prisoner's dilemma for
various values of $c$ with four of the representative memory-one strategies,
i.e., unconditional cooperation, unconditional defection, tit-for-tat, and
win-stay-lose-shift. We consider replicator dynamics which deterministically
describes how the fraction of each strategy evolves over time in an
infinite-sized well-mixed population in the presence of implementation error
and mutation among the four strategies. Our finding is that this
three-dimensional continuous-time dynamics exhibits chaos through a bifurcation
sequence similar to that of a logistic map as $c$ varies. If mutation occurs
with rate $\mu \ll 1$, the position of the bifurcation sequence on the $c$ axis
is numerically found to scale as $\mu^{0.1}$, and such sensitivity to $\mu$
suggests that mutation may have non-perturbative effects on evolutionary paths.
It demonstrates how the microscopic randomness of the mutation process can be
amplified to macroscopic unpredictability by evolutionary dynamics.
| [
{
"created": "Sat, 16 Dec 2017 05:30:26 GMT",
"version": "v1"
}
] | 2017-12-19 | [
[
"You",
"Taekho",
""
],
[
"Kwon",
"Minji",
""
],
[
"Jo",
"Hang-Hyun",
""
],
[
"Jung",
"Woo-Sung",
""
],
[
"Baek",
"Seung Ki",
""
]
] | Cooperators benefit others by paying costs. Evolution of cooperation crucially depends on the cost-benefit ratio of cooperation, denoted as $c$. In this work, we investigate the infinitely repeated prisoner's dilemma for various values of $c$ with four of the representative memory-one strategies, i.e., unconditional cooperation, unconditional defection, tit-for-tat, and win-stay-lose-shift. We consider replicator dynamics which deterministically describes how the fraction of each strategy evolves over time in an infinite-sized well-mixed population in the presence of implementation error and mutation among the four strategies. Our finding is that this three-dimensional continuous-time dynamics exhibits chaos through a bifurcation sequence similar to that of a logistic map as $c$ varies. If mutation occurs with rate $\mu \ll 1$, the position of the bifurcation sequence on the $c$ axis is numerically found to scale as $\mu^{0.1}$, and such sensitivity to $\mu$ suggests that mutation may have non-perturbative effects on evolutionary paths. It demonstrates how the microscopic randomness of the mutation process can be amplified to macroscopic unpredictability by evolutionary dynamics. |
2310.13803 | Teresa Lo | Teresa W. Lo, Han James Choi, Dean Huang, Paul A. Wiggins | Noise robustness and metabolic load determine the principles of central
dogma regulation | Main: 10 pages, 6 figures; Supplemental Material: 27 pages, 12
figures | null | null | null | q-bio.MN physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | The processes of gene expression are inherently stochastic, even for
essential genes required for growth. How does the cell maximize fitness in
light of noise? To answer this question, we build a mathematical model to
explore the trade-off between metabolic load and growth robustness. The model
predicts novel principles of central dogma regulation: Optimal protein
expression levels for many genes are in vast overabundance. Essential genes are
transcribed above a lower limit of one message per cell cycle. Gene expression
is achieved by load balancing between transcription and translation. We present
evidence that each of these novel regulatory principles is observed. These
results reveal that robustness and metabolic load determine the global
regulatory principles that govern gene expression processes, and these
principles have broad implications for cellular function.
| [
{
"created": "Fri, 20 Oct 2023 20:30:52 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Jan 2024 23:13:36 GMT",
"version": "v2"
},
{
"created": "Thu, 23 May 2024 22:36:04 GMT",
"version": "v3"
},
{
"created": "Thu, 15 Aug 2024 17:50:59 GMT",
"version": "v4"
}
] | 2024-08-16 | [
[
"Lo",
"Teresa W.",
""
],
[
"Choi",
"Han James",
""
],
[
"Huang",
"Dean",
""
],
[
"Wiggins",
"Paul A.",
""
]
] | The processes of gene expression are inherently stochastic, even for essential genes required for growth. How does the cell maximize fitness in light of noise? To answer this question, we build a mathematical model to explore the trade-off between metabolic load and growth robustness. The model predicts novel principles of central dogma regulation: Optimal protein expression levels for many genes are in vast overabundance. Essential genes are transcribed above a lower limit of one message per cell cycle. Gene expression is achieved by load balancing between transcription and translation. We present evidence that each of these novel regulatory principles is observed. These results reveal that robustness and metabolic load determine the global regulatory principles that govern gene expression processes, and these principles have broad implications for cellular function. |
2312.12678 | Eric Rawls | Eric Rawls, Bryan Andrews, Kelvin Lim, Erich Kummerfeld | Causal Discovery for fMRI data: Challenges, Solutions, and a Case Study | null | null | null | null | q-bio.QM cs.AI cs.LG stat.ME stat.ML | http://creativecommons.org/licenses/by/4.0/ | Designing studies that apply causal discovery requires navigating many
researcher degrees of freedom. This complexity is exacerbated when the study
involves fMRI data. In this paper we (i) describe nine challenges that occur
when applying causal discovery to fMRI data, (ii) discuss the space of
decisions that need to be made, (iii) review how a recent case study made those
decisions, (iv) and identify existing gaps that could potentially be solved by
the development of new methods. Overall, causal discovery is a promising
approach for analyzing fMRI data, and multiple successful applications have
indicated that it is superior to traditional fMRI functional connectivity
methods, but current causal discovery methods for fMRI leave room for
improvement.
| [
{
"created": "Wed, 20 Dec 2023 00:33:26 GMT",
"version": "v1"
}
] | 2023-12-21 | [
[
"Rawls",
"Eric",
""
],
[
"Andrews",
"Bryan",
""
],
[
"Lim",
"Kelvin",
""
],
[
"Kummerfeld",
"Erich",
""
]
] | Designing studies that apply causal discovery requires navigating many researcher degrees of freedom. This complexity is exacerbated when the study involves fMRI data. In this paper we (i) describe nine challenges that occur when applying causal discovery to fMRI data, (ii) discuss the space of decisions that need to be made, (iii) review how a recent case study made those decisions, (iv) and identify existing gaps that could potentially be solved by the development of new methods. Overall, causal discovery is a promising approach for analyzing fMRI data, and multiple successful applications have indicated that it is superior to traditional fMRI functional connectivity methods, but current causal discovery methods for fMRI leave room for improvement. |
q-bio/0405022 | Erik van Nimwegen | Erik van Nimwegen | Scaling laws in the functional content of genomes: Fundamental constants
of evolution? | to appear in "Power Laws, Scale-free Networks and Genome Biology", E.
Koonin et al. (eds.), Landes Bioscience (2004) | null | null | null | q-bio.GN q-bio.MN | null | With the number of fully-sequenced genomes now well over a hundred it has
become possible to start investigating if there are any quantitative
regularities in the genetic make-up of genomes. In (physics/0307001), I
originally showed that the numbers of genes in different functional categories
scale as power laws in the total number of genes in the genome. In this chapter
I revisit these results with more recent data and go into considerably more
depth regarding the implications of these scaling laws for our understanding of
the regulatory design of cells. In addition, I further develop the evolutionary
model first proposed in (physics/0307001), which suggests that the exponents of
the observed scaling laws correspond to fundamental constants of the
evolutionary process. In particular, I put forward an hypothesis for the
approximately quadratic scaling of regulatory and signal transducing genes with
genome size.
| [
{
"created": "Wed, 26 May 2004 21:13:49 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"van Nimwegen",
"Erik",
""
]
] | With the number of fully-sequenced genomes now well over a hundred it has become possible to start investigating if there are any quantitative regularities in the genetic make-up of genomes. In (physics/0307001), I originally showed that the numbers of genes in different functional categories scale as power laws in the total number of genes in the genome. In this chapter I revisit these results with more recent data and go into considerably more depth regarding the implications of these scaling laws for our understanding of the regulatory design of cells. In addition, I further develop the evolutionary model first proposed in (physics/0307001), which suggests that the exponents of the observed scaling laws correspond to fundamental constants of the evolutionary process. In particular, I put forward an hypothesis for the approximately quadratic scaling of regulatory and signal transducing genes with genome size. |
1509.00411 | Ewelina Kubicz | E. Kubicz, B. Jasi\'nska, B. Zgardzi\'nska, T. Bednarski, P.
Bia{\l}as, E. Czerwi\'nski, A. Gajos, M. Gorgol, D. Kami\'nska, {\L}.
Kap{\l}on, A. Kochanowski, G. Korcyl, P. Kowalski, T. Kozik, W. Krzemie\'n,
S. Nied\'zwiecki, M. Pa{\l}ka, L. Raczy\'nski, Z. Rajfur, Z. Rudy, O. Rundel,
N.G. Sharma, M. Silarski, A. S{\l}omski, A. Strzelecki, A. Wieczorek, W.
Wi\'slicki, M. Zieli\'nski, P. Moskal | Studies of unicellular micro-organisms Saccharomyces cerevisiae by means
of Positron Annihilation Lifetime Spectroscopy | Nukleonika (2015) | Nukleonika 60(4) (2015) 749-753 | 10.1515/nuka-2015-0135 | null | q-bio.OT physics.ins-det | http://creativecommons.org/publicdomain/zero/1.0/ | Results of Positron Annihilation Lifetime Spectroscopy (PALS) and microscopic
studies on simple microorganisms (brewing yeasts) are presented. Lifetimes of
ortho-positronium (o-Ps) were found to change from 2.4 to 2.9 ns (longer
lived component) for lyophilised and aqueous yeasts, respectively. The
hygroscopicity of yeasts over time was also examined, allowing us to check how
water - the main component of the cell - affects PALS parameters; the lifetime
of o-Ps was found to change from 1.2 to 1.4 ns (shorter lived component) for the
dried yeasts. The time sufficient to hydrate the cells was found to be below 10
hours. In the presence of liquid water, an indication of reorganization of yeast
on the molecular scale was observed.
Microscopic images of the lyophilised, dried and wet yeasts with best
possible resolution were obtained using Inverted Microscopy (IM) and
Environmental Scanning Electron Microscopy (ESEM) methods. As a result, visible
changes to the surface of the cell membrane were observed in ESEM images.
| [
{
"created": "Sun, 30 Aug 2015 21:10:42 GMT",
"version": "v1"
}
] | 2016-02-17 | [
[
"Kubicz",
"E.",
""
],
[
"Jasińska",
"B.",
""
],
[
"Zgardzińska",
"B.",
""
],
[
"Bednarski",
"T.",
""
],
[
"Białas",
"P.",
""
],
[
"Czerwiński",
"E.",
""
],
[
"Gajos",
"A.",
""
],
[
"Gorgol",
"M.",
""
],
[
"Kamińska",
"D.",
""
],
[
"Kapłon",
"Ł.",
""
],
[
"Kochanowski",
"A.",
""
],
[
"Korcyl",
"G.",
""
],
[
"Kowalski",
"P.",
""
],
[
"Kozik",
"T.",
""
],
[
"Krzemień",
"W.",
""
],
[
"Niedźwiecki",
"S.",
""
],
[
"Pałka",
"M.",
""
],
[
"Raczyński",
"L.",
""
],
[
"Rajfur",
"Z.",
""
],
[
"Rudy",
"Z.",
""
],
[
"Rundel",
"O.",
""
],
[
"Sharma",
"N. G.",
""
],
[
"Silarski",
"M.",
""
],
[
"Słomski",
"A.",
""
],
[
"Strzelecki",
"A.",
""
],
[
"Wieczorek",
"A.",
""
],
[
"Wiślicki",
"W.",
""
],
[
"Zieliński",
"M.",
""
],
[
"Moskal",
"P.",
""
]
] | Results of Positron Annihilation Lifetime Spectroscopy (PALS) and microscopic studies on simple microorganisms (brewing yeasts) are presented. Lifetimes of ortho-positronium (o-Ps) were found to change from 2.4 to 2.9 ns (longer lived component) for lyophilised and aqueous yeasts, respectively. The hygroscopicity of yeasts over time was also examined, allowing us to check how water - the main component of the cell - affects PALS parameters; the lifetime of o-Ps was found to change from 1.2 to 1.4 ns (shorter lived component) for the dried yeasts. The time sufficient to hydrate the cells was found to be below 10 hours. In the presence of liquid water, an indication of reorganization of yeast on the molecular scale was observed. Microscopic images of the lyophilised, dried and wet yeasts with best possible resolution were obtained using Inverted Microscopy (IM) and Environmental Scanning Electron Microscopy (ESEM) methods. As a result, visible changes to the surface of the cell membrane were observed in ESEM images. |
2210.03587 | Jason Steffener | Jason Steffener, Joanne Nicholls, Dylan Franklin | The Role of Lifetime Exposures across Cognitive Domains in Barbados
Using Data From the SABE Study | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | This study characterized the effects of aging on individual cognitive domains
and how sex, job type, and years of education alter the age effect on older
adults from Barbados. This was an analysis of the cross-sectional data
collected as part of the SABE Study (Health, Well-being and Ageing) in 2006.
The loss of a single point in each of the individual cognitive domains assessed
using the mini-mental state exam served as dependent variables. Independent
variables included age, sex, years of education, job type, and the interactions
with age in a series of logistic regression analyses. The study aimed to
identify which factors altered the effect of age on cognitive performance and
which directly affected performance. Results demonstrated that the effect of
age differed across the cognitive domains. In addition, sex, education, and job
type all differentially affected cognitive performance in an additive,
formative manner. The most consistent finding was that high years of education
coupled with employment requiring mostly mental effort was the best combination
for maintaining high levels of cognitive performance in late life. The results
demonstrate that adverse age effects on cognitive performance may be minimized
or delayed through modifiable lifetime exposures in the people of Barbados.
| [
{
"created": "Fri, 7 Oct 2022 14:45:17 GMT",
"version": "v1"
}
] | 2022-10-10 | [
[
"Steffener",
"Jason",
""
],
[
"Nicholls",
"Joanne",
""
],
[
"Franklin",
"Dylan",
""
]
] | This study characterized the effects of aging on individual cognitive domains and how sex, job type, and years of education alter the age effect on older adults from Barbados. This was an analysis of the cross-sectional data collected as part of the SABE Study (Health, Well-being and Ageing) in 2006. The loss of a single point in each of the individual cognitive domains assessed using the mini-mental state exam served as dependent variables. Independent variables included age, sex, years of education, job type, and the interactions with age in a series of logistic regression analyses. The study aimed to identify which factors altered the effect of age on cognitive performance and which directly affected performance. Results demonstrated that the effect of age differed across the cognitive domains. In addition, sex, education, and job type all differentially affected cognitive performance in an additive, formative manner. The most consistent finding was that high years of education coupled with employment requiring mostly mental effort was the best combination for maintaining high levels of cognitive performance in late life. The results demonstrate that adverse age effects on cognitive performance may be minimized or delayed through modifiable lifetime exposures in the people of Barbados. |
1805.04107 | Claire N\'edellec | Claire N\'edellec, Robert Bossy, Estelle Chaix, Louise Del\'eger | Text-mining and ontologies: new approaches to knowledge discovery of
microbial diversity | 5 pages | Proceedings of the 4th International Microbial Diversity
Conference. pp. 221-227, ed. Marco Gobetti. Pub. Simtra. ISBN
978-88-943010-0-7, Bari, October 2017 | null | null | q-bio.QM cs.AI | http://creativecommons.org/licenses/by/4.0/ | Microbiology research has access to a very large amount of public information
on the habitats of microorganisms. Many areas of microbiology research use
this information, primarily in biodiversity studies. However, the habitat
information is expressed in unstructured natural language form, which hinders
its exploitation at large scale. It is very common for similar habitats to be
described by different terms, which makes them hard to compare automatically,
e.g. intestine and gut. The use of a common reference to standardize these
habitat descriptions, as claimed by Ivana et al. (2010), is a necessity. We
propose the ontology called OntoBiotope that we have been developing since
2010. The OntoBiotope ontology is in a formal machine-readable representation
that enables indexing of information as well as conceptualization and
reasoning.
| [
{
"created": "Thu, 10 May 2018 11:38:45 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Oct 2018 14:52:02 GMT",
"version": "v2"
}
] | 2018-11-01 | [
[
"Nédellec",
"Claire",
""
],
[
"Bossy",
"Robert",
""
],
[
"Chaix",
"Estelle",
""
],
[
"Deléger",
"Louise",
""
]
] | Microbiology research has access to a very large amount of public information on the habitats of microorganisms. Many areas of microbiology research use this information, primarily in biodiversity studies. However, the habitat information is expressed in unstructured natural language form, which hinders its exploitation at large scale. It is very common for similar habitats to be described by different terms, which makes them hard to compare automatically, e.g. intestine and gut. The use of a common reference to standardize these habitat descriptions, as claimed by Ivana et al. (2010), is a necessity. We propose the ontology called OntoBiotope that we have been developing since 2010. The OntoBiotope ontology is in a formal machine-readable representation that enables indexing of information as well as conceptualization and reasoning. |
1210.4500 | Benjamin Good | Benjamin H Good and Michael M Desai | The equivalence between weak and strong purifying selection | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weak purifying selection, acting on many linked mutations, may play a major
role in shaping patterns of molecular evolution in natural populations. Yet
efforts to infer these effects from DNA sequence data are limited by our
incomplete understanding of weak selection on local genomic scales. Here, we
demonstrate a natural symmetry between weak and strong selection, in which the
effects of many weakly selected mutations on patterns of molecular evolution
are equivalent to a smaller number of more strongly selected mutations. By
introducing a coarse-grained "effective selection coefficient," we derive an
explicit mapping between weakly selected populations and their strongly
selected counterparts, which allows us to make accurate and efficient
predictions across the full range of selection strengths. This suggests that an
effective selection coefficient and effective mutation rate --- not an
effective population size --- is the most accurate summary of the effects of
selection over locally linked regions. Moreover, this correspondence places
fundamental limits on our ability to resolve the effects of weak selection from
contemporary sequence data alone.
| [
{
"created": "Tue, 16 Oct 2012 17:16:42 GMT",
"version": "v1"
}
] | 2012-10-17 | [
[
"Good",
"Benjamin H",
""
],
[
"Desai",
"Michael M",
""
]
] | Weak purifying selection, acting on many linked mutations, may play a major role in shaping patterns of molecular evolution in natural populations. Yet efforts to infer these effects from DNA sequence data are limited by our incomplete understanding of weak selection on local genomic scales. Here, we demonstrate a natural symmetry between weak and strong selection, in which the effects of many weakly selected mutations on patterns of molecular evolution are equivalent to a smaller number of more strongly selected mutations. By introducing a coarse-grained "effective selection coefficient," we derive an explicit mapping between weakly selected populations and their strongly selected counterparts, which allows us to make accurate and efficient predictions across the full range of selection strengths. This suggests that an effective selection coefficient and effective mutation rate --- not an effective population size --- is the most accurate summary of the effects of selection over locally linked regions. Moreover, this correspondence places fundamental limits on our ability to resolve the effects of weak selection from contemporary sequence data alone. |
1702.04195 | Vadim Zotev | Vadim Zotev, Masaya Misaki, Raquel Phillips, Chung Ki Wong, Jerzy
Bodurka | Real-time fMRI neurofeedback of the mediodorsal and anterior thalamus
enhances correlation between thalamic BOLD activity and alpha EEG rhythm | 26 pages, 14 figures, to appear in Human Brain Mapping | Human Brain Mapping 39 (2018) 1024-1042 | 10.1002/hbm.23902 | null | q-bio.NC physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-time fMRI neurofeedback (rtfMRI-nf) with simultaneous EEG allows
volitional modulation of BOLD activity of target brain regions and
investigation of related electrophysiological activity. We applied this
approach to study correlations between thalamic BOLD activity and alpha EEG
rhythm. Healthy volunteers in the experimental group (EG, n=15) learned to
upregulate BOLD activity of the target region consisting of the mediodorsal
(MD) and anterior (AN) thalamic nuclei using the rtfMRI-nf during retrieval of
happy autobiographical memories. Healthy subjects in the control group (CG,
n=14) were provided with a sham feedback. The EG participants were able to
significantly increase BOLD activities of the MD and AN. Functional
connectivity between the MD and the inferior precuneus was significantly
enhanced during the rtfMRI-nf task. Average individual changes in the occipital
alpha EEG power significantly correlated with the average MD BOLD activity
levels for the EG. Temporal correlations between the occipital alpha EEG power
and BOLD activities of the MD and AN were significantly enhanced, during the
rtfMRI-nf task, for the EG compared to the CG. Temporal correlations with the
alpha power were also significantly enhanced for the posterior nodes of the
default mode network, including the precuneus/posterior cingulate, and for the
dorsal striatum. Our findings suggest that the temporal correlation between the
MD BOLD activity and posterior alpha EEG power is modulated by the interaction
between the MD and the inferior precuneus, reflected in their functional
connectivity. Our results demonstrate the potential of the rtfMRI-nf with
simultaneous EEG for non-invasive neuromodulation studies of human brain
function.
| [
{
"created": "Tue, 14 Feb 2017 13:25:23 GMT",
"version": "v1"
},
{
"created": "Sun, 19 Nov 2017 22:28:23 GMT",
"version": "v2"
}
] | 2018-01-15 | [
[
"Zotev",
"Vadim",
""
],
[
"Misaki",
"Masaya",
""
],
[
"Phillips",
"Raquel",
""
],
[
"Wong",
"Chung Ki",
""
],
[
"Bodurka",
"Jerzy",
""
]
] | Real-time fMRI neurofeedback (rtfMRI-nf) with simultaneous EEG allows volitional modulation of BOLD activity of target brain regions and investigation of related electrophysiological activity. We applied this approach to study correlations between thalamic BOLD activity and alpha EEG rhythm. Healthy volunteers in the experimental group (EG, n=15) learned to upregulate BOLD activity of the target region consisting of the mediodorsal (MD) and anterior (AN) thalamic nuclei using the rtfMRI-nf during retrieval of happy autobiographical memories. Healthy subjects in the control group (CG, n=14) were provided with a sham feedback. The EG participants were able to significantly increase BOLD activities of the MD and AN. Functional connectivity between the MD and the inferior precuneus was significantly enhanced during the rtfMRI-nf task. Average individual changes in the occipital alpha EEG power significantly correlated with the average MD BOLD activity levels for the EG. Temporal correlations between the occipital alpha EEG power and BOLD activities of the MD and AN were significantly enhanced, during the rtfMRI-nf task, for the EG compared to the CG. Temporal correlations with the alpha power were also significantly enhanced for the posterior nodes of the default mode network, including the precuneus/posterior cingulate, and for the dorsal striatum. Our findings suggest that the temporal correlation between the MD BOLD activity and posterior alpha EEG power is modulated by the interaction between the MD and the inferior precuneus, reflected in their functional connectivity. Our results demonstrate the potential of the rtfMRI-nf with simultaneous EEG for non-invasive neuromodulation studies of human brain function. |
0811.3163 | Stefan Klumpp | Stefan Klumpp and Terence Hwa | Stochasticity and traffic jams in the transcription of ribosomal RNA:
Intriguing role of termination and antitermination | includes Supporting Information | Proc. Natl. Acad. Sci. USA 105, 18159-18164 (2008) | 10.1073/pnas.0806084105 | null | q-bio.SC cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In fast growing bacteria, ribosomal RNA (rRNA) is required to be transcribed
at very high rates to sustain the high cellular demand on ribosome synthesis.
This results in dense traffic of RNA polymerases (RNAP). We developed a
stochastic model, integrating results of single-molecule and quantitative in
vivo studies of E. coli, to evaluate the quantitative effect of pausing,
termination, and antitermination on rRNA transcription. Our calculations reveal
that in dense RNAP traffic, spontaneous pausing of RNAP can lead to severe
"traffic jams", as manifested in the broad distribution of inter-RNAP distances
and can be a major factor limiting transcription and hence growth. Our results
suggest the suppression of these pauses by the ribosomal antitermination
complex to be essential at fast growth. Moreover, unsuppressed pausing by even
a few non-antiterminated RNAPs can already reduce transcription drastically
under dense traffic. However, the termination factor Rho can remove the
non-antiterminated RNAPs and restore fast transcription. The results thus
suggest an intriguing role by Rho to enhance rather than attenuate rRNA
transcription.
| [
{
"created": "Wed, 19 Nov 2008 17:49:52 GMT",
"version": "v1"
}
] | 2008-11-24 | [
[
"Klumpp",
"Stefan",
""
],
[
"Hwa",
"Terence",
""
]
] | In fast growing bacteria, ribosomal RNA (rRNA) is required to be transcribed at very high rates to sustain the high cellular demand on ribosome synthesis. This results in dense traffic of RNA polymerases (RNAP). We developed a stochastic model, integrating results of single-molecule and quantitative in vivo studies of E. coli, to evaluate the quantitative effect of pausing, termination, and antitermination on rRNA transcription. Our calculations reveal that in dense RNAP traffic, spontaneous pausing of RNAP can lead to severe "traffic jams", as manifested in the broad distribution of inter-RNAP distances and can be a major factor limiting transcription and hence growth. Our results suggest the suppression of these pauses by the ribosomal antitermination complex to be essential at fast growth. Moreover, unsuppressed pausing by even a few non-antiterminated RNAPs can already reduce transcription drastically under dense traffic. However, the termination factor Rho can remove the non-antiterminated RNAPs and restore fast transcription. The results thus suggest an intriguing role by Rho to enhance rather than attenuate rRNA transcription. |
2403.15523 | Jordy Thielen | H. A. Scheppink, S. Ahmadi, P. Desain, M. Tangermann, J. Thielen | Towards auditory attention decoding with noise-tagging: A pilot study | 6 pages, 2 figures, 9th Graz Brain-Computer Interface Conference 2024 | null | null | null | q-bio.NC cs.AI cs.LG cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Auditory attention decoding (AAD) aims to extract from brain activity the
attended speaker amidst candidate speakers, offering promising applications for
neuro-steered hearing devices and brain-computer interfacing. This pilot study
makes a first step towards AAD using the noise-tagging stimulus protocol, which
evokes reliable code-modulated evoked potentials, but is minimally explored in
the auditory modality. Participants were sequentially presented with two Dutch
speech stimuli that were amplitude-modulated with a unique binary pseudo-random
noise-code, effectively tagging these with additional decodable information. We
compared the decoding of unmodulated audio against audio modulated with various
modulation depths, and a conventional AAD method against a standard method to
decode noise-codes. Our pilot study revealed higher performances for the
conventional method with 70 to 100 percent modulation depths compared to
unmodulated audio. The noise-code decoder did not further improve these
results. These fundamental insights highlight the potential of integrating
noise-codes in speech to enhance auditory speaker detection when multiple
speakers are presented simultaneously.
| [
{
"created": "Fri, 22 Mar 2024 13:35:34 GMT",
"version": "v1"
},
{
"created": "Fri, 17 May 2024 14:44:24 GMT",
"version": "v2"
}
] | 2024-05-20 | [
[
"Scheppink",
"H. A.",
""
],
[
"Ahmadi",
"S.",
""
],
[
"Desain",
"P.",
""
],
[
"Tangermann",
"M.",
""
],
[
"Thielen",
"J.",
""
]
] | Auditory attention decoding (AAD) aims to extract from brain activity the attended speaker amidst candidate speakers, offering promising applications for neuro-steered hearing devices and brain-computer interfacing. This pilot study makes a first step towards AAD using the noise-tagging stimulus protocol, which evokes reliable code-modulated evoked potentials, but is minimally explored in the auditory modality. Participants were sequentially presented with two Dutch speech stimuli that were amplitude-modulated with a unique binary pseudo-random noise-code, effectively tagging these with additional decodable information. We compared the decoding of unmodulated audio against audio modulated with various modulation depths, and a conventional AAD method against a standard method to decode noise-codes. Our pilot study revealed higher performances for the conventional method with 70 to 100 percent modulation depths compared to unmodulated audio. The noise-code decoder did not further improve these results. These fundamental insights highlight the potential of integrating noise-codes in speech to enhance auditory speaker detection when multiple speakers are presented simultaneously. |
1801.09858 | Xiaoxiao Wang | Xiaoxiao Wang, Xiao Liang, Zhoufan Jiang, Benedictor Alexander Nguchu,
Yawen Zhou, Yanming Wang, Huijuan Wang, Yu Li, Yuying Zhu, Feng Wu, Jia-Hong
Gao, Benching Qiu | Decoding and mapping task states of the human brain via deep learning | 27 pages, 8 figures, 4 table | null | 10.1002/hbm.24891 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Support vector machine (SVM) based multivariate pattern analysis (MVPA) has
delivered promising performance in decoding specific task states based on
functional magnetic resonance imaging (fMRI) of the human brain.
Conventionally, the SVM-MVPA requires careful feature selection/extraction
according to expert knowledge. In this study, we propose a deep neural network
(DNN) for directly decoding multiple brain task states from fMRI signals of the
brain without any manual feature handcrafting. We trained and tested the DNN
classifier using task fMRI data from the Human Connectome Project's S1200
dataset (N=1034). In tests to verify its performance, the proposed
classification method identified seven tasks with an average accuracy of 93.7%.
We also showed the general applicability of the DNN for transfer learning to
small datasets (N=43), a situation encountered in typical neuroscience
research. The proposed method achieved an average accuracy of 89.0% and 94.7%
on a working memory task and a motor classification task, respectively, higher
than the accuracy of 69.2% and 68.6% obtained by the SVM-MVPA. A network
visualization analysis showed that the DNN automatically detected features from
areas of the brain related to each task. Without incurring the burden of
handcrafting the features, the proposed deep decoding method can classify brain
task states highly accurately, and is a powerful tool for fMRI researchers.
| [
{
"created": "Tue, 30 Jan 2018 05:58:18 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Sep 2019 03:27:16 GMT",
"version": "v2"
},
{
"created": "Wed, 4 Dec 2019 06:40:15 GMT",
"version": "v3"
}
] | 2019-12-17 | [
[
"Wang",
"Xiaoxiao",
""
],
[
"Liang",
"Xiao",
""
],
[
"Jiang",
"Zhoufan",
""
],
[
"Nguchu",
"Benedictor Alexander",
""
],
[
"Zhou",
"Yawen",
""
],
[
"Wang",
"Yanming",
""
],
[
"Wang",
"Huijuan",
""
],
[
"Li",
"Yu",
""
],
[
"Zhu",
"Yuying",
""
],
[
"Wu",
"Feng",
""
],
[
"Gao",
"Jia-Hong",
""
],
[
"Qiu",
"Benching",
""
]
] | Support vector machine (SVM) based multivariate pattern analysis (MVPA) has delivered promising performance in decoding specific task states based on functional magnetic resonance imaging (fMRI) of the human brain. Conventionally, the SVM-MVPA requires careful feature selection/extraction according to expert knowledge. In this study, we propose a deep neural network (DNN) for directly decoding multiple brain task states from fMRI signals of the brain without any manual feature handcrafting. We trained and tested the DNN classifier using task fMRI data from the Human Connectome Project's S1200 dataset (N=1034). In tests to verify its performance, the proposed classification method identified seven tasks with an average accuracy of 93.7%. We also showed the general applicability of the DNN for transfer learning to small datasets (N=43), a situation encountered in typical neuroscience research. The proposed method achieved an average accuracy of 89.0% and 94.7% on a working memory task and a motor classification task, respectively, higher than the accuracy of 69.2% and 68.6% obtained by the SVM-MVPA. A network visualization analysis showed that the DNN automatically detected features from areas of the brain related to each task. Without incurring the burden of handcrafting the features, the proposed deep decoding method can classify brain task states highly accurately, and is a powerful tool for fMRI researchers. |
2104.08474 | Yibo Li | Yibo Li, Jianfeng Pei and Luhua Lai | Learning to design drug-like molecules in three-dimensional space using
deep generative models | null | Chem. Sci. (2021) | 10.1039/D1SC04444C | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, deep generative models for molecular graphs are gaining more and
more attention in the field of de novo drug design. A variety of models have
been developed to generate topological structures of drug-like molecules, but
explorations in generating three-dimensional structures are still limited.
Existing methods have either focused on low-molecular-weight compounds without
considering drug-likeness or generated 3D structures indirectly using atom
density maps. In this work, we introduce Ligand Neural Network (L-Net), a novel
graph generative model for designing drug-like molecules with high-quality 3D
structures. L-Net directly outputs the topological and 3D structure of
molecules (including hydrogen atoms), without the need for additional atom
placement or bond order inference algorithms. The architecture of L-Net is
specifically optimized for drug-like molecules, and a set of metrics is
assembled to comprehensively evaluate its performance. The results show that
L-Net is capable of generating chemically correct, conformationally valid, and
highly drug-like molecules. Finally, to demonstrate its potential in
structure-based molecular design, we combine L-Net with MCTS and test its
ability to generate potential inhibitors targeting ABL1 kinase.
| [
{
"created": "Sat, 17 Apr 2021 07:30:23 GMT",
"version": "v1"
}
] | 2021-09-16 | [
[
"Li",
"Yibo",
""
],
[
"Pei",
"Jianfeng",
""
],
[
"Lai",
"Luhua",
""
]
] | Recently, deep generative models for molecular graphs are gaining more and more attention in the field of de novo drug design. A variety of models have been developed to generate topological structures of drug-like molecules, but explorations in generating three-dimensional structures are still limited. Existing methods have either focused on low-molecular-weight compounds without considering drug-likeness or generated 3D structures indirectly using atom density maps. In this work, we introduce Ligand Neural Network (L-Net), a novel graph generative model for designing drug-like molecules with high-quality 3D structures. L-Net directly outputs the topological and 3D structure of molecules (including hydrogen atoms), without the need for additional atom placement or bond order inference algorithms. The architecture of L-Net is specifically optimized for drug-like molecules, and a set of metrics is assembled to comprehensively evaluate its performance. The results show that L-Net is capable of generating chemically correct, conformationally valid, and highly drug-like molecules. Finally, to demonstrate its potential in structure-based molecular design, we combine L-Net with MCTS and test its ability to generate potential inhibitors targeting ABL1 kinase. |
1703.10524 | Dmitry Ravcheev PhD | Dmitry A. Ravcheev, Ines Thiele | Comparative genomic analysis of the human gut microbiome reveals a broad
distribution of metabolic pathways for the degradation of host-synthetized
mucin glycans | 28 pages, 5 figures | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The colonic mucus layer is a dynamic and complex structure formed by secreted
and transmembrane mucins, which are high-molecular-weight and heavily
glycosylated proteins. Colonic mucus consists of a loose outer layer and a
dense epithelium-attached layer. The outer layer is inhabited by various
representatives of the human gut microbiota (HGM). Glycans of the colonic mucus
can be used by the HGM as a source of carbon and energy when dietary fibers are
not sufficiently available. Here, we analyzed 397 individual HGM genomes to
identify pathways for the cleavage of host-synthetized mucin glycans to
monosaccharides as well as for the catabolism of the derived monosaccharides.
Our key results are as follows: (i) Genes for the cleavage of mucin glycans
were found in 86% of the analyzed genomes, whereas genes for the catabolism of
derived monosaccharides were found in 89% of the analyzed genomes. (ii)
Comparative genomic analysis identified four alternative forms of the
monosaccharide-catabolizing enzymes and four alternative forms of
monosaccharide transporters. (iii) Eighty-five percent of the analyzed genomes
may be involved in exchange pathways for the monosaccharides derived from
cleaved mucin glycans. (iv) The analyzed genomes demonstrated different
abilities to degrade known mucin glycans. Generally, the ability to degrade at
least one type of mucin glycan was predicted for 81% of the analyzed genomes.
(v) Eighty-two percent of the analyzed genomes can form mutualistic pairs that
are able to degrade mucin glycans and are not degradable by any of the paired
organisms alone. Taken together, these findings provide further insight into
the inter-microbial communications of the HGM as well as into host-HGM
interactions.
| [
{
"created": "Thu, 30 Mar 2017 15:25:13 GMT",
"version": "v1"
}
] | 2017-03-31 | [
[
"Ravcheev",
"Dmitry A.",
""
],
[
"Thiele",
"Ines",
""
]
] | The colonic mucus layer is a dynamic and complex structure formed by secreted and transmembrane mucins, which are high-molecular-weight and heavily glycosylated proteins. Colonic mucus consists of a loose outer layer and a dense epithelium-attached layer. The outer layer is inhabited by various representatives of the human gut microbiota (HGM). Glycans of the colonic mucus can be used by the HGM as a source of carbon and energy when dietary fibers are not sufficiently available. Here, we analyzed 397 individual HGM genomes to identify pathways for the cleavage of host-synthetized mucin glycans to monosaccharides as well as for the catabolism of the derived monosaccharides. Our key results are as follows: (i) Genes for the cleavage of mucin glycans were found in 86% of the analyzed genomes, whereas genes for the catabolism of derived monosaccharides were found in 89% of the analyzed genomes. (ii) Comparative genomic analysis identified four alternative forms of the monosaccharide-catabolizing enzymes and four alternative forms of monosaccharide transporters. (iii) Eighty-five percent of the analyzed genomes may be involved in exchange pathways for the monosaccharides derived from cleaved mucin glycans. (iv) The analyzed genomes demonstrated different abilities to degrade known mucin glycans. Generally, the ability to degrade at least one type of mucin glycan was predicted for 81% of the analyzed genomes. (v) Eighty-two percent of the analyzed genomes can form mutualistic pairs that are able to degrade mucin glycans and are not degradable by any of the paired organisms alone. Taken together, these findings provide further insight into the inter-microbial communications of the HGM as well as into host-HGM interactions. |
1201.0246 | Yu Wu | Yu Wu, Wenlian Lu, Wei Lin, Gareth Leng, Jianfeng Feng | Bifurcations of Emergent Bursting in a Neuronal Network | 7 figures, 1 table | null | 10.1371/journal.pone.0038402 | null | q-bio.QM math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Currently we routinely develop a complex neuronal network to explain observed
but often paradoxical phenomena based upon biological recordings. Here we
present a general approach to demonstrate how to mathematically tackle such a
complex neuronal network so that we can fully understand the underlying
mechanism. Using an oxytocin network developed earlier as an example, we show
how we can reduce a complex model with many variables to a tractable model with
two variables, while retaining all key qualitative features of the model. The
approach enables us to uncover how emergent synchronous bursting could arise
from a neuronal network which embodies all known biological features.
Surprisingly, the discovered mechanisms for bursting are similar to those found
in other systems reported in the literature, and illustrate a generic way to
exhibit emergent and multi-time scale spikes: at the membrane potential level
and the firing rate level.
| [
{
"created": "Sat, 31 Dec 2011 10:59:56 GMT",
"version": "v1"
}
] | 2015-06-03 | [
[
"Wu",
"Yu",
""
],
[
"Lu",
"Wenlian",
""
],
[
"Lin",
"Wei",
""
],
[
"Leng",
"Gareth",
""
],
[
"Feng",
"Jianfeng",
""
]
] | Currently we routinely develop a complex neuronal network to explain observed but often paradoxical phenomena based upon biological recordings. Here we present a general approach to demonstrate how to mathematically tackle such a complex neuronal network so that we can fully understand the underlying mechanism. Using an oxytocin network developed earlier as an example, we show how we can reduce a complex model with many variables to a tractable model with two variables, while retaining all key qualitative features of the model. The approach enables us to uncover how emergent synchronous bursting could arise from a neuronal network which embodies all known biological features. Surprisingly, the discovered mechanisms for bursting are similar to those found in other systems reported in the literature, and illustrate a generic way to exhibit emergent and multi-time scale spikes: at the membrane potential level and the firing rate level. |
1510.00658 | Khem Raj Ghusinga | Khem Raj Ghusinga, Abhyudai Singh | Optimal regulation of protein degradation to schedule cellular events
with precision | null | null | 10.1109/ACC.2016.7524951 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An important occurrence in many cellular contexts is the crossing of a
prescribed threshold by a regulatory protein. The timing of such events is
stochastic as a consequence of the innate randomness in gene expression. A
question of interest is to understand how gene expression is regulated to
achieve precision in event timing. To address this, we model event timing using
the first-passage time framework - a mathematical tool to analyze the time
when a stochastic process first crosses a specific threshold. The protein
evolution is described via a simple stochastic model of gene expression.
Moreover, we consider the feedback regulation of protein degradation to be a
possible noise control mechanism employed to achieve the precision. Exact
analytical formulas are developed for the distribution and moments of the
first-passage time. Using these expressions, we investigate the optimal
feedback strategy such that noise (coefficient of variation squared) in event
timing is minimized around a given fixed mean time. Our results show that the
minimum noise is achieved when the protein degradation rate is zero for all
protein levels. Lastly, the implications of this finding are discussed.
| [
{
"created": "Fri, 2 Oct 2015 17:39:49 GMT",
"version": "v1"
}
] | 2017-02-24 | [
[
"Ghusinga",
"Khem Raj",
""
],
[
"Singh",
"Abhyudai",
""
]
] | An important occurrence in many cellular contexts is the crossing of a prescribed threshold by a regulatory protein. The timing of such events is stochastic as a consequence of the innate randomness in gene expression. A question of interest is to understand how gene expression is regulated to achieve precision in event timing. To address this, we model event timing using the first-passage time framework - a mathematical tool to analyze the time when a stochastic process first crosses a specific threshold. The protein evolution is described via a simple stochastic model of gene expression. Moreover, we consider the feedback regulation of protein degradation to be a possible noise control mechanism employed to achieve the precision. Exact analytical formulas are developed for the distribution and moments of the first-passage time. Using these expressions, we investigate the optimal feedback strategy such that noise (coefficient of variation squared) in event timing is minimized around a given fixed mean time. Our results show that the minimum noise is achieved when the protein degradation rate is zero for all protein levels. Lastly, the implications of this finding are discussed. |
1203.2178 | Valery Kirzhner | Valery Kirzhner and Zeev Volkovich | Evaluation of the Genome Mixture Contents by Means of the Compositional
Spectra Method | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this research, we consider a mixture of genome fragments of a certain
bacteria set. The problem of mixture separation is studied under the assumption
that all the genomes present in the mixture are completely sequenced or are
close to those already sequenced. Such assumption is relevant, e.g., in regular
observations of ecological or biomedical objects, where the possible set of
microorganisms is known and it is only necessary to follow their
concentrations.
| [
{
"created": "Fri, 9 Mar 2012 20:57:14 GMT",
"version": "v1"
}
] | 2012-03-12 | [
[
"Kirzhner",
"Valery",
""
],
[
"Volkovich",
"Zeev",
""
]
] | In this research, we consider a mixture of genome fragments of a certain bacteria set. The problem of mixture separation is studied under the assumption that all the genomes present in the mixture are completely sequenced or are close to those already sequenced. Such an assumption is relevant, e.g., in regular observations of ecological or biomedical objects, where the possible set of microorganisms is known and it is only necessary to follow their concentrations. |
2402.04845 | Bowen Jing | Bowen Jing, Bonnie Berger, Tommi Jaakkola | AlphaFold Meets Flow Matching for Generating Protein Ensembles | null | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The biological functions of proteins often depend on dynamic structural
ensembles. In this work, we develop a flow-based generative modeling approach
for learning and sampling the conformational landscapes of proteins. We
repurpose highly accurate single-state predictors such as AlphaFold and ESMFold
and fine-tune them under a custom flow matching framework to obtain
sequence-conditioned generative models of protein structure called AlphaFlow and
ESMFlow. When trained and evaluated on the PDB, our method provides a superior
combination of precision and diversity compared to AlphaFold with MSA
subsampling. When further trained on ensembles from all-atom MD, our method
accurately captures conformational flexibility, positional distributions, and
higher-order ensemble observables for unseen proteins. Moreover, our method can
diversify a static PDB structure with faster wall-clock convergence to certain
equilibrium properties than replicate MD trajectories, demonstrating its
potential as a proxy for expensive physics-based simulations. Code is available
at https://github.com/bjing2016/alphaflow.
| [
{
"created": "Wed, 7 Feb 2024 13:44:47 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Jing",
"Bowen",
""
],
[
"Berger",
"Bonnie",
""
],
[
"Jaakkola",
"Tommi",
""
]
] | The biological functions of proteins often depend on dynamic structural ensembles. In this work, we develop a flow-based generative modeling approach for learning and sampling the conformational landscapes of proteins. We repurpose highly accurate single-state predictors such as AlphaFold and ESMFold and fine-tune them under a custom flow matching framework to obtain sequence-conditioned generative models of protein structure called AlphaFlow and ESMFlow. When trained and evaluated on the PDB, our method provides a superior combination of precision and diversity compared to AlphaFold with MSA subsampling. When further trained on ensembles from all-atom MD, our method accurately captures conformational flexibility, positional distributions, and higher-order ensemble observables for unseen proteins. Moreover, our method can diversify a static PDB structure with faster wall-clock convergence to certain equilibrium properties than replicate MD trajectories, demonstrating its potential as a proxy for expensive physics-based simulations. Code is available at https://github.com/bjing2016/alphaflow. |
1303.7439 | Erwan Bigan | Erwan Bigan, Jean-Marc Steyaert and St\'ephane Douady | Properties of Random Complex Chemical Reaction Networks and Their
Relevance to Biological Toy Models | null | null | null | Proceedings of Journ\'ees Ouvertes Biologie Informatique
Math\'ematiques (JOBIM), Toulouse, France, July 1-4, 2013 | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the properties of large random conservative chemical reaction
networks composed of elementary reactions endowed with either mass-action or
saturating kinetics, assigning kinetic parameters in a
thermodynamically-consistent manner. We find that such complex networks exhibit
qualitatively similar behavior when fed with external nutrient flux. The
nutrient is preferentially transformed into one specific chemical that is an
intrinsic property of the network. We propose a self-consistent proto-cell toy
model in which the preferentially synthesized chemical is a precursor for the
cell membrane, and show that such proto-cells can exhibit sustainable
homeostatic growth when fed with any nutrient diffusing through the membrane,
provided that nutrient is metabolized at a sufficient rate.
| [
{
"created": "Fri, 29 Mar 2013 17:08:05 GMT",
"version": "v1"
}
] | 2014-11-26 | [
[
"Bigan",
"Erwan",
""
],
[
"Steyaert",
"Jean-Marc",
""
],
[
"Douady",
"Stéphane",
""
]
] | We investigate the properties of large random conservative chemical reaction networks composed of elementary reactions endowed with either mass-action or saturating kinetics, assigning kinetic parameters in a thermodynamically-consistent manner. We find that such complex networks exhibit qualitatively similar behavior when fed with external nutrient flux. The nutrient is preferentially transformed into one specific chemical that is an intrinsic property of the network. We propose a self-consistent proto-cell toy model in which the preferentially synthesized chemical is a precursor for the cell membrane, and show that such proto-cells can exhibit sustainable homeostatic growth when fed with any nutrient diffusing through the membrane, provided that nutrient is metabolized at a sufficient rate. |
1502.06256 | Gregory Kucherov | Karel Brinda and Maciej Sykulski and Gregory Kucherov | Spaced seeds improve k-mer-based metagenomic classification | 23 pages | Bioinformatics (2015) 31 (22): 3584-3592 | 10.1093/bioinformatics/btv419 | null | q-bio.GN cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metagenomics is a powerful approach to study genetic content of environmental
samples that has been strongly promoted by NGS technologies. To cope with
massive data involved in modern metagenomic projects, recent tools [4, 39] rely
on the analysis of k-mers shared between the read to be classified and sampled
reference genomes. Within this general framework, we show in this work that
spaced seeds provide a significant improvement of classification accuracy as
opposed to traditional contiguous k-mers. We support this thesis through a
series a different computational experiments, including simulations of
large-scale metagenomic projects. Scripts and programs used in this study, as
well as supplementary material, are available from
http://github.com/gregorykucherov/spaced-seeds-for-metagenomics.
| [
{
"created": "Sun, 22 Feb 2015 18:30:58 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Mar 2015 18:25:54 GMT",
"version": "v2"
},
{
"created": "Thu, 9 Jul 2015 09:47:00 GMT",
"version": "v3"
}
] | 2016-03-17 | [
[
"Brinda",
"Karel",
""
],
[
"Sykulski",
"Maciej",
""
],
[
"Kucherov",
"Gregory",
""
]
] | Metagenomics is a powerful approach to study genetic content of environmental samples that has been strongly promoted by NGS technologies. To cope with massive data involved in modern metagenomic projects, recent tools [4, 39] rely on the analysis of k-mers shared between the read to be classified and sampled reference genomes. Within this general framework, we show in this work that spaced seeds provide a significant improvement of classification accuracy as opposed to traditional contiguous k-mers. We support this thesis through a series of different computational experiments, including simulations of large-scale metagenomic projects. Scripts and programs used in this study, as well as supplementary material, are available from http://github.com/gregorykucherov/spaced-seeds-for-metagenomics. |
2210.09092 | Moo K. Chung | Moo K. Chung, Soumya Das, Hernando Ombao | Dynamic Topological Data Analysis of Functional Human Brain Networks | In press in journal Foundations of Data Science | null | null | null | q-bio.NC nlin.CD | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Developing reliable methods to discriminate different transient brain states
that change over time is a key neuroscientific challenge in brain imaging
studies. Topological data analysis (TDA), a novel framework based on algebraic
topology, can handle such a challenge. However, existing TDA has been somewhat
limited to capturing the static summary of dynamically changing brain networks.
We propose a novel dynamic-TDA framework that builds persistent homology over a
time series of brain networks. We construct a Wasserstein distance based
inference procedure to discriminate between time series of networks. The method
is applied to the resting-state functional magnetic resonance images of the
human brain. We demonstrate that our proposed dynamic-TDA approach can distinctly
discriminate between the topological patterns of male and female brain
networks. MATLAB code for implementing this method is available at
https://github.com/laplcebeltrami/PH-STAT.
| [
{
"created": "Mon, 17 Oct 2022 13:36:00 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Oct 2023 10:25:58 GMT",
"version": "v2"
},
{
"created": "Sun, 10 Dec 2023 20:33:53 GMT",
"version": "v3"
},
{
"created": "Mon, 18 Dec 2023 09:33:53 GMT",
"version": "v4"
}
] | 2023-12-19 | [
[
"Chung",
"Moo K.",
""
],
[
"Das",
"Soumya",
""
],
[
"Ombao",
"Hernando",
""
]
] | Developing reliable methods to discriminate different transient brain states that change over time is a key neuroscientific challenge in brain imaging studies. Topological data analysis (TDA), a novel framework based on algebraic topology, can handle such a challenge. However, existing TDA has been somewhat limited to capturing the static summary of dynamically changing brain networks. We propose a novel dynamic-TDA framework that builds persistent homology over a time series of brain networks. We construct a Wasserstein distance based inference procedure to discriminate between time series of networks. The method is applied to the resting-state functional magnetic resonance images of the human brain. We demonstrate that our proposed dynamic-TDA approach can distinctly discriminate between the topological patterns of male and female brain networks. MATLAB code for implementing this method is available at https://github.com/laplcebeltrami/PH-STAT. |
1710.11296 | Yury Garcia | Yury E. Garc\'ia, Marcos A. Capistr\'an | Early pathogen replacement in a model of Influenza and Respiratory
Syncytial Virus with partial vaccination. A computational study | null | null | null | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we carry out a computational study using the spectral
decomposition of the fluctuations of a two-pathogen epidemic model around its
deterministic attractor, i.e., steady state or limit cycle, to examine the role
of partial vaccination and between-host pathogen interaction on early pathogen
replacement during seasonal epidemics of influenza and respiratory syncytial
virus.
| [
{
"created": "Tue, 31 Oct 2017 01:57:32 GMT",
"version": "v1"
}
] | 2017-11-01 | [
[
"García",
"Yury E.",
""
],
[
"Capistrán",
"Marcos A.",
""
]
] | In this paper, we carry out a computational study using the spectral decomposition of the fluctuations of a two-pathogen epidemic model around its deterministic attractor, i.e., steady state or limit cycle, to examine the role of partial vaccination and between-host pathogen interaction on early pathogen replacement during seasonal epidemics of influenza and respiratory syncytial virus. |
1307.4732 | Xiao-Jun Tian | Xiao-Jun Tian and Hang Zhang and Jianhua Xing | Coupled Reversible and Irreversible Bistable Switches Underlying
TGF-\beta-induced Epithelial to Mesenchymal Transition | 32 pages, 8 figures, accepted by Biophysical Journal | Biophys J 105, 1079 (2013) | 10.1016/j.bpj.2013.07.011 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epithelial to mesenchymal transition (EMT) plays important roles in embryonic
development, tissue regeneration and cancer metastasis. While several feedback
loops have been shown to regulate EMT, it remains elusive how they coordinately
modulate EMT response to TGF-\beta treatment. We construct a mathematical model
for the core regulatory network controlling TGF-\beta-induced EMT. Through
deterministic analyses and stochastic simulations, we show that EMT is a
sequential two-step program in which an epithelial cell first transits to partial
EMT and then to the mesenchymal state, depending on the strength and duration of
TGF-\beta stimulation. Mechanistically the system is governed by coupled
reversible and irreversible bistable switches. The SNAIL1/miR-34 double
negative feedback loop is responsible for the reversible switch and regulates
the initiation of EMT, while the ZEB/miR-200 feedback loop is accountable for
the irreversible switch and controls the establishment of the mesenchymal
state. Furthermore, an autocrine TGF-\beta/miR-200 feedback loop makes the
second switch irreversible, modulating the maintenance of EMT. Such coupled
bistable switches are robust to parameter variation and molecular noise. We
provide a mechanistic explanation of multiple experimental observations. The
model makes several explicit predictions on hysteretic dynamic behaviors,
system response to pulsed stimulation and various perturbations, which can be
straightforwardly tested.
| [
{
"created": "Wed, 17 Jul 2013 19:18:42 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Jul 2013 02:00:01 GMT",
"version": "v2"
},
{
"created": "Fri, 19 Jul 2013 01:12:49 GMT",
"version": "v3"
},
{
"created": "Mon, 22 Jul 2013 20:23:38 GMT",
"version": "v4"
},
{
"created": "Thu, 25 Jul 2013 15:25:32 GMT",
"version": "v5"
}
] | 2017-07-26 | [
[
"Tian",
"Xiao-Jun",
""
],
[
"Zhang",
"Hang",
""
],
[
"Xing",
"Jianhua",
""
]
] | Epithelial to mesenchymal transition (EMT) plays important roles in embryonic development, tissue regeneration and cancer metastasis. While several feedback loops have been shown to regulate EMT, it remains elusive how they coordinately modulate EMT response to TGF-\beta treatment. We construct a mathematical model for the core regulatory network controlling TGF-\beta-induced EMT. Through deterministic analyses and stochastic simulations, we show that EMT is a sequential two-step program in which an epithelial cell first transits to partial EMT and then to the mesenchymal state, depending on the strength and duration of TGF-\beta stimulation. Mechanistically the system is governed by coupled reversible and irreversible bistable switches. The SNAIL1/miR-34 double negative feedback loop is responsible for the reversible switch and regulates the initiation of EMT, while the ZEB/miR-200 feedback loop is accountable for the irreversible switch and controls the establishment of the mesenchymal state. Furthermore, an autocrine TGF-\beta/miR-200 feedback loop makes the second switch irreversible, modulating the maintenance of EMT. Such coupled bistable switches are robust to parameter variation and molecular noise. We provide a mechanistic explanation of multiple experimental observations. The model makes several explicit predictions on hysteretic dynamic behaviors, system response to pulsed stimulation and various perturbations, which can be straightforwardly tested. |
1905.00334 | Johannes M\"uller | Burkhard A. Hense, Matthew McIntosh, Johannes M\"uller, Martin
Schuster | Homogeneous and Heterogeneous Response of Quorum-Sensing Bacteria in an
Evolutionary Context | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To explain the stability of cooperation is a central task of evolutionary
theory. We investigate this question in the case of quorum sensing (QS)
bacteria, which regulate cooperative traits in response to population density.
Cooperation is modeled by the prisoner's dilemma, where individuals produce a
costly public good (PG) that equally benefits all members of a community
divided into multiple, distinct patches (multilevel selection). Cost and
benefit are non-linear functions of the PG production. The analysis of
evolutionary stability yields an optimization problem for the expression of PG
in dependence on the number of QS individuals within a colony. We find that the
optimal total PG production of the QS population mainly depends on the shape of
the benefit. A graded and a switch-like response is possible, in accordance
with earlier results. Interestingly, at the level of the individual cell, the
QS response is determined by the shape of the costs. All QS individuals respond
either homogeneously if costs are a convex function of the PG production rate,
or they respond heterogeneously with distinct ON/OFF responses if the costs are
concave. The latter finding is consistent with recent experimental findings,
and contradicts the usual interpretation of QS as a mechanism to establish a
uniform, synchronized response of a bacterial population.
| [
{
"created": "Wed, 1 May 2019 14:47:05 GMT",
"version": "v1"
}
] | 2019-05-02 | [
[
"Hense",
"Burkhard A.",
""
],
[
"McIntosh",
"Matthew",
""
],
[
"Müller",
"Johannes",
""
],
[
"Schuster",
"Martin",
""
]
] | To explain the stability of cooperation is a central task of evolutionary theory. We investigate this question in the case of quorum sensing (QS) bacteria, which regulate cooperative traits in response to population density. Cooperation is modeled by the prisoner's dilemma, where individuals produce a costly public good (PG) that equally benefits all members of a community divided into multiple, distinct patches (multilevel selection). Cost and benefit are non-linear functions of the PG production. The analysis of evolutionary stability yields an optimization problem for the expression of PG in dependence on the number of QS individuals within a colony. We find that the optimal total PG production of the QS population mainly depends on the shape of the benefit. A graded and a switch-like response is possible, in accordance with earlier results. Interestingly, at the level of the individual cell, the QS response is determined by the shape of the costs. All QS individuals respond either homogeneously if costs are a convex function of the PG production rate, or they respond heterogeneously with distinct ON/OFF responses if the costs are concave. The latter finding is consistent with recent experimental findings, and contradicts the usual interpretation of QS as a mechanism to establish a uniform, synchronized response of a bacterial population. |
q-bio/0309021 | V. Krishnan Ramanujan | R.V.Krishnan, H.Saitoh, H.Terada, V.E.Centonze and B.Herman | Development of a Multiphoton Fluorescence Lifetime Imaging Microscopy
(FLIM) system using a Streak Camera | null | Review of Scientific Instruments, Vol.74, 2714 (May 2003) | 10.1063/1.1569410 | null | q-bio.QM | null | We report the development and detailed calibration of a multiphoton
fluorescence lifetime imaging system (FLIM) using a streak camera. The present
system is versatile with high spatial (0.2 micron) and temporal (50 psec)
resolution and allows rapid data acquisition and reliable and reproducible
lifetime determinations. The system was calibrated with standard fluorescent
dyes and the lifetime values obtained were in very good agreement with values
reported in literature for these dyes. We also demonstrate the applicability of
the system to FLIM studies in cellular specimens including stained pollen
grains and fibroblast cells expressing green fluorescent protein. The lifetime
values obtained matched well with those reported earlier by other groups for
these same specimens. Potential applications of the present system include the
measurement of intracellular physiology and Fluorescence Resonance Energy
Transfer (FRET) imaging which are discussed in the context of live cell
imaging.
| [
{
"created": "Tue, 30 Sep 2003 14:15:15 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Krishnan",
"R. V.",
""
],
[
"Saitoh",
"H.",
""
],
[
"Terada",
"H.",
""
],
[
"Centonze",
"V. E.",
""
],
[
"Herman",
"B.",
""
]
] | We report the development and detailed calibration of a multiphoton fluorescence lifetime imaging system (FLIM) using a streak camera. The present system is versatile with high spatial (0.2 micron) and temporal (50 psec) resolution and allows rapid data acquisition and reliable and reproducible lifetime determinations. The system was calibrated with standard fluorescent dyes and the lifetime values obtained were in very good agreement with values reported in literature for these dyes. We also demonstrate the applicability of the system to FLIM studies in cellular specimens including stained pollen grains and fibroblast cells expressing green fluorescent protein. The lifetime values obtained matched well with those reported earlier by other groups for these same specimens. Potential applications of the present system include the measurement of intracellular physiology and Fluorescence Resonance Energy Transfer (FRET) imaging which are discussed in the context of live cell imaging. |
2403.07670 | Magnus Richardson | Magnus J E Richardson | Linear and non-linear integrate-and-fire neurons driven by synaptic shot
noise with reversal potentials | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The steady-state firing rate and firing-rate response of the leaky and
exponential integrate-and-fire models receiving synaptic shot noise with
excitatory and inhibitory reversal potentials is examined. For the particular
case where the underlying synaptic conductances are exponentially distributed,
it is shown that the master equation for a population of such model neurons can
be reduced from an integro-differential form to a more tractable set of three
differential equations. The system is nevertheless more challenging
analytically than for current-based synapses: where possible, analytical results
are provided, with an efficient numerical scheme and code provided for other
quantities. The increased tractability of the framework developed supports an
ongoing critical comparison between models in which synapses are treated with
and without reversal potentials, such as recently in the context of networks
with balanced excitatory and inhibitory conductances.
| [
{
"created": "Tue, 12 Mar 2024 14:02:15 GMT",
"version": "v1"
}
] | 2024-03-13 | [
[
"Richardson",
"Magnus J E",
""
]
] | The steady-state firing rate and firing-rate response of the leaky and exponential integrate-and-fire models receiving synaptic shot noise with excitatory and inhibitory reversal potentials is examined. For the particular case where the underlying synaptic conductances are exponentially distributed, it is shown that the master equation for a population of such model neurons can be reduced from an integro-differential form to a more tractable set of three differential equations. The system is nevertheless more challenging analytically than for current-based synapses: where possible, analytical results are provided, with an efficient numerical scheme and code provided for other quantities. The increased tractability of the framework developed supports an ongoing critical comparison between models in which synapses are treated with and without reversal potentials, such as recently in the context of networks with balanced excitatory and inhibitory conductances. |
2407.15908 | Kevin Mitchell | Kevin J. Mitchell and Nick Cheney | The Genomic Code: The genome instantiates a generative model of the
organism | 31 pages, 4 figures | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | How does the genome encode the form of the organism? What is the nature of
this genomic code? Common metaphors, such as a blueprint or program, fail to
capture the complex, indirect, and evolutionarily dynamic relationship between
the genome and organismal form, or the constructive, interactive processes that
produce it. Such metaphors are also not readily formalised, either to treat
empirical data or to simulate genomic encoding of form in silico. Here, we
propose a new analogy, inspired by recent work in machine learning and
neuroscience: that the genome encodes a generative model of the organism. In
this scheme, by analogy with variational autoencoders, the genome does not
encode either organismal form or developmental processes directly, but
comprises a compressed space of latent variables. These latent variables are
the DNA sequences that specify the biochemical properties of encoded proteins
and the relative affinities between trans-acting regulatory factors and their
target sequence elements. Collectively, these comprise a connectionist network,
with weights that get encoded by the learning algorithm of evolution and
decoded through the processes of development. The latent variables collectively
shape an energy landscape that constrains the self-organising processes of
development so as to reliably produce a new individual of a certain type,
providing a direct analogy to Waddington's famous epigenetic landscape. The
generative model analogy accounts for the complex, distributed genetic
architecture of most traits and the emergent robustness and evolvability of
developmental processes. It also provides a new way to explain the independent
selectability of specific traits, drawing on the idea of multiplexed
disentangled representations observed in artificial and neural systems and
lends itself to formalisation.
| [
{
"created": "Mon, 22 Jul 2024 16:41:25 GMT",
"version": "v1"
}
] | 2024-07-24 | [
[
"Mitchell",
"Kevin J.",
""
],
[
"Cheney",
"Nick",
""
]
] | How does the genome encode the form of the organism? What is the nature of this genomic code? Common metaphors, such as a blueprint or program, fail to capture the complex, indirect, and evolutionarily dynamic relationship between the genome and organismal form, or the constructive, interactive processes that produce it. Such metaphors are also not readily formalised, either to treat empirical data or to simulate genomic encoding of form in silico. Here, we propose a new analogy, inspired by recent work in machine learning and neuroscience: that the genome encodes a generative model of the organism. In this scheme, by analogy with variational autoencoders, the genome does not encode either organismal form or developmental processes directly, but comprises a compressed space of latent variables. These latent variables are the DNA sequences that specify the biochemical properties of encoded proteins and the relative affinities between trans-acting regulatory factors and their target sequence elements. Collectively, these comprise a connectionist network, with weights that get encoded by the learning algorithm of evolution and decoded through the processes of development. The latent variables collectively shape an energy landscape that constrains the self-organising processes of development so as to reliably produce a new individual of a certain type, providing a direct analogy to Waddington's famous epigenetic landscape. The generative model analogy accounts for the complex, distributed genetic architecture of most traits and the emergent robustness and evolvability of developmental processes. It also provides a new way to explain the independent selectability of specific traits, drawing on the idea of multiplexed disentangled representations observed in artificial and neural systems and lends itself to formalisation. |
2401.13690 | Mark J. Hadley Dr | Mark J. Hadley | A generic model of consciousness | null | Journal of Artificial Intelligence and Consciousness 10
(2):291--308 (2023) | 10.1142/S2705078523500030 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | This is a model of consciousness. The hard problem of consciousness, what it
feels like, is answered. The work builds on medical research analyzing the
source and mechanisms associated with our feelings. It goes further by
describing a generic model with wide applicability. The model is fully
consistent with medical pathways in humans, but easily extends to animals and
AI. The essence of the model is the interplay between associative memory and
physiology. The model is a clear and concrete counterexample to the famous
philosophical objections to a scientific explanation.
| [
{
"created": "Tue, 2 Jan 2024 11:41:28 GMT",
"version": "v1"
}
] | 2024-01-26 | [
[
"Hadley",
"Mark J.",
""
]
] | This is a model of consciousness. The hard problem of consciousness, what it feels like, is answered. The work builds on medical research analyzing the source and mechanisms associated with our feelings. It goes further by describing a generic model with wide applicability. The model is fully consistent with medical pathways in humans, but easily extends to animals and AI. The essence of the model is the interplay between associative memory and physiology. The model is a clear and concrete counterexample to the famous philosophical objections to a scientific explanation. |
0709.1916 | Andrea Cavagna | M. Ballerini, N. Cabibbo, R. Candelier, A. Cavagna, E. Cisbani, I.
Giardina, V. Lecomte, A. Orlandi, G. Parisi, A. Procaccini, M. Viale, V.
Zdravkovic | Interaction Ruling Animal Collective Behaviour Depends on Topological
rather than Metric Distance: Evidence from a Field Study | To be submitted to PNAS - 25 pages | PNAS, 105, 1232-1237 (2008) | 10.1073/pnas.0711437105 | null | q-bio.PE cond-mat.stat-mech | null | Numerical models indicate that collective animal behaviour may emerge from
simple local rules of interaction among the individuals. However, very little
is known about the nature of such interaction, so that models and theories
mostly rely on aprioristic assumptions. By reconstructing the three-dimensional
position of individual birds in airborne flocks of a few thousand members, we
prove that the interaction does not depend on the metric distance, as most
current models and theories assume, but rather on the topological distance. In
fact, we discover that each bird interacts on average with a fixed number of
neighbours (six-seven), rather than with all neighbours within a fixed metric
distance. We argue that a topological interaction is indispensable to maintain
flock's cohesion against the large density changes caused by external
perturbations, typically predation. We support this hypothesis by numerical
simulations, showing that a topological interaction grants significantly higher
cohesion of the aggregation compared to a standard metric one.
| [
{
"created": "Wed, 12 Sep 2007 14:53:17 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Ballerini",
"M.",
""
],
[
"Cabibbo",
"N.",
""
],
[
"Candelier",
"R.",
""
],
[
"Cavagna",
"A.",
""
],
[
"Cisbani",
"E.",
""
],
[
"Giardina",
"I.",
""
],
[
"Lecomte",
"V.",
""
],
[
"Orlandi",
"A.",
""
],
[
"Parisi",
"G.",
""
],
[
"Procaccini",
"A.",
""
],
[
"Viale",
"M.",
""
],
[
"Zdravkovic",
"V.",
""
]
] | Numerical models indicate that collective animal behaviour may emerge from simple local rules of interaction among the individuals. However, very little is known about the nature of such interaction, so that models and theories mostly rely on aprioristic assumptions. By reconstructing the three-dimensional position of individual birds in airborne flocks of a few thousand members, we prove that the interaction does not depend on the metric distance, as most current models and theories assume, but rather on the topological distance. In fact, we discover that each bird interacts on average with a fixed number of neighbours (six-seven), rather than with all neighbours within a fixed metric distance. We argue that a topological interaction is indispensable to maintain flock's cohesion against the large density changes caused by external perturbations, typically predation. We support this hypothesis by numerical simulations, showing that a topological interaction grants significantly higher cohesion of the aggregation compared to a standard metric one. |
1609.08297 | Namiko Mitarai | Cilie W. Feldager, Namiko Mitarai, and Hiroki Ohta | Deterministic extinction by mixing in cyclically competing species | 6 pages, 3 figures. More analysis added for even species case.
Accepted for publication in PRE | Phys. Rev. E 95, 032318 (2017) | 10.1103/PhysRevE.95.032318 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a cyclically competing species model on a ring with global mixing
at finite rate, which corresponds to the well-known Lotka-Volterra equation in
the limit of infinite mixing rate. Within a perturbation analysis of the model
from the infinite mixing rate, we provide analytical evidence that extinction
occurs deterministically at sufficiently large but finite values of the mixing
rate for any species number $N\ge3$. Further, by focusing on the cases of
rather small species numbers, we discuss numerical results concerning the
trajectories toward such deterministic extinction, including global
bifurcations caused by changing the mixing rate.
| [
{
"created": "Tue, 27 Sep 2016 07:51:55 GMT",
"version": "v1"
},
{
"created": "Sat, 4 Mar 2017 10:00:05 GMT",
"version": "v2"
}
] | 2017-03-28 | [
[
"Feldager",
"Cilie W.",
""
],
[
"Mitarai",
"Namiko",
""
],
[
"Ohta",
"Hiroki",
""
]
] | We consider a cyclically competing species model on a ring with global mixing at finite rate, which corresponds to the well-known Lotka-Volterra equation in the limit of infinite mixing rate. Within a perturbation analysis of the model from the infinite mixing rate, we provide analytical evidence that extinction occurs deterministically at sufficiently large but finite values of the mixing rate for any species number $N\ge3$. Further, by focusing on the cases of rather small species numbers, we discuss numerical results concerning the trajectories toward such deterministic extinction, including global bifurcations caused by changing the mixing rate. |
2203.05191 | Hamza Altakroury Dr. | Hamza Altakroury, Laurent Koessler, Radu Ranta, Janis Hofmanis, Sophie
Colnat Coulbois, Louis Maillard and Val\'erie Louis Dorr | Evaluation of Performance for Human In-Vivo Conductivity Estimation from
EEG and sEEG Recorded in Simultaneous with Intracerebral Electrical
Stimulation | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Assigning accurate conductivity values in human head models is an essential
factor for performing precise electroencephalographic (EEG) source localization
and targeting of transcranial electrical stimulation (TES). Unfortunately, the
literature reports diverging conductivity values of the different tissues in
the human head. The current study analyzes first the performance of in-vivo
conductivity estimation for different configurations concerning the
localization of the electrical source and measurement. Then, it presents
conductivity estimates for three epileptic patients using scalp EEG and
intracerebral stereotactic EEG (sEEG) acquired in simultaneous with
intracerebral electrical stimulation. The estimates of the conductivities were
based on finite-element models of the human head with five tissue compartments
of homogeneous and isotropic conductivities. The results of this study show
that in-vivo conductivity estimation can lead to different estimated
conductivities for the same patient when considering different stimulation
positions, different measurement positions or different measurement modalities
(sEEG or EEG). This work provides important guidelines to in-vivo conductivity
estimation and explains the variability among the conductivity values which
have been reported in previous studies.
| [
{
"created": "Thu, 10 Mar 2022 07:02:44 GMT",
"version": "v1"
}
] | 2022-03-11 | [
[
"Altakroury",
"Hamza",
""
],
[
"Koessler",
"Laurent",
""
],
[
"Ranta",
"Radu",
""
],
[
"Hofmanis",
"Janis",
""
],
[
"Coulbois",
"Sophie Colnat",
""
],
[
"Maillard",
"Louis",
""
],
[
"Dorr",
"Valérie Louis",
""
]
] | Assigning accurate conductivity values in human head models is an essential factor for performing precise electroencephalographic (EEG) source localization and targeting of transcranial electrical stimulation (TES). Unfortunately, the literature reports diverging conductivity values of the different tissues in the human head. The current study analyzes first the performance of in-vivo conductivity estimation for different configurations concerning the localization of the electrical source and measurement. Then, it presents conductivity estimates for three epileptic patients using scalp EEG and intracerebral stereotactic EEG (sEEG) acquired in simultaneous with intracerebral electrical stimulation. The estimates of the conductivities were based on finite-element models of the human head with five tissue compartments of homogeneous and isotropic conductivities. The results of this study show that in-vivo conductivity estimation can lead to different estimated conductivities for the same patient when considering different stimulation positions, different measurement positions or different measurement modalities (sEEG or EEG). This work provides important guidelines to in-vivo conductivity estimation and explains the variability among the conductivity values which have been reported in previous studies. |
1109.2057 | Dimitri Vvedensky | Joshua S. Gill, Mischa P. Woods, Carolyn M. Salafia, and Dimitri D.
Vvedensky | Probability distributions for measures of placental shape and morphology | 26 pages, 7 figures, 2 tables | null | null | null | q-bio.QM cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weight at delivery is a standard cumulative measure of placental growth. But
weight is a crude summary of other placental characteristics, such as the size
and shape of the chorionic plate and the location of the umbilical cord
insertion. Distributions of such measures across a cohort reveal information
about the developmental history of the chorionic plate that is unavailable from
an analysis based solely on the mean and standard deviation. Various measures
were determined from digitized images of chorionic plates obtained from the
Pregnancy, Infection, and Nutrition Study, a prospective cohort study of
preterm birth in central North Carolina between 2002 and 2004. The centroids
(the geometric centers) and umbilical cord insertions were taken directly from
the images. The chorionic plate outlines were obtained from an interpolation
based on a Fourier series, while eccentricity (of the best-fit ellipse),
skewness, and kurtosis were determined from a shape analysis using the method
of moments. The distribution of each variable was compared against the normal,
lognormal, and Levy distributions. We found only a single measure
(eccentricity) with a normal distribution. All other placental measures
required lognormal or "heavy-tailed" distributions to account for moderate to
extreme deviations from the mean, where relative likelihoods in the cohort far
exceeded those of a normal distribution. Normal and lognormal distributions
result from the accumulated effects of a large number of independent additive
(normal) or multiplicative (lognormal) events. Thus, while most placentas
appear to develop by a series of small, regular, and independent steps, the
presence of heavy-tailed distributions suggests that many show shape features
which are more consistent with a large number of correlated steps or fewer, but
substantially larger, independent steps.
| [
{
"created": "Fri, 9 Sep 2011 16:16:15 GMT",
"version": "v1"
}
] | 2011-09-12 | [
[
"Gill",
"Joshua S.",
""
],
[
"Woods",
"Mischa P.",
""
],
[
"Salafia",
"Carolyn M.",
""
],
[
"Vvedensky",
"Dimitri D.",
""
]
] | Weight at delivery is a standard cumulative measure of placental growth. But weight is a crude summary of other placental characteristics, such as the size and shape of the chorionic plate and the location of the umbilical cord insertion. Distributions of such measures across a cohort reveal information about the developmental history of the chorionic plate that is unavailable from an analysis based solely on the mean and standard deviation. Various measures were determined from digitized images of chorionic plates obtained from the Pregnancy, Infection, and Nutrition Study, a prospective cohort study of preterm birth in central North Carolina between 2002 and 2004. The centroids (the geometric centers) and umbilical cord insertions were taken directly from the images. The chorionic plate outlines were obtained from an interpolation based on a Fourier series, while eccentricity (of the best-fit ellipse), skewness, and kurtosis were determined from a shape analysis using the method of moments. The distribution of each variable was compared against the normal, lognormal, and Levy distributions. We found only a single measure (eccentricity) with a normal distribution. All other placental measures required lognormal or "heavy-tailed" distributions to account for moderate to extreme deviations from the mean, where relative likelihoods in the cohort far exceeded those of a normal distribution. Normal and lognormal distributions result from the accumulated effects of a large number of independent additive (normal) or multiplicative (lognormal) events. Thus, while most placentas appear to develop by a series of small, regular, and independent steps, the presence of heavy-tailed distributions suggests that many show shape features which are more consistent with a large number of correlated steps or fewer, but substantially larger, independent steps. |