id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0801.0844 | E. Ahmed | E. Ahmed and A.S.Hegazi | Survival May Not be for the Fittest (Lessons from some TV games) | none | null | null | null | q-bio.PE | null | In this paper we argue that biological fitness is a multi-objective concept
hence the statement "fittest" is inappropriate. The following statement is
proposed "Survival is mostly for those with non-dominated fitness". Also we use
some TV games to show that under the following conditions: i) There are no
dominant players. ii) At each time step successful players may eliminate some
of their less successful competitors, then the ultimate winner may not be the
fittest (but close).
| [
{
"created": "Sun, 6 Jan 2008 08:05:51 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Jan 2008 08:23:31 GMT",
"version": "v2"
}
] | 2008-01-10 | [
[
"Ahmed",
"E.",
""
],
[
"Hegazi",
"A. S.",
""
]
] | In this paper we argue that biological fitness is a multi-objective concept hence the statement "fittest" is inappropriate. The following statement is proposed "Survival is mostly for those with non-dominated fitness". Also we use some TV games to show that under the following conditions: i) There are no dominant players. ii) At each time step successful players may eliminate some of their less successful competitors, then the ultimate winner may not be the fittest (but close). |
1506.04080 | V. G. Gurzadyan | V.G. Gurzadyan, H. Yan, G. Vlahovic, A. Kashin, P. Killela, Z.
Reitman, S. Sargsyan, G. Yegorian, G. Milledge, B. Vlahovic | Detecting somatic mutations in genomic sequences by means of
Kolmogorov-Arnold analysis | To appear in Royal Society Open Science, 12 pages, 2 figures | Royal Society Open Science, 2, 150143, 2015 | 10.1098/rsos.150143 | null | q-bio.GN physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Kolmogorov-Arnold stochasticity parameter technique is applied for the
first time to the study of cancer genome sequencing, to reveal mutations. Using
data generated by next generation sequencing technologies, we have analyzed the
exome sequences of brain tumor patients with matched tumor and normal blood. We
show that mutations contained in sequencing data can be revealed using this
technique, thus providing a new methodology for determining subsequences of
given length that contain mutations, i.e. whose parameter value differs from
those of subsequences without mutations. A potential application for this
technique involves simplifying the procedure of finding segments with
mutations, speeding up genomic research, and accelerating its implementation
in clinical diagnostics. Moreover, the prediction of a mutation associated
with a family of frequent mutations in numerous types of cancers, based purely
on the value of the Kolmogorov function, indicates that this applied marker may recognize
genomic sequences that are in extremely low abundance and can be used in
revealing new types of mutations.
| [
{
"created": "Fri, 12 Jun 2015 17:31:40 GMT",
"version": "v1"
}
] | 2018-11-05 | [
[
"Gurzadyan",
"V. G.",
""
],
[
"Yan",
"H.",
""
],
[
"Vlahovic",
"G.",
""
],
[
"Kashin",
"A.",
""
],
[
"Killela",
"P.",
""
],
[
"Reitman",
"Z.",
""
],
[
"Sargsyan",
"S.",
""
],
[
"Yegorian",
"G.",
""
],
[
"Milledge",
"G.",
""
],
[
"Vlahovic",
"B.",
""
]
] | The Kolmogorov-Arnold stochasticity parameter technique is applied for the first time to the study of cancer genome sequencing, to reveal mutations. Using data generated by next generation sequencing technologies, we have analyzed the exome sequences of brain tumor patients with matched tumor and normal blood. We show that mutations contained in sequencing data can be revealed using this technique, thus providing a new methodology for determining subsequences of given length that contain mutations, i.e. whose parameter value differs from those of subsequences without mutations. A potential application for this technique involves simplifying the procedure of finding segments with mutations, speeding up genomic research, and accelerating its implementation in clinical diagnostics. Moreover, the prediction of a mutation associated with a family of frequent mutations in numerous types of cancers, based purely on the value of the Kolmogorov function, indicates that this applied marker may recognize genomic sequences that are in extremely low abundance and can be used in revealing new types of mutations. |
2101.05336 | Anastasiya Belyaeva | Anastasiya Belyaeva, Kaie Kubjas, Lawrence J. Sun, Caroline Uhler | Identifying 3D Genome Organization in Diploid Organisms via Euclidean
Distance Geometry | null | null | null | null | q-bio.GN math.MG math.OC | http://creativecommons.org/licenses/by/4.0/ | The spatial organization of the DNA in the cell nucleus plays an important
role for gene regulation, DNA replication, and genomic integrity. Through the
development of chromosome conformation capture experiments (such as 3C, 4C,
Hi-C) it is now possible to obtain the contact frequencies of the DNA at the
whole-genome level. In this paper, we study the problem of reconstructing the
3D organization of the genome from such whole-genome contact frequencies. A
standard approach is to transform the contact frequencies into noisy distance
measurements and then apply semidefinite programming (SDP) formulations to
obtain the 3D configuration. However, neglected in such reconstructions is the
fact that most eukaryotes including humans are diploid and therefore contain
two copies of each genomic locus. We prove that the 3D organization of the DNA
is not identifiable from distance measurements derived from contact frequencies
in diploid organisms. In fact, there are infinitely many solutions even in the
noise-free setting. We then discuss various additional biologically relevant
and experimentally measurable constraints (including distances between
neighboring genomic loci and higher-order interactions) and prove
identifiability under these conditions. Furthermore, we provide SDP
formulations for computing the 3D embedding of the DNA with these additional
constraints and show that we can recover the true 3D embedding with high
accuracy from both noiseless and noisy measurements. Finally, we apply our
algorithm to real pairwise and higher-order contact frequency data and show
that we can recover known genome organization patterns.
| [
{
"created": "Wed, 13 Jan 2021 20:26:49 GMT",
"version": "v1"
}
] | 2021-01-15 | [
[
"Belyaeva",
"Anastasiya",
""
],
[
"Kubjas",
"Kaie",
""
],
[
"Sun",
"Lawrence J.",
""
],
[
"Uhler",
"Caroline",
""
]
] | The spatial organization of the DNA in the cell nucleus plays an important role for gene regulation, DNA replication, and genomic integrity. Through the development of chromosome conformation capture experiments (such as 3C, 4C, Hi-C) it is now possible to obtain the contact frequencies of the DNA at the whole-genome level. In this paper, we study the problem of reconstructing the 3D organization of the genome from such whole-genome contact frequencies. A standard approach is to transform the contact frequencies into noisy distance measurements and then apply semidefinite programming (SDP) formulations to obtain the 3D configuration. However, neglected in such reconstructions is the fact that most eukaryotes including humans are diploid and therefore contain two copies of each genomic locus. We prove that the 3D organization of the DNA is not identifiable from distance measurements derived from contact frequencies in diploid organisms. In fact, there are infinitely many solutions even in the noise-free setting. We then discuss various additional biologically relevant and experimentally measurable constraints (including distances between neighboring genomic loci and higher-order interactions) and prove identifiability under these conditions. Furthermore, we provide SDP formulations for computing the 3D embedding of the DNA with these additional constraints and show that we can recover the true 3D embedding with high accuracy from both noiseless and noisy measurements. Finally, we apply our algorithm to real pairwise and higher-order contact frequency data and show that we can recover known genome organization patterns. |
1907.12718 | Victor Solovyev | Oleg Fokin (1), Anastasia Bakulina (1 and 2), Igor Seledtsov (1) and
Victor Solovyev (1) | ReadsClean: a new approach to error correction of sequencing reads based
on alignments clustering | 13 pages, 2 figures, 9 tables, Supplemental information | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Next generation methods of DNA sequencing produce a relatively high
rate of reading errors, which interfere with de novo genome assembly of newly
sequenced organisms and particularly affect the quality of SNP detection
important for diagnostics of many hereditary diseases. A number of programs
have been developed for correcting errors in NGS reads. Such programs utilize
various approaches and are optimized for different specific tasks, but all of
them are far from being able to correct all errors, especially in sequencing
reads that cross repeats and in DNA from di/polyploid eukaryotic genomes.
Results: This paper describes a novel method of error correction based on
clustering of alignments of similar reads. This method is implemented in the
ReadsClean program, which is designed for cleaning Illumina HiSeq sequencing
reads. We compared ReadsClean to other reads cleaning programs recognized to be
the best by several publications. Our sequence assembly tests using actual and
simulated sequencing reads show superior results achieved by ReadsClean.
Availability and implementation: ReadsClean is implemented as a standalone C
code. It is incorporated in an error correction pipeline and is freely
available to academic users at Softberry web server www.softberry.com.
| [
{
"created": "Tue, 30 Jul 2019 03:08:29 GMT",
"version": "v1"
}
] | 2019-07-31 | [
[
"Fokin",
"Oleg",
"",
"1 and 2"
],
[
"Bakulina",
"Anastasia",
"",
"1 and 2"
],
[
"Seledtsov",
"Igor",
""
],
[
"Solovyev",
"Victor",
""
]
] | Motivation: Next generation methods of DNA sequencing produce a relatively high rate of reading errors, which interfere with de novo genome assembly of newly sequenced organisms and particularly affect the quality of SNP detection important for diagnostics of many hereditary diseases. A number of programs have been developed for correcting errors in NGS reads. Such programs utilize various approaches and are optimized for different specific tasks, but all of them are far from being able to correct all errors, especially in sequencing reads that cross repeats and in DNA from di/polyploid eukaryotic genomes. Results: This paper describes a novel method of error correction based on clustering of alignments of similar reads. This method is implemented in the ReadsClean program, which is designed for cleaning Illumina HiSeq sequencing reads. We compared ReadsClean to other reads cleaning programs recognized to be the best by several publications. Our sequence assembly tests using actual and simulated sequencing reads show superior results achieved by ReadsClean. Availability and implementation: ReadsClean is implemented as a standalone C code. It is incorporated in an error correction pipeline and is freely available to academic users at the Softberry web server www.softberry.com. |
2308.04381 | Genji Kawakita | Genji Kawakita, Ariel Zeleznikow-Johnston, Naotsugu Tsuchiya, Masafumi
Oizumi | Gromov-Wasserstein unsupervised alignment reveals structural
correspondences between the color similarity structures of humans and large
language models | null | Sci Rep 14, 15917 (2024) | 10.1038/s41598-024-65604-1 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs), such as the General Pre-trained Transformer
(GPT), have shown remarkable performance in various cognitive tasks. However,
it remains unclear whether these models have the ability to accurately infer
human perceptual representations. Previous research has addressed this question
by quantifying correlations between similarity response patterns of humans and
LLMs. Correlation provides a measure of similarity, but it relies on
pre-defined item labels and does not distinguish category- and item-level
similarity,
falling short of characterizing detailed structural correspondence between
humans and LLMs. To assess their structural equivalence in more detail, we
propose the use of an unsupervised alignment method based on Gromov-Wasserstein
optimal transport (GWOT). GWOT allows for the comparison of similarity
structures without relying on pre-defined label correspondences and can reveal
fine-grained structural similarities and differences that may not be detected
by simple correlation analysis. Using a large dataset of similarity judgments
of 93 colors, we compared the color similarity structures of humans
(color-neurotypical and color-atypical participants) and two GPT models
(GPT-3.5 and GPT-4). Our results show that the similarity structure of
color-neurotypical participants can be remarkably well aligned with that of
GPT-4 and, to a lesser extent, with that of GPT-3.5. These results contribute to
the methodological advancements of comparing LLMs with human perception, and
highlight the potential of unsupervised alignment methods to reveal detailed
structural correspondences. This work has been published in Scientific Reports,
DOI: https://doi.org/10.1038/s41598-024-65604-1.
| [
{
"created": "Tue, 8 Aug 2023 16:32:41 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Jun 2024 10:41:08 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Aug 2024 17:11:25 GMT",
"version": "v3"
}
] | 2024-08-16 | [
[
"Kawakita",
"Genji",
""
],
[
"Zeleznikow-Johnston",
"Ariel",
""
],
[
"Tsuchiya",
"Naotsugu",
""
],
[
"Oizumi",
"Masafumi",
""
]
] | Large Language Models (LLMs), such as the General Pre-trained Transformer (GPT), have shown remarkable performance in various cognitive tasks. However, it remains unclear whether these models have the ability to accurately infer human perceptual representations. Previous research has addressed this question by quantifying correlations between similarity response patterns of humans and LLMs. Correlation provides a measure of similarity, but it relies on pre-defined item labels and does not distinguish category- and item-level similarity, falling short of characterizing detailed structural correspondence between humans and LLMs. To assess their structural equivalence in more detail, we propose the use of an unsupervised alignment method based on Gromov-Wasserstein optimal transport (GWOT). GWOT allows for the comparison of similarity structures without relying on pre-defined label correspondences and can reveal fine-grained structural similarities and differences that may not be detected by simple correlation analysis. Using a large dataset of similarity judgments of 93 colors, we compared the color similarity structures of humans (color-neurotypical and color-atypical participants) and two GPT models (GPT-3.5 and GPT-4). Our results show that the similarity structure of color-neurotypical participants can be remarkably well aligned with that of GPT-4 and, to a lesser extent, with that of GPT-3.5. These results contribute to the methodological advancements of comparing LLMs with human perception, and highlight the potential of unsupervised alignment methods to reveal detailed structural correspondences. This work has been published in Scientific Reports, DOI: https://doi.org/10.1038/s41598-024-65604-1. |
2012.00672 | Guojing Cong | David Bell, Giacomo Domeniconi, Chih-Chieh Yang, Ruhong Zhou, Leili
Zhang, Guojing Cong | Dynamics-based peptide-MHC binding optimization by a convolutional
variational autoencoder: a use-case model for CASTELO | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | An unsolved challenge in the development of antigen specific immunotherapies
is determining the optimal antigens to target. Comprehension of antigen-MHC
binding is paramount towards achieving this goal. Here, we present CASTELO, a
combined machine learning-molecular dynamics (ML-MD) approach to design novel
antigens of increased MHC binding affinity for a Type 1 diabetes
(T1D)-implicated system. We build upon a small molecule lead optimization
algorithm by training a convolutional variational autoencoder (CVAE) on MD
trajectories of 48 different systems across 4 antigens and 4 HLA serotypes. We
develop several new machine learning metrics including a structure-based anchor
residue classification model as well as cluster comparison scores. ML-MD
predictions agree well with experimental binding results and free energy
perturbation-predicted binding affinities. Moreover, ML-MD metrics are
independent of traditional MD stability metrics such as contact area and RMSF,
which do not reflect binding affinity data. Our work supports the role of
structure-based deep learning techniques in antigen specific immunotherapy
design.
| [
{
"created": "Sun, 29 Nov 2020 13:41:18 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Dec 2020 13:57:31 GMT",
"version": "v2"
}
] | 2020-12-09 | [
[
"Bell",
"David",
""
],
[
"Domeniconi",
"Giacomo",
""
],
[
"Yang",
"Chih-Chieh",
""
],
[
"Zhou",
"Ruhong",
""
],
[
"Zhang",
"Leili",
""
],
[
"Cong",
"Guojing",
""
]
] | An unsolved challenge in the development of antigen specific immunotherapies is determining the optimal antigens to target. Comprehension of antigen-MHC binding is paramount towards achieving this goal. Here, we present CASTELO, a combined machine learning-molecular dynamics (ML-MD) approach to design novel antigens of increased MHC binding affinity for a Type 1 diabetes (T1D)-implicated system. We build upon a small molecule lead optimization algorithm by training a convolutional variational autoencoder (CVAE) on MD trajectories of 48 different systems across 4 antigens and 4 HLA serotypes. We develop several new machine learning metrics including a structure-based anchor residue classification model as well as cluster comparison scores. ML-MD predictions agree well with experimental binding results and free energy perturbation-predicted binding affinities. Moreover, ML-MD metrics are independent of traditional MD stability metrics such as contact area and RMSF, which do not reflect binding affinity data. Our work supports the role of structure-based deep learning techniques in antigen specific immunotherapy design. |
1509.07990 | Raymond Goldstein | Philipp Khuc Trong, H\'el\`ene Doerflinger, J\"orn Dunkel, Daniel St.
Johnston, and Raymond E. Goldstein | Cortical microtubule nucleation can organise the cytoskeleton of
$Drosophila$ oocytes to define the anteroposterior axis | 54 pages, 12 figures, open access, see additional information online
at eLife | eLife 4, e06088 (2015) | 10.7554/eLife.06088 | null | q-bio.SC cond-mat.soft physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Many cells contain non-centrosomal arrays of microtubules (MT), but the
assembly, organisation and function of these arrays are poorly understood. We
present the first theoretical model for the non-centrosomal MT cytoskeleton in
$Drosophila$ oocytes, in which $bicoid$ and $oskar$ mRNAs become localised to
establish the anterior-posterior body axis. Constrained by experimental
measurements, the model shows that a simple gradient of cortical MT nucleation
is sufficient to reproduce the observed MT distribution, cytoplasmic flow
patterns and localisation of $oskar$ and naive $bicoid$ mRNAs. Our simulations
exclude a major role for cytoplasmic flows in localisation and reveal an
organisation of the MT cytoskeleton that is more ordered than previously
thought. Furthermore, modulating cortical MT nucleation induces a bifurcation
in cytoskeletal organisation that accounts for the phenotypes of polarity
mutants. Thus, our three-dimensional model explains many features of the MT
network and highlights the importance of differential cortical MT nucleation
for axis formation.
| [
{
"created": "Sat, 26 Sep 2015 15:07:23 GMT",
"version": "v1"
}
] | 2015-09-29 | [
[
"Trong",
"Philipp Khuc",
""
],
[
"Doerflinger",
"Hélène",
""
],
[
"Dunkel",
"Jörn",
""
],
[
"Johnston",
"Daniel St.",
""
],
[
"Goldstein",
"Raymond E.",
""
]
] | Many cells contain non-centrosomal arrays of microtubules (MT), but the assembly, organisation and function of these arrays are poorly understood. We present the first theoretical model for the non-centrosomal MT cytoskeleton in $Drosophila$ oocytes, in which $bicoid$ and $oskar$ mRNAs become localised to establish the anterior-posterior body axis. Constrained by experimental measurements, the model shows that a simple gradient of cortical MT nucleation is sufficient to reproduce the observed MT distribution, cytoplasmic flow patterns and localisation of $oskar$ and naive $bicoid$ mRNAs. Our simulations exclude a major role for cytoplasmic flows in localisation and reveal an organisation of the MT cytoskeleton that is more ordered than previously thought. Furthermore, modulating cortical MT nucleation induces a bifurcation in cytoskeletal organisation that accounts for the phenotypes of polarity mutants. Thus, our three-dimensional model explains many features of the MT network and highlights the importance of differential cortical MT nucleation for axis formation. |
q-bio/0510002 | Herbert Sauro Dr | Vijay Chickarmane, Ali Nadim, Animesh Ray and Herbert M. Sauro | A p53 Oscillator Model of DNA Break Repair Control | 31 pages, 8 figures | null | null | null | q-bio.MN | null | The transcription factor p53 is an important regulator of cell fate.
Mutations in the p53 gene are associated with many cancers. In response to signals
such as DNA damage, p53 controls the transcription of a series of genes that
cause cell cycle arrest during which DNA damage is repaired, or triggers
programmed cell death that eliminates possibly cancerous cells wherein DNA
damage might have remained unrepaired. Previous experiments showed oscillations
in p53 level in response to DNA damage, but the mechanism of oscillation
remained unclear. Here we examine a model where the concentrations of p53
isoforms are regulated by Mdm22, Arf, Siah, and beta-catenin. The extent of DNA
damage is signalled through the switch-like activity of a DNA damage sensor,
the DNA-dependent protein kinase Atm. This switch is responsible for initiating
and terminating oscillations in p53 concentration. The strength of the DNA
damage signal modulates the number of oscillatory waves of p53 and Mdm22 but
not the frequency or amplitude of oscillations, a result that recapitulates
experimental findings. A critical finding was that the phosphorylated form of
Nbs11, a member of the DNA break repair complex Mre11-Rad50-Nbs11 (MRN), must
augment the activity of Atm kinase. While there is in vitro support for this
assumption, this activity appears essential for p53 dynamics. The model
provides several predictions concerning, among others, degradation of the
phosphorylated form of p53, the rate of DNA repair as a function of DNA damage,
the sensitivity of p53 oscillation to transcription rates of SIAH, beta-CATENIN
and ARF, and the hysteretic behavior of active Atm kinase levels with respect
to the DNA damage signal.
| [
{
"created": "Sat, 1 Oct 2005 00:11:33 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Chickarmane",
"Vijay",
""
],
[
"Nadim",
"Ali",
""
],
[
"Ray",
"Animesh",
""
],
[
"Sauro",
"Herbert M.",
""
]
] | The transcription factor p53 is an important regulator of cell fate. Mutations in the p53 gene are associated with many cancers. In response to signals such as DNA damage, p53 controls the transcription of a series of genes that cause cell cycle arrest during which DNA damage is repaired, or triggers programmed cell death that eliminates possibly cancerous cells wherein DNA damage might have remained unrepaired. Previous experiments showed oscillations in p53 level in response to DNA damage, but the mechanism of oscillation remained unclear. Here we examine a model where the concentrations of p53 isoforms are regulated by Mdm22, Arf, Siah, and beta-catenin. The extent of DNA damage is signalled through the switch-like activity of a DNA damage sensor, the DNA-dependent protein kinase Atm. This switch is responsible for initiating and terminating oscillations in p53 concentration. The strength of the DNA damage signal modulates the number of oscillatory waves of p53 and Mdm22 but not the frequency or amplitude of oscillations, a result that recapitulates experimental findings. A critical finding was that the phosphorylated form of Nbs11, a member of the DNA break repair complex Mre11-Rad50-Nbs11 (MRN), must augment the activity of Atm kinase. While there is in vitro support for this assumption, this activity appears essential for p53 dynamics. The model provides several predictions concerning, among others, degradation of the phosphorylated form of p53, the rate of DNA repair as a function of DNA damage, the sensitivity of p53 oscillation to transcription rates of SIAH, beta-CATENIN and ARF, and the hysteretic behavior of active Atm kinase levels with respect to the DNA damage signal. |
2311.09236 | Peter Coppola PhD | Peter Coppola | A review of the sufficient conditions for consciousness | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | How subjective experience (i.e., consciousness) arises out of objective
material processes has been called the hard problem. The neuroscience of
consciousness has set out to find the sufficient conditions for consciousness
and theoretical and empirical endeavours have placed a particular focus on the
cortex and subcortex, whilst discounting the cerebellum. However, when looking
at neuroimaging research, it becomes clear there is substantial evidence that
cerebellar, cortical and subcortical functions are correlated with
consciousness. Neurostimulation evidence suggests that alterations in any part
of the brain may provoke alterations in experience, but the most extreme
changes are provoked via the subcortex. I then evaluate neuropsychological
evidence and find that abnormality in any part of the brain may provoke changes
in experience, but only damage to the oldest regions seems to completely obliterate
experience. Finally, I review congenital and experimental decorticate cases,
and find that behavioral evidence of experience is largely compatible with the
absence of the cortex. The evidence, taken together, indicates that the body,
subcortex and environment are sufficient for behaviours that suggest basic
experiences. I then emphasise both the importance of the individual's
developmental trajectory and the interdependencies between different neural
systems.
| [
{
"created": "Fri, 20 Oct 2023 15:50:41 GMT",
"version": "v1"
}
] | 2023-11-17 | [
[
"Coppola",
"Peter",
""
]
] | How subjective experience (i.e., consciousness) arises out of objective material processes has been called the hard problem. The neuroscience of consciousness has set out to find the sufficient conditions for consciousness and theoretical and empirical endeavours have placed a particular focus on the cortex and subcortex, whilst discounting the cerebellum. However, when looking at neuroimaging research, it becomes clear there is substantial evidence that cerebellar, cortical and subcortical functions are correlated with consciousness. Neurostimulation evidence suggests that alterations in any part of the brain may provoke alterations in experience, but the most extreme changes are provoked via the subcortex. I then evaluate neuropsychological evidence and find that abnormality in any part of the brain may provoke changes in experience, but only damage to the oldest regions seems to completely obliterate experience. Finally, I review congenital and experimental decorticate cases, and find that behavioral evidence of experience is largely compatible with the absence of the cortex. The evidence, taken together, indicates that the body, subcortex and environment are sufficient for behaviours that suggest basic experiences. I then emphasise both the importance of the individual's developmental trajectory and the interdependencies between different neural systems. |
q-bio/0506020 | Jayprokas Chakrabarti | Bibekanand Mallick, Jayprokas Chakrabarti, Satyabrata Sahoo, Zhumur
Ghosh and Smarajit Das | Identity Elements of Archaeal tRNA | null | DNA Research 12, 235--246 (2005) | 10.1093/dnares/dsi008 | null | q-bio.GN | null | Features unique to a transfer-RNA are recognized by the corresponding
tRNA-synthetase. Keeping this in view we isolate the discriminating features of
all archaeal tRNA. These are our identity elements. Further, we investigate
tRNA-characteristics that delineate the different orders of archaea.
| [
{
"created": "Thu, 16 Jun 2005 04:55:57 GMT",
"version": "v1"
},
{
"created": "Sat, 27 May 2006 13:57:40 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Mallick",
"Bibekanand",
""
],
[
"Chakrabarti",
"Jayprokas",
""
],
[
"Sahoo",
"Satyabrata",
""
],
[
"Ghosh",
"Zhumur",
""
],
[
"Das",
"Smarajit",
""
]
] | Features unique to a transfer-RNA are recognized by the corresponding tRNA-synthetase. Keeping this in view we isolate the discriminating features of all archaeal tRNA. These are our identity elements. Further, we investigate tRNA-characteristics that delineate the different orders of archaea. |
1108.0736 | Ilya M. Nemenman | Ilya Nemenman | Gain control in molecular information processing: Lessons from
neuroscience | 10 pages, 5 figures | Phys. Biol. 9 026003, 2012 | 10.1088/1478-3975/9/2/026003 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Statistical properties of environments experienced by biological signaling
systems in the real world change, which necessitates adaptive responses to
achieve high fidelity information transmission. One form of such adaptive
response is gain control. Here we argue that a certain simple mechanism of gain
control, understood well in the context of systems neuroscience, also works for
molecular signaling. The mechanism allows transmission of more than one bit (on or
off) of information about the signal independently of the signal variance. It
does not require additional molecular circuitry beyond that already present in
many molecular systems, and, in particular, it does not depend on the existence
of feedback loops. The mechanism provides a potential explanation for the
abundance of
ultrasensitive response curves in biological regulatory networks.
| [
{
"created": "Wed, 3 Aug 2011 04:35:52 GMT",
"version": "v1"
}
] | 2012-05-01 | [
[
"Nemenman",
"Ilya",
""
]
] | Statistical properties of environments experienced by biological signaling systems in the real world change, which necessitates adaptive responses to achieve high fidelity information transmission. One form of such adaptive response is gain control. Here we argue that a certain simple mechanism of gain control, understood well in the context of systems neuroscience, also works for molecular signaling. The mechanism allows transmission of more than one bit (on or off) of information about the signal independently of the signal variance. It does not require additional molecular circuitry beyond that already present in many molecular systems, and, in particular, it does not depend on the existence of feedback loops. The mechanism provides a potential explanation for the abundance of ultrasensitive response curves in biological regulatory networks. |
1901.04803 | Fulgensia Mbabazi | Fulgensia Kamugisha Mbabazi, Joseph Y.T. Mugisha, Mark Kimathi | Hopf-bifurcation analysis of pneumococcal pneumonia with time delays | 36 pages, 24 figures | null | null | null | q-bio.PE math.CA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a mathematical model of pneumococcal pneumonia with time
delays is proposed. The stability theory of delay differential equations is
used to analyze the model. The results show that the disease-free equilibrium
is asymptotically stable if the control reproduction ratio R0 is less than
unity and unstable otherwise. The stability of equilibria with delays shows
that the endemic equilibrium is locally stable without delays and remains stable if the delays satisfy certain conditions. The existence of a Hopf bifurcation is investigated and the transversality conditions are proved. The model results suggest that as the
respective delays exceed some critical value past the endemic equilibrium, the
system loses stability through the process of local birth or death of
oscillations. Further, a decrease or an increase in the delays leads to
asymptotic stability or instability of the endemic equilibrium respectively.
The analytical results are supported by numerical simulations.
Keywords: Time delay, Pneumococcal pneumonia, Vaccination, Stability,
Hopf-bifurcation
| [
{
"created": "Tue, 15 Jan 2019 13:07:43 GMT",
"version": "v1"
}
] | 2019-01-16 | [
[
"Mbabazi",
"Fulgensia Kamugisha",
""
],
[
"Mugisha",
"Joseph Y. T.",
""
],
[
"Kimathi",
"Mark",
""
]
] | In this paper, a mathematical model of pneumococcal pneumonia with time delays is proposed. The stability theory of delay differential equations is used to analyze the model. The results show that the disease-free equilibrium is asymptotically stable if the control reproduction ratio R0 is less than unity and unstable otherwise. The stability of equilibria with delays shows that the endemic equilibrium is locally stable without delays and remains stable if the delays satisfy certain conditions. The existence of a Hopf bifurcation is investigated and the transversality conditions are proved. The model results suggest that as the respective delays exceed some critical value past the endemic equilibrium, the system loses stability through the process of local birth or death of oscillations. Further, a decrease or an increase in the delays leads to asymptotic stability or instability of the endemic equilibrium respectively. The analytical results are supported by numerical simulations. Keywords: Time delay, Pneumococcal pneumonia, Vaccination, Stability, Hopf-bifurcation
1906.07546 | Qiongge Li | Qiongge Li, Gino Del Ferraro, Luca Pasquini, Kyung K. Peck, Hernan A.
Makse and Andrei I. Holodny | Core language brain network for fMRI-language task used in clinical
applications | 14 pages, 7 figures | null | null | null | q-bio.NC physics.bio-ph physics.med-ph physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Functional magnetic resonance imaging (fMRI) is widely used in clinical
applications to highlight brain areas involved in specific cognitive processes.
Brain impairments, such as tumors, suppress the fMRI activation of the
anatomical areas they invade and, thus, brain-damaged functional networks
present missing links/areas of activation. The identification of the missing
circuitry components is of crucial importance to estimate the damage extent.
The study of functional networks associated with clinical tasks but performed by
healthy individuals becomes, therefore, of paramount concern. These `healthy'
networks can, indeed, be used as control networks for clinical studies. In this
work we investigate the functional architecture of 20 healthy individuals
performing a language task designed for clinical purposes. We unveil a common
architecture persistent across all subjects under study, which involves Broca's
area, Wernicke's area, the Premotor area, and the pre-Supplementary motor area.
We study the connectivity weight of this circuitry by using the k-core
centrality measure and we find that three of these areas belong to the most
robust structure of the functional language network for the specific task under
study. Our results provide useful insight for clinical applications concerning the most important functional connections, which should therefore be preserved during brain surgery.
| [
{
"created": "Wed, 12 Jun 2019 19:15:15 GMT",
"version": "v1"
}
] | 2019-06-20 | [
[
"Li",
"Qiongge",
""
],
[
"Del Ferraro",
"Gino",
""
],
[
"Pasquini",
"Luca",
""
],
[
"Peck",
"Kyung K.",
""
],
[
"Makse",
"Hernan A.",
""
],
[
"Holodny",
"Andrei I.",
""
]
] | Functional magnetic resonance imaging (fMRI) is widely used in clinical applications to highlight brain areas involved in specific cognitive processes. Brain impairments, such as tumors, suppress the fMRI activation of the anatomical areas they invade and, thus, brain-damaged functional networks present missing links/areas of activation. The identification of the missing circuitry components is of crucial importance to estimate the damage extent. The study of functional networks associated with clinical tasks but performed by healthy individuals becomes, therefore, of paramount concern. These `healthy' networks can, indeed, be used as control networks for clinical studies. In this work we investigate the functional architecture of 20 healthy individuals performing a language task designed for clinical purposes. We unveil a common architecture persistent across all subjects under study, which involves Broca's area, Wernicke's area, the Premotor area, and the pre-Supplementary motor area. We study the connectivity weight of this circuitry by using the k-core centrality measure and we find that three of these areas belong to the most robust structure of the functional language network for the specific task under study. Our results provide useful insight for clinical applications concerning the most important functional connections, which should therefore be preserved during brain surgery.
1802.02378 | Francoise Schoentgen | Francoise Schoentgen and Slavica Jonic | PEBP1/RKIP: from multiple functions to a common role in cellular
processes | This is a review article | null | null | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | PEBPs (PhosphatidylEthanolamine Binding Proteins) form a protein family
widely present in the living world since they are encountered in
microorganisms, plants and animals. In all organisms PEBPs appear to regulate
important mechanisms that govern cell cycle, proliferation, differentiation and
motility. In humans, three PEBPs have been identified, namely PEBP1, PEBP2 and
PEBP4. PEBP1 and PEBP4 are the most studied as they are implicated in the
development of various cancers. PEBP2 is specific to the testes in mammals and was
essentially studied in rats and mice where it is very abundant. A lot of
information has been gained on PEBP1 also named RKIP (Raf Kinase Inhibitory
protein) due to its role as a metastasis suppressor in cancer. PEBP1 was also
demonstrated to be implicated in Alzheimer's disease, diabetes and
nephropathies. Furthermore, PEBP1 was described to be involved in many cellular
processes, among them are signal transduction, inflammation, cell cycle,
proliferation, adhesion, differentiation, apoptosis, autophagy, circadian
rhythm and mitotic spindle checkpoint. On the molecular level, PEBP1 was shown
to regulate several signaling pathways such as Raf/MEK/ERK, NFkB,
PI3K/Akt/mTOR, p38, Notch and Wnt. PEBP1 acts by inhibiting most of the kinases
of these signaling cascades. Moreover, PEBP1 is able to bind to a variety of
small ligands such as ATP, phospholipids, nucleotides, flavonoids or drugs.
Considering that PEBP1 is a small cytoplasmic protein (21 kDa), its involvement in so many diseases and cellular mechanisms is remarkable. The aim of this review is to
highlight the molecular systems that are common to all these cellular
mechanisms in order to decipher the specific role of PEBP1. Recent discoveries
enable us to propose that PEBP1 is a modulator of molecular interactions that
control signal transduction during membrane and cytoskeleton reorganization.
| [
{
"created": "Wed, 7 Feb 2018 10:39:48 GMT",
"version": "v1"
}
] | 2018-02-08 | [
[
"Schoentgen",
"Francoise",
""
],
[
"Jonic",
"Slavica",
""
]
] | PEBPs (PhosphatidylEthanolamine Binding Proteins) form a protein family widely present in the living world since they are encountered in microorganisms, plants and animals. In all organisms PEBPs appear to regulate important mechanisms that govern cell cycle, proliferation, differentiation and motility. In humans, three PEBPs have been identified, namely PEBP1, PEBP2 and PEBP4. PEBP1 and PEBP4 are the most studied as they are implicated in the development of various cancers. PEBP2 is specific to the testes in mammals and was essentially studied in rats and mice where it is very abundant. A lot of information has been gained on PEBP1 also named RKIP (Raf Kinase Inhibitory protein) due to its role as a metastasis suppressor in cancer. PEBP1 was also demonstrated to be implicated in Alzheimer's disease, diabetes and nephropathies. Furthermore, PEBP1 was described to be involved in many cellular processes, among them are signal transduction, inflammation, cell cycle, proliferation, adhesion, differentiation, apoptosis, autophagy, circadian rhythm and mitotic spindle checkpoint. On the molecular level, PEBP1 was shown to regulate several signaling pathways such as Raf/MEK/ERK, NFkB, PI3K/Akt/mTOR, p38, Notch and Wnt. PEBP1 acts by inhibiting most of the kinases of these signaling cascades. Moreover, PEBP1 is able to bind to a variety of small ligands such as ATP, phospholipids, nucleotides, flavonoids or drugs. Considering that PEBP1 is a small cytoplasmic protein (21 kDa), its involvement in so many diseases and cellular mechanisms is remarkable. The aim of this review is to highlight the molecular systems that are common to all these cellular mechanisms in order to decipher the specific role of PEBP1. Recent discoveries enable us to propose that PEBP1 is a modulator of molecular interactions that control signal transduction during membrane and cytoskeleton reorganization.
2007.06291 | Yoav Kolumbus | Yoav Kolumbus and Noam Nisan | On the Effectiveness of Tracking and Testing in SEIR Models | null | null | null | null | q-bio.PE cs.MA cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the effectiveness of tracking and testing in mitigating or
suppressing epidemic outbreaks, in combination with or as an alternative to
quarantines and global lockdowns. We study these intervention methods on a
network-based SEIR model, augmented with an additional probability to model
symptomatic, asymptomatic and pre-symptomatic cases. Our focus is on the basic
trade-offs between economic costs and human lives lost, and how these
trade-offs change under different lockdown, quarantine, tracking and testing
policies.
Our main findings are as follows: (i) Tests combined with patient quarantines
reduce both economic costs and mortality, but require a large-scale testing
capacity to achieve a significant improvement; (ii) Tracking significantly
reduces both economic costs and mortality; (iii) Tracking combined with a
limited number of tests can achieve containment without lockdowns; (iv) If
there is a small flow of new incoming infections, dynamic "On-Off" lockdowns
are more efficient than fixed lockdowns.
Our simulation results underline the extreme effectiveness of tracking and
testing policies in reducing both economic costs and mortality and their
potential to contain epidemic outbreaks without imposing social distancing
restrictions. This highlights the difficult social question of trading off these gains against the privacy loss that tracking necessarily entails.
| [
{
"created": "Mon, 13 Jul 2020 10:19:00 GMT",
"version": "v1"
}
] | 2020-07-15 | [
[
"Kolumbus",
"Yoav",
""
],
[
"Nisan",
"Noam",
""
]
] | We study the effectiveness of tracking and testing in mitigating or suppressing epidemic outbreaks, in combination with or as an alternative to quarantines and global lockdowns. We study these intervention methods on a network-based SEIR model, augmented with an additional probability to model symptomatic, asymptomatic and pre-symptomatic cases. Our focus is on the basic trade-offs between economic costs and human lives lost, and how these trade-offs change under different lockdown, quarantine, tracking and testing policies. Our main findings are as follows: (i) Tests combined with patient quarantines reduce both economic costs and mortality, but require a large-scale testing capacity to achieve a significant improvement; (ii) Tracking significantly reduces both economic costs and mortality; (iii) Tracking combined with a limited number of tests can achieve containment without lockdowns; (iv) If there is a small flow of new incoming infections, dynamic "On-Off" lockdowns are more efficient than fixed lockdowns. Our simulation results underline the extreme effectiveness of tracking and testing policies in reducing both economic costs and mortality and their potential to contain epidemic outbreaks without imposing social distancing restrictions. This highlights the difficult social question of trading off these gains against the privacy loss that tracking necessarily entails.
1604.07110 | Christopher Marriott | Chris Marriott and Jobran Chebib | Divergent Cumulative Cultural Evolution | 8 pages, ALIFE 2016 | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Divergent cumulative cultural evolution occurs when the cultural evolutionary
trajectory diverges from the biological evolutionary trajectory. We consider
the conditions under which divergent cumulative cultural evolution can occur.
We hypothesize that two conditions are necessary: first, that genetic and cultural information are stored separately in the agent; second, that cultural information is transferred horizontally between agents of different generations. We implement a model with these properties and show evidence of
divergent cultural evolution under both cooperative and competitive selection
pressures.
| [
{
"created": "Mon, 25 Apr 2016 02:24:57 GMT",
"version": "v1"
}
] | 2016-04-26 | [
[
"Marriott",
"Chris",
""
],
[
"Chebib",
"Jobran",
""
]
] | Divergent cumulative cultural evolution occurs when the cultural evolutionary trajectory diverges from the biological evolutionary trajectory. We consider the conditions under which divergent cumulative cultural evolution can occur. We hypothesize that two conditions are necessary: first, that genetic and cultural information are stored separately in the agent; second, that cultural information is transferred horizontally between agents of different generations. We implement a model with these properties and show evidence of divergent cultural evolution under both cooperative and competitive selection pressures.
2406.05738 | Thomas Le Menestrel | Thomas Le Menestrel, Manuel Rivas | Smiles2Dock: an open large-scale multi-task dataset for ML-based
molecular docking | null | null | null | null | q-bio.BM cs.LG stat.AP stat.CO | http://creativecommons.org/licenses/by/4.0/ | Docking is a crucial component in drug discovery aimed at predicting the
binding conformation and affinity between small molecules and target proteins.
ML-based docking has recently emerged as a prominent approach, outpacing
traditional methods like DOCK and AutoDock Vina in handling the growing scale
and complexity of molecular libraries. However, the availability of
comprehensive and user-friendly datasets for training and benchmarking ML-based
docking algorithms remains limited. We introduce Smiles2Dock, an open
large-scale multi-task dataset for molecular docking. We created a framework
combining P2Rank and AutoDock Vina to dock 1.7 million ligands from the ChEMBL
database against 15 AlphaFold proteins, giving us more than 25 million
protein-ligand binding scores. The dataset leverages a wide range of
high-accuracy AlphaFold protein models, encompasses a diverse set of
biologically relevant compounds and enables researchers to benchmark all major
approaches for ML-based docking such as Graph, Transformer and CNN-based
methods. We also introduce a novel Transformer-based architecture for docking
scores prediction and set it as an initial benchmark for our dataset. Our
dataset and code are publicly available to support the development of novel
ML-based methods for molecular docking to advance scientific research in this
field.
| [
{
"created": "Sun, 9 Jun 2024 11:13:03 GMT",
"version": "v1"
}
] | 2024-06-11 | [
[
"Menestrel",
"Thomas Le",
""
],
[
"Rivas",
"Manuel",
""
]
] | Docking is a crucial component in drug discovery aimed at predicting the binding conformation and affinity between small molecules and target proteins. ML-based docking has recently emerged as a prominent approach, outpacing traditional methods like DOCK and AutoDock Vina in handling the growing scale and complexity of molecular libraries. However, the availability of comprehensive and user-friendly datasets for training and benchmarking ML-based docking algorithms remains limited. We introduce Smiles2Dock, an open large-scale multi-task dataset for molecular docking. We created a framework combining P2Rank and AutoDock Vina to dock 1.7 million ligands from the ChEMBL database against 15 AlphaFold proteins, giving us more than 25 million protein-ligand binding scores. The dataset leverages a wide range of high-accuracy AlphaFold protein models, encompasses a diverse set of biologically relevant compounds and enables researchers to benchmark all major approaches for ML-based docking such as Graph, Transformer and CNN-based methods. We also introduce a novel Transformer-based architecture for docking scores prediction and set it as an initial benchmark for our dataset. Our dataset and code are publicly available to support the development of novel ML-based methods for molecular docking to advance scientific research in this field. |
1408.6187 | Jack Dekker | J. Dekker and J. Gilbert | Weedy adaptation in Setaria spp.: IX. Effects of salinity, temperature,
light and seed dormancy on Setaria faberi seed germination | 11 pages, 1 table | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Life in salty habitats is a function of tolerance to those chemicals at all
critical phases of a plant's life history. The ability to withstand salt as an
established plant may require different mechanisms and plant traits than those
needed to germinate in salty soils. Seeds establishing themselves in high salt
content may respond differently depending on the light conditions and seed
germinability at the time of salty water imbibition. S. faberi seed (and S.
viridis and S. pumila) plants were discovered thriving along the seacoasts of
Southern Japan. These plants possess the ability to after-ripen, germinate,
emerge and establish themselves, grow and reproduce in the salty soils and
salt-laden atmospheres present in these windy habitats. The objectives of this
paper are to determine the effect of salt (NaCl) in water imbibed by S. faberi
seed during after-ripening and germination, as well as temperature and light.
Observations made also provide insights on the possible relationship between
salt and drought tolerance. Seed germination of all phenotypes was inhibited by two percent or more NaCl. The effects of lesser amounts of NaCl on each of the three phenotypes were highly dependent on the specific temperature and light conditions. The three test phenotypes provided a good range to detect responses
to salinity, allowing the observation of both stimulatory and inhibitory
responses.
| [
{
"created": "Tue, 26 Aug 2014 17:30:07 GMT",
"version": "v1"
}
] | 2014-08-27 | [
[
"Dekker",
"J.",
""
],
[
"Gilbert",
"J.",
""
]
] | Life in salty habitats is a function of tolerance to those chemicals at all critical phases of a plant's life history. The ability to withstand salt as an established plant may require different mechanisms and plant traits than those needed to germinate in salty soils. Seeds establishing themselves in high salt content may respond differently depending on the light conditions and seed germinability at the time of salty water imbibition. S. faberi seed (and S. viridis and S. pumila) plants were discovered thriving along the seacoasts of Southern Japan. These plants possess the ability to after-ripen, germinate, emerge and establish themselves, grow and reproduce in the salty soils and salt-laden atmospheres present in these windy habitats. The objectives of this paper are to determine the effect of salt (NaCl) in water imbibed by S. faberi seed during after-ripening and germination, as well as temperature and light. Observations made also provide insights on the possible relationship between salt and drought tolerance. Seed germination of all phenotypes was inhibited by two percent or more NaCl. The effects of lesser amounts of NaCl on each of the three phenotypes were highly dependent on the specific temperature and light conditions. The three test phenotypes provided a good range to detect responses to salinity, allowing the observation of both stimulatory and inhibitory responses.
1807.07687 | Ursula Tooley | Ursula A. Tooley, Allyson P. Mackey, Rastko Ciric, Kosha Ruparel,
Tyler M. Moore, Ruben C. Gur, Raquel E. Gur, Theodore D. Satterthwaite,
Danielle S. Bassett | Influence of Neighborhood SES on Functional Brain Network Development | 9 pages, 6 figures. Cerebral Cortex (2019) | null | 10.1093/cercor/bhz066 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Higher socioeconomic status (SES) in childhood is associated with increased
cognitive abilities, higher academic achievement, and decreased incidence of
mental illness later in development. Accumulating evidence suggests that these
effects may be due to changes in brain development induced by environmental
factors. While prior work has mapped the associations between neighborhood SES
and brain structure, little is known about the relationship between SES and
intrinsic neural dynamics. Here, we capitalize upon a large community-based
sample (Philadelphia Neurodevelopmental Cohort, ages 8-22 years, n=1012) to
examine developmental changes in functional brain network topology as estimated
from resting state functional magnetic resonance imaging data. We
quantitatively characterize this topology using a local measure of network
segregation known as the clustering coefficient, and find that it accounts for
a greater degree of SES-associated variance than meso-scale segregation
captured by modularity. While whole-brain clustering increased with age,
high-SES youth displayed faster increases in clustering than low-SES youth, and
this effect was most pronounced for regions in the limbic, somatomotor, and
ventral attention systems. The effect of SES on developmental increases in
clustering was strongest for connections of intermediate physical length,
consistent with faster decreases in local connectivity in these regions in
low-SES youth, and tracked changes in BOLD signal complexity in the form of
regional homogeneity. Our findings suggest that neighborhood SES may
fundamentally alter intrinsic patterns of inter-regional interactions in the
human brain in a manner that is consistent with greater segregation of
information processing in late childhood and adolescence.
| [
{
"created": "Fri, 20 Jul 2018 01:32:22 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Jul 2018 01:25:11 GMT",
"version": "v2"
}
] | 2019-04-17 | [
[
"Tooley",
"Ursula A.",
""
],
[
"Mackey",
"Allyson P.",
""
],
[
"Ciric",
"Rastko",
""
],
[
"Ruparel",
"Kosha",
""
],
[
"Moore",
"Tyler M.",
""
],
[
"Gur",
"Ruben C.",
""
],
[
"Gur",
"Raquel E.",
""
],
[
"Satterthwaite",
"Theodore D.",
""
],
[
"Bassett",
"Danielle S.",
""
]
] | Higher socioeconomic status (SES) in childhood is associated with increased cognitive abilities, higher academic achievement, and decreased incidence of mental illness later in development. Accumulating evidence suggests that these effects may be due to changes in brain development induced by environmental factors. While prior work has mapped the associations between neighborhood SES and brain structure, little is known about the relationship between SES and intrinsic neural dynamics. Here, we capitalize upon a large community-based sample (Philadelphia Neurodevelopmental Cohort, ages 8-22 years, n=1012) to examine developmental changes in functional brain network topology as estimated from resting state functional magnetic resonance imaging data. We quantitatively characterize this topology using a local measure of network segregation known as the clustering coefficient, and find that it accounts for a greater degree of SES-associated variance than meso-scale segregation captured by modularity. While whole-brain clustering increased with age, high-SES youth displayed faster increases in clustering than low-SES youth, and this effect was most pronounced for regions in the limbic, somatomotor, and ventral attention systems. The effect of SES on developmental increases in clustering was strongest for connections of intermediate physical length, consistent with faster decreases in local connectivity in these regions in low-SES youth, and tracked changes in BOLD signal complexity in the form of regional homogeneity. Our findings suggest that neighborhood SES may fundamentally alter intrinsic patterns of inter-regional interactions in the human brain in a manner that is consistent with greater segregation of information processing in late childhood and adolescence. |
2009.00539 | M. Ali Al-Radhawi | Jared Miller, M. Ali Al-Radhawi, and Eduardo D. Sontag | Mediating Ribosomal Competition by Splitting Pools | null | LCSS Vol 5 Issue 5 (Nov 2020) | 10.1109/LCSYS.2020.3041213 | null | q-bio.MN math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synthetic biology constructs often rely upon the introduction of "circuit"
genes into host cells, in order to express novel proteins and thus endow the
host with a desired behavior. The expression of these new genes "consumes"
existing resources in the cell, such as ATP, RNA polymerase, amino acids, and
ribosomes. Ribosomal competition among strands of mRNA may be described by a
system of nonlinear ODEs called the Ribosomal Flow Model (RFM). The competition
for resources between host and circuit genes can be ameliorated by splitting
the ribosome pool by use of orthogonal ribosomes, where the circuit genes are
exclusively translated by mutated ribosomes. In this work, the RFM system is
extended to include orthogonal ribosome competition. This Orthogonal Ribosomal
Flow Model (ORFM) is proven to be stable through the use of Robust Lyapunov
Functions. The optimization problem of maximizing the weighted protein
translation rate by adjusting allocation of ribosomal species is formulated and
implemented.
| [
{
"created": "Tue, 1 Sep 2020 16:17:46 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Sep 2020 01:08:05 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Sep 2020 17:34:44 GMT",
"version": "v3"
}
] | 2022-01-10 | [
[
"Miller",
"Jared",
""
],
[
"Al-Radhawi",
"M. Ali",
""
],
[
"Sontag",
"Eduardo D.",
""
]
] | Synthetic biology constructs often rely upon the introduction of "circuit" genes into host cells, in order to express novel proteins and thus endow the host with a desired behavior. The expression of these new genes "consumes" existing resources in the cell, such as ATP, RNA polymerase, amino acids, and ribosomes. Ribosomal competition among strands of mRNA may be described by a system of nonlinear ODEs called the Ribosomal Flow Model (RFM). The competition for resources between host and circuit genes can be ameliorated by splitting the ribosome pool by use of orthogonal ribosomes, where the circuit genes are exclusively translated by mutated ribosomes. In this work, the RFM system is extended to include orthogonal ribosome competition. This Orthogonal Ribosomal Flow Model (ORFM) is proven to be stable through the use of Robust Lyapunov Functions. The optimization problem of maximizing the weighted protein translation rate by adjusting allocation of ribosomal species is formulated and implemented. |
2401.10295 | Piyumi Chathurangika | Piumi Chathurangika, Sanjeewa Perera, Kushani De Silva | Forecasting dengue outbreaks with uncertainty using seasonal weather
patterns | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dengue is a vector-borne disease transmitted to humans by vectors of genus
Aedes and is a global threat with health, social, and economic impact in many
of the tropical countries including Sri Lanka. The virus transmission is
significantly impacted by environmental conditions, with a notable contribution
from elevated per-capita vector density. These conditions are dynamic in nature and, owing to its tropical climate, Sri Lanka experiences seasonal weather patterns dominated by monsoons. In this work, we investigate the
dynamic influence of environmental conditions on dengue emergence in Colombo
district where dengue is extremely prevalent in Sri Lanka. A novel approach
leveraging Markov chain Monte Carlo simulations has been employed to identify seasonal patterns of dengue disease emergence, utilizing the dynamics of weather patterns prevailing in the region. The newly developed algorithm
allows us to estimate the timing of dengue outbreaks with uncertainty, enabling
accurate forecasts of upcoming disease emergence patterns for better
preparedness.
| [
{
"created": "Thu, 18 Jan 2024 04:00:20 GMT",
"version": "v1"
}
] | 2024-01-22 | [
[
"Chathurangika",
"Piumi",
""
],
[
"Perera",
"Sanjeewa",
""
],
[
"De Silva",
"Kushani",
""
]
] | Dengue is a vector-borne disease transmitted to humans by vectors of genus Aedes and is a global threat with health, social, and economic impact in many of the tropical countries including Sri Lanka. The virus transmission is significantly impacted by environmental conditions, with a notable contribution from elevated per-capita vector density. These conditions are dynamic in nature and, owing to its tropical climate, Sri Lanka experiences seasonal weather patterns dominated by monsoons. In this work, we investigate the dynamic influence of environmental conditions on dengue emergence in Colombo district where dengue is extremely prevalent in Sri Lanka. A novel approach leveraging Markov chain Monte Carlo simulations has been employed to identify seasonal patterns of dengue disease emergence, utilizing the dynamics of weather patterns prevailing in the region. The newly developed algorithm allows us to estimate the timing of dengue outbreaks with uncertainty, enabling accurate forecasts of upcoming disease emergence patterns for better preparedness.
1611.05443 | Francesco Alderisio | Chao Zhai, Michael Z. Q. Chen, Francesco Alderisio, Alexei Yu.
Uteshev, Mario di Bernardo | Bridging the Gap between Individuality and Joint Improvisation in the
Mirror Game | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extensive experiments in Human Movement Science suggest that solo motions are
characterized by unique features that define the individuality or motor
signature of people. While interacting with others, humans tend to
spontaneously coordinate their movement and unconsciously give rise to joint
improvisation. However, light has yet to be shed on the relationship between
individuality and joint improvisation. By means of an ad-hoc virtual agent, in
this work we uncover the internal mechanisms of the transition from solo to
joint improvised motion in the mirror game, a simple yet effective paradigm for
studying interpersonal human coordination. According to the analysis of
experimental data, normalized segments of velocity in solo motion are regarded
as individual motor signature, and the existence of velocity segments
possessing a prescribed signature is theoretically guaranteed. In this work, we
first develop a systematic approach based on velocity segments to generate
\emph{in-silico} trajectories of a given human participant playing solo. Then
we present an online algorithm for the virtual player to produce joint
improvised motion with another agent while exhibiting some desired kinematic
characteristics, and to account for movement coordination and mutual adaptation
during joint action tasks. Finally, we demonstrate that the proposed approach
succeeds in capturing the transition of kinematic features from solo to joint
improvised motion, thus revealing the existence of a tight relationship
between individuality and joint improvisation.
| [
{
"created": "Wed, 16 Nov 2016 14:27:12 GMT",
"version": "v1"
}
] | 2016-11-18 | [
[
"Zhai",
"Chao",
""
],
[
"Chen",
"Michael Z. Q.",
""
],
[
"Alderisio",
"Francesco",
""
],
[
"Uteshev",
"Alexei Yu.",
""
],
[
"di Bernardo",
"Mario",
""
]
] | Extensive experiments in Human Movement Science suggest that solo motions are characterized by unique features that define the individuality or motor signature of people. While interacting with others, humans tend to spontaneously coordinate their movement and unconsciously give rise to joint improvisation. However, light has yet to be shed on the relationship between individuality and joint improvisation. By means of an ad-hoc virtual agent, in this work we uncover the internal mechanisms of the transition from solo to joint improvised motion in the mirror game, a simple yet effective paradigm for studying interpersonal human coordination. According to the analysis of experimental data, normalized segments of velocity in solo motion are regarded as individual motor signature, and the existence of velocity segments possessing a prescribed signature is theoretically guaranteed. In this work, we first develop a systematic approach based on velocity segments to generate \emph{in-silico} trajectories of a given human participant playing solo. Then we present an online algorithm for the virtual player to produce joint improvised motion with another agent while exhibiting some desired kinematic characteristics, and to account for movement coordination and mutual adaptation during joint action tasks. Finally, we demonstrate that the proposed approach succeeds in capturing the transition of kinematic features from solo to joint improvised motion, thus revealing the existence of a tight relationship between individuality and joint improvisation. |
1806.00975 | Louis Kang | Louis Kang, Vijay Balasubramanian | A geometric attractor mechanism for self-organization of entorhinal grid
modules | Main text, Supplementary Information and Figures, Supplementary Video | null | null | null | q-bio.NC cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Grid cells in the medial entorhinal cortex (MEC) respond when an animal
occupies a periodic lattice of "grid fields" in the environment. The grids are
organized in modules with spatial periods, or scales, clustered around discrete
values separated by ratios in the range 1.2--2.0. We propose a mechanism that
produces this modular structure through dynamical self-organization in the MEC.
In attractor network models of grid formation, the grid scale of a single
module is set by the distance of recurrent inhibition between neurons. We show
that the MEC forms a hierarchy of discrete modules if a smooth increase in
inhibition distance along its dorso-ventral axis is accompanied by excitatory
interactions along this axis. Moreover, constant scale ratios between
successive modules arise through geometric relationships between triangular
grids and have values that fall within the observed range. We discuss how
interactions required by our model might be tested experimentally.
| [
{
"created": "Mon, 4 Jun 2018 06:38:58 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Mar 2019 18:08:42 GMT",
"version": "v2"
}
] | 2019-03-13 | [
[
"Kang",
"Louis",
""
],
[
"Balasubramanian",
"Vijay",
""
]
] | Grid cells in the medial entorhinal cortex (MEC) respond when an animal occupies a periodic lattice of "grid fields" in the environment. The grids are organized in modules with spatial periods, or scales, clustered around discrete values separated by ratios in the range 1.2--2.0. We propose a mechanism that produces this modular structure through dynamical self-organization in the MEC. In attractor network models of grid formation, the grid scale of a single module is set by the distance of recurrent inhibition between neurons. We show that the MEC forms a hierarchy of discrete modules if a smooth increase in inhibition distance along its dorso-ventral axis is accompanied by excitatory interactions along this axis. Moreover, constant scale ratios between successive modules arise through geometric relationships between triangular grids and have values that fall within the observed range. We discuss how interactions required by our model might be tested experimentally. |
2305.13127 | Junwei Kuang | Junwei Kuang, Jiaheng Xie and Zhijun Yan | What Symptoms and How Long? An Interpretable AI Approach for Depression
Detection in Social Media | 56 pages, 10 figures, 21 tables | null | null | null | q-bio.QM cs.AI cs.CY cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Depression is the most prevalent and serious mental illness, which induces
grave financial and societal ramifications. Depression detection is key for
early intervention to mitigate those consequences. Such a high-stakes decision
inherently necessitates interpretability. Although a few depression detection
studies attempt to explain the decision based on the importance score or
attention weights, these explanations misalign with the clinical depression
diagnosis criterion that is based on depressive symptoms. To fill this gap, we
follow the computational design science paradigm to develop a novel Multi-Scale
Temporal Prototype Network (MSTPNet). MSTPNet innovatively detects and
interprets depressive symptoms as well as how long they last. Extensive
empirical analyses using a large-scale dataset show that MSTPNet outperforms
state-of-the-art depression detection methods with an F1-score of 0.851. This
result also reveals new symptoms that are unnoted in the survey approach, such
as sharing admiration for a different life. We further conduct a user study to
demonstrate its superiority over the benchmarks in interpretability. This study
contributes to the IS literature with a novel interpretable deep learning model for
depression detection in social media. In practice, our proposed method can be
implemented in social media platforms to provide personalized online resources
for detected depressed patients.
| [
{
"created": "Thu, 18 May 2023 20:15:04 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Jul 2023 01:54:26 GMT",
"version": "v2"
}
] | 2023-07-26 | [
[
"Kuang",
"Junwei",
""
],
[
"Xie",
"Jiaheng",
""
],
[
"Yan",
"Zhijun",
""
]
] | Depression is the most prevalent and serious mental illness, which induces grave financial and societal ramifications. Depression detection is key for early intervention to mitigate those consequences. Such a high-stakes decision inherently necessitates interpretability. Although a few depression detection studies attempt to explain the decision based on the importance score or attention weights, these explanations misalign with the clinical depression diagnosis criterion that is based on depressive symptoms. To fill this gap, we follow the computational design science paradigm to develop a novel Multi-Scale Temporal Prototype Network (MSTPNet). MSTPNet innovatively detects and interprets depressive symptoms as well as how long they last. Extensive empirical analyses using a large-scale dataset show that MSTPNet outperforms state-of-the-art depression detection methods with an F1-score of 0.851. This result also reveals new symptoms that are unnoted in the survey approach, such as sharing admiration for a different life. We further conduct a user study to demonstrate its superiority over the benchmarks in interpretability. This study contributes to the IS literature with a novel interpretable deep learning model for depression detection in social media. In practice, our proposed method can be implemented in social media platforms to provide personalized online resources for detected depressed patients. |
1807.09617 | Soham Chatterjee Mr. | Soham Chatterjee, Archana Iyer, Satya Avva, Abhai Kollara, Malaikannan
Sankarasubbu | Convolutional Neural Networks In Classifying Cancer Through DNA
Methylation | null | null | null | null | q-bio.GN cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DNA Methylation has been the most extensively studied epigenetic mark.
Usually a change in the genotype (the DNA sequence) leads to a change in the
phenotype (the observable characteristics of the individual). But DNA methylation,
which happens in the context of CpG (cytosine and guanine bases linked by
phosphate backbone) dinucleotides, does not lead to a change in the original
DNA sequence but has the potential to change the phenotype. DNA methylation is
implicated in various biological processes and diseases including cancer. Hence
there is a strong interest in understanding the DNA methylation patterns across
various epigenetic related ailments in order to distinguish and diagnose the
type of disease in its early stages. In this work, the relationship between
methylated versus unmethylated CpG regions and cancer types is explored using
Convolutional Neural Networks (CNNs). A CNN based Deep Learning model that can
classify the cancer of a new DNA methylation profile based on the learning from
publicly available DNA methylation datasets is then proposed.
| [
{
"created": "Tue, 24 Jul 2018 17:11:58 GMT",
"version": "v1"
}
] | 2018-07-26 | [
[
"Chatterjee",
"Soham",
""
],
[
"Iyer",
"Archana",
""
],
[
"Avva",
"Satya",
""
],
[
"Kollara",
"Abhai",
""
],
[
"Sankarasubbu",
"Malaikannan",
""
]
] | DNA Methylation has been the most extensively studied epigenetic mark. Usually a change in the genotype (the DNA sequence) leads to a change in the phenotype (the observable characteristics of the individual). But DNA methylation, which happens in the context of CpG (cytosine and guanine bases linked by phosphate backbone) dinucleotides, does not lead to a change in the original DNA sequence but has the potential to change the phenotype. DNA methylation is implicated in various biological processes and diseases including cancer. Hence there is a strong interest in understanding the DNA methylation patterns across various epigenetic related ailments in order to distinguish and diagnose the type of disease in its early stages. In this work, the relationship between methylated versus unmethylated CpG regions and cancer types is explored using Convolutional Neural Networks (CNNs). A CNN based Deep Learning model that can classify the cancer of a new DNA methylation profile based on the learning from publicly available DNA methylation datasets is then proposed. |
1301.7277 | Samuel Ocko | Samuel A. Ocko, L. Mahadevan | Collective thermoregulation in bee clusters | 19 pages, 8 figures | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Swarming is an essential part of honeybee behaviour, wherein thousands of
bees cling onto each other to form a dense cluster that may be exposed to the
environment for several days. This cluster has the ability to maintain its core
temperature actively without a central controller, and raises the question of
how this is achieved. We suggest that the swarm cluster is akin to an active
porous structure whose functional requirement is to adjust to outside
conditions by varying its porosity to control its core temperature. Using a
continuum model that takes the form of a set of advection-diffusion equations
for heat transfer in a mobile porous medium, we show that the equalization of
an effective "behavioural pressure", which propagates information about the
ambient temperature through variations in density, leads to effective
thermoregulation. Our model extends and generalizes previous models by focusing
the question of mechanism on the form and role of the behavioural pressure, and
allows us to explain the vertical asymmetry of the cluster (as a consequence of
buoyancy driven flows), the ability of the cluster to overpack at low ambient
temperatures without breaking up at high ambient temperatures, and the relative
insensitivity to large variations in the ambient temperature. Finally, our
theory makes testable hypotheses for how the cluster bee density should respond
to externally imposed temperature inhomogeneities, and suggests strategies for
biomimetic thermoregulation.
| [
{
"created": "Wed, 30 Jan 2013 16:32:05 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Nov 2013 20:05:59 GMT",
"version": "v2"
}
] | 2013-11-07 | [
[
"Ocko",
"Samuel A.",
""
],
[
"Mahadevan",
"L.",
""
]
] | Swarming is an essential part of honeybee behaviour, wherein thousands of bees cling onto each other to form a dense cluster that may be exposed to the environment for several days. This cluster has the ability to maintain its core temperature actively without a central controller, and raises the question of how this is achieved. We suggest that the swarm cluster is akin to an active porous structure whose functional requirement is to adjust to outside conditions by varying its porosity to control its core temperature. Using a continuum model that takes the form of a set of advection-diffusion equations for heat transfer in a mobile porous medium, we show that the equalization of an effective "behavioural pressure", which propagates information about the ambient temperature through variations in density, leads to effective thermoregulation. Our model extends and generalizes previous models by focusing the question of mechanism on the form and role of the behavioural pressure, and allows us to explain the vertical asymmetry of the cluster (as a consequence of buoyancy driven flows), the ability of the cluster to overpack at low ambient temperatures without breaking up at high ambient temperatures, and the relative insensitivity to large variations in the ambient temperature. Finally, our theory makes testable hypotheses for how the cluster bee density should respond to externally imposed temperature inhomogeneities, and suggests strategies for biomimetic thermoregulation. |
1211.7330 | Alexander Stewart | Alexander J. Stewart and Joshua B. Plotkin | The evolution of complex gene regulation by low specificity binding
sites | null | Proc. R. Soc. B 7 October 2013 vol. 280 no. 1768 20131313 | 10.1098/rspb.2013.1313 | null | q-bio.PE q-bio.GN q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transcription factor binding sites vary in their specificity, both within and
between species. Binding specificity has a strong impact on the evolution of
gene expression, because it determines how easily regulatory interactions are
gained and lost. Nevertheless, we have a relatively poor understanding of what
evolutionary forces determine the specificity of binding sites. Here we address
this question by studying regulatory modules composed of multiple binding
sites. Using a population-genetic model, we show that more complex regulatory
modules, composed of a greater number of binding sites, must employ binding
sites that are individually less specific, compared to less complex regulatory
modules. This effect is extremely general, and it holds regardless of the
regulatory logic of a module. We attribute this phenomenon to the inability of
stabilising selection to maintain highly specific sites in large regulatory
modules. Our analysis helps to explain broad empirical trends in the yeast
regulatory network: those genes with a greater number of transcriptional
regulators feature less specific binding sites, and there is less variance
in their specificity, compared to genes with fewer regulators. Likewise, our
results also help to explain the well-known trend towards lower specificity in
the transcription factor binding sites of higher eukaryotes, which perform
complex regulatory tasks, compared to prokaryotes.
| [
{
"created": "Fri, 30 Nov 2012 18:21:59 GMT",
"version": "v1"
}
] | 2013-12-30 | [
[
"Stewart",
"Alexander J.",
""
],
[
"Plotkin",
"Joshua B.",
""
]
] | Transcription factor binding sites vary in their specificity, both within and between species. Binding specificity has a strong impact on the evolution of gene expression, because it determines how easily regulatory interactions are gained and lost. Nevertheless, we have a relatively poor understanding of what evolutionary forces determine the specificity of binding sites. Here we address this question by studying regulatory modules composed of multiple binding sites. Using a population-genetic model, we show that more complex regulatory modules, composed of a greater number of binding sites, must employ binding sites that are individually less specific, compared to less complex regulatory modules. This effect is extremely general, and it holds regardless of the regulatory logic of a module. We attribute this phenomenon to the inability of stabilising selection to maintain highly specific sites in large regulatory modules. Our analysis helps to explain broad empirical trends in the yeast regulatory network: those genes with a greater number of transcriptional regulators feature less specific binding sites, and there is less variance in their specificity, compared to genes with fewer regulators. Likewise, our results also help to explain the well-known trend towards lower specificity in the transcription factor binding sites of higher eukaryotes, which perform complex regulatory tasks, compared to prokaryotes. |
2001.07247 | Christopher Lester | Christopher Lester, Ruth E. Baker, Christian A. Yates | Efficiently simulating discrete-state models with binary decision trees | 26 pages | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic simulation algorithms (SSAs) are widely used to numerically
investigate the properties of stochastic, discrete-state models. The Gillespie
Direct Method is the pre-eminent SSA, and is widely used to generate sample
paths of so-called agent-based or individual-based models. However, the
simplicity of the Gillespie Direct Method often renders it impractical where
large-scale models are to be analysed in detail. In this work, we carefully
modify the Gillespie Direct Method so that it uses a customised binary decision
tree to trace out sample paths of the model of interest. We show that a
decision tree can be constructed to exploit the specific features of the chosen
model. Specifically, the events that underpin the model are placed in
carefully-chosen leaves of the decision tree in order to minimise the work
required to keep the tree up-to-date. The computational efficiencies that we
realise can provide the apparatus necessary for the investigation of
large-scale, discrete-state models that would otherwise be intractable. Two
case studies are presented to demonstrate the efficiency of the method.
| [
{
"created": "Mon, 20 Jan 2020 20:40:59 GMT",
"version": "v1"
}
] | 2020-01-22 | [
[
"Lester",
"Christopher",
""
],
[
"Baker",
"Ruth E.",
""
],
[
"Yates",
"Christian A.",
""
]
] | Stochastic simulation algorithms (SSAs) are widely used to numerically investigate the properties of stochastic, discrete-state models. The Gillespie Direct Method is the pre-eminent SSA, and is widely used to generate sample paths of so-called agent-based or individual-based models. However, the simplicity of the Gillespie Direct Method often renders it impractical where large-scale models are to be analysed in detail. In this work, we carefully modify the Gillespie Direct Method so that it uses a customised binary decision tree to trace out sample paths of the model of interest. We show that a decision tree can be constructed to exploit the specific features of the chosen model. Specifically, the events that underpin the model are placed in carefully-chosen leaves of the decision tree in order to minimise the work required to keep the tree up-to-date. The computational efficiencies that we realise can provide the apparatus necessary for the investigation of large-scale, discrete-state models that would otherwise be intractable. Two case studies are presented to demonstrate the efficiency of the method. |
1810.11769 | Yuri A. Dabaghian | Y. Dabaghian | Through synapses to spatial memory maps: a topological model | 18 pages, 9 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Various neurophysiological and cognitive functions are based on transferring
information between spiking neurons via a complex system of synaptic
connections. In particular, the capacity of presynaptic inputs to influence the
postsynaptic outputs---the efficacy of the synapses---plays a principal role in
all aspects of hippocampal neurophysiology. However, a direct link between the
information processed at the level of individual synapses and the animal's
ability to form memories at the organismal level has not yet been fully
understood. Here, we investigate the effect of synaptic transmission
probabilities on the ability of the hippocampal place cell ensembles to produce
a cognitive map of the environment. Using methods from algebraic topology, we
find that weakening synaptic connections increases spatial learning times,
produces topological defects in the large-scale representation of the ambient
space, and restricts the range of parameters for which place cell ensembles are
capable of producing a map with correct topological structure. On the other
hand, the results indicate a possibility of compensatory phenomena, namely that
spatial learning deficiencies may be mitigated through enhancement of neuronal
activity.
| [
{
"created": "Sun, 28 Oct 2018 07:05:16 GMT",
"version": "v1"
}
] | 2018-10-30 | [
[
"Dabaghian",
"Y.",
""
]
] | Various neurophysiological and cognitive functions are based on transferring information between spiking neurons via a complex system of synaptic connections. In particular, the capacity of presynaptic inputs to influence the postsynaptic outputs---the efficacy of the synapses---plays a principal role in all aspects of hippocampal neurophysiology. However, a direct link between the information processed at the level of individual synapses and the animal's ability to form memories at the organismal level has not yet been fully understood. Here, we investigate the effect of synaptic transmission probabilities on the ability of the hippocampal place cell ensembles to produce a cognitive map of the environment. Using methods from algebraic topology, we find that weakening synaptic connections increases spatial learning times, produces topological defects in the large-scale representation of the ambient space, and restricts the range of parameters for which place cell ensembles are capable of producing a map with correct topological structure. On the other hand, the results indicate a possibility of compensatory phenomena, namely that spatial learning deficiencies may be mitigated through enhancement of neuronal activity. |
1107.1998 | Miloje Rakocevic M. | Miloje M. Rakocevic | Genetic Code: Four Diversity Types of Protein Amino Acids | 63 pages, 3 figures, 11 tables, 7 appendices | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents, for the first time, four diversity types of protein
amino acids. The first type includes two amino acids (G, P), both without
standard hydrocarbon side chains; the second comprises four amino acids, as two
pairs [(A, L), (V, I)], all with standard hydrocarbon side chains; the third
type comprises six amino acids, as three pairs [(F, Y), (H, W), (C, M)] (two
aromatic, two heteroaromatic and two "hetero" non-aromatic); finally, the
fourth type consists of eight amino acids, as four pairs [(S, T), (D, E), (N,
Q), (K, R)], all with a functional group which also exists in the amino acid
functional group (wholly presented: H2N-\.CH-COOH; separately: OH, COOH, CONH2,
NH2). The insight into the existence of four diversity types became possible
only after an insight into the existence of some previously unknown
arithmetical regularities. Moreover, since showing these four types required
revealing the relationships between several key harmonic structures of the
genetic code (which we presented in our previous works), this paper is also a
review of the author's research on the genetic code. In doing so, the review
itself shows that the said harmonic structures are connected through the same
(or nearly the same) chemically determined amino acid pairs, 10 pairs out of
the 190 possible.
| [
{
"created": "Mon, 11 Jul 2011 11:16:59 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Jul 2011 08:18:30 GMT",
"version": "v2"
}
] | 2011-07-20 | [
[
"Rakocevic",
"Miloje M.",
""
]
] | This paper presents, for the first time, four diversity types of protein amino acids. The first type includes two amino acids (G, P), both without standard hydrocarbon side chains; the second comprises four amino acids, as two pairs [(A, L), (V, I)], all with standard hydrocarbon side chains; the third type comprises six amino acids, as three pairs [(F, Y), (H, W), (C, M)] (two aromatic, two heteroaromatic and two "hetero" non-aromatic); finally, the fourth type consists of eight amino acids, as four pairs [(S, T), (D, E), (N, Q), (K, R)], all with a functional group which also exists in the amino acid functional group (wholly presented: H2N-\.CH-COOH; separately: OH, COOH, CONH2, NH2). The insight into the existence of four diversity types became possible only after an insight into the existence of some previously unknown arithmetical regularities. Moreover, since showing these four types required revealing the relationships between several key harmonic structures of the genetic code (which we presented in our previous works), this paper is also a review of the author's research on the genetic code. In doing so, the review itself shows that the said harmonic structures are connected through the same (or nearly the same) chemically determined amino acid pairs, 10 pairs out of the 190 possible. |
2211.12949 | Xiaoyuan Liu | Xiaoyuan Liu, George W.A. Constable, Jonathan W. Pitchford | Feasibility and stability in large Lotka Volterra systems with
interaction structure | Manuscript is 8 pages long, containing 4 figures. Pages 9 to 25 are
the Supplemental Material | null | 10.1103/PhysRevE.107.054301 | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Complex system stability can be studied via linear stability analysis using
Random Matrix Theory (RMT) or via feasibility (requiring positive equilibrium
abundances). Both approaches highlight the importance of interaction structure.
Here we show, analytically and numerically, how RMT and feasibility approaches
can be complementary. In generalised Lotka-Volterra (GLV) models with random
interaction matrices, feasibility increases when predator-prey interactions
increase; increasing competition/mutualism has the opposite effect. These
changes have a crucial impact on the stability of the GLV model.
| [
{
"created": "Wed, 23 Nov 2022 13:51:52 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Apr 2023 08:35:42 GMT",
"version": "v2"
}
] | 2023-05-17 | [
[
"Liu",
"Xiaoyuan",
""
],
[
"Constable",
"George W. A.",
""
],
[
"Pitchford",
"Jonathan W.",
""
]
] | Complex system stability can be studied via linear stability analysis using Random Matrix Theory (RMT) or via feasibility (requiring positive equilibrium abundances). Both approaches highlight the importance of interaction structure. Here we show, analytically and numerically, how RMT and feasibility approaches can be complementary. In generalised Lotka-Volterra (GLV) models with random interaction matrices, feasibility increases when predator-prey interactions increase; increasing competition/mutualism has the opposite effect. These changes have a crucial impact on the stability of the GLV model. |
1403.5185 | Jan Karbowski | Franciszek Rakowski, Jagan Srinivasan, Paul W. Sternberg, Jan
Karbowski | Synaptic polarity of the interneuron circuit controlling C. elegans
locomotion | Connectivity patterns of neural network controlling C. elegans motion | Front. Comput. Neurosci. 7: 128 (2013) | 10.3389/fncom.2013.00128 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | C. elegans is the only animal for which a detailed neural connectivity
diagram has been constructed. However, synaptic polarities in this diagram, and
thus, circuit functions are largely unknown. Here, we deciphered the likely
polarities of 7 pre-motor neurons implicated in the control of the worm's
locomotion, using a combination of experimental and computational tools. We
performed single and multiple laser ablations in the locomotor interneuron
circuit and recorded the times the worms spent in forward and backward locomotion.
We constructed a theoretical model of the locomotor circuit and searched all
its possible synaptic polarity combinations and sensory input patterns in order
to find the best match to the timing data. The optimal solution is when either
all or most of the interneurons are inhibitory and forward interneurons receive
the strongest input, which suggests that inhibition governs the dynamics of the
locomotor interneuron circuit. Of the five pre-motor interneurons, only AVB
and AVD are equally likely to be excitatory, i.e. they probably have a similar
number of inhibitory and excitatory connections to distant targets. The method
used here has a general character and thus can also be applied to other neural
systems consisting of small functional networks.
| [
{
"created": "Thu, 20 Mar 2014 16:00:56 GMT",
"version": "v1"
}
] | 2014-03-21 | [
[
"Rakowski",
"Franciszek",
""
],
[
"Srinivasan",
"Jagan",
""
],
[
"Sternberg",
"Paul W.",
""
],
[
"Karbowski",
"Jan",
""
]
] | C. elegans is the only animal for which a detailed neural connectivity diagram has been constructed. However, synaptic polarities in this diagram, and thus, circuit functions are largely unknown. Here, we deciphered the likely polarities of 7 pre-motor neurons implicated in the control of the worm's locomotion, using a combination of experimental and computational tools. We performed single and multiple laser ablations in the locomotor interneuron circuit and recorded the times the worms spent in forward and backward locomotion. We constructed a theoretical model of the locomotor circuit and searched all its possible synaptic polarity combinations and sensory input patterns in order to find the best match to the timing data. The optimal solution is when either all or most of the interneurons are inhibitory and forward interneurons receive the strongest input, which suggests that inhibition governs the dynamics of the locomotor interneuron circuit. Of the five pre-motor interneurons, only AVB and AVD are equally likely to be excitatory, i.e. they probably have a similar number of inhibitory and excitatory connections to distant targets. The method used here has a general character and thus can also be applied to other neural systems consisting of small functional networks. |
0706.2007 | Ilya M. Nemenman | Ilya Nemenman, G. Sean Escola, William S. Hlavacek, Pat J. Unkefer,
Clifford J. Unkefer, Michael E. Wall | Reconstruction of metabolic networks from high-throughput metabolite
profiling data: in silico analysis of red blood cell metabolism | 14 pages, 3 figures. Presented at the DIMACS Workshop on Dialogue on
Reverse Engineering Assessment and Methods (DREAM), Sep 2006 | Ann. N.Y. Acad. Sci. 1115: 102-115 (2007) | 10.1196/annals.1407.013 | LANL LA-UR-07-3646 | q-bio.MN | null | We investigate the ability of algorithms developed for reverse engineering of
transcriptional regulatory networks to reconstruct metabolic networks from
high-throughput metabolite profiling data. For this, we generate synthetic
metabolic profiles for benchmarking purposes based on a well-established model
for red blood cell metabolism. A variety of data sets is generated, accounting
for different properties of real metabolic networks, such as experimental
noise, metabolite correlations, and temporal dynamics. These data sets are made
available online. We apply ARACNE, a mainstream transcriptional networks
reverse engineering algorithm, to these data sets and observe performance
comparable to that obtained in the transcriptional domain, for which the
algorithm was originally designed.
| [
{
"created": "Wed, 13 Jun 2007 22:41:36 GMT",
"version": "v1"
}
] | 2007-11-19 | [
[
"Nemenman",
"Ilya",
""
],
[
"Escola",
"G. Sean",
""
],
[
"Hlavacek",
"William S.",
""
],
[
"Unkefer",
"Pat J.",
""
],
[
"Unkefer",
"Clifford J.",
""
],
[
"Wall",
"Michael E.",
""
]
] | We investigate the ability of algorithms developed for reverse engineering of transcriptional regulatory networks to reconstruct metabolic networks from high-throughput metabolite profiling data. For this, we generate synthetic metabolic profiles for benchmarking purposes based on a well-established model for red blood cell metabolism. A variety of data sets is generated, accounting for different properties of real metabolic networks, such as experimental noise, metabolite correlations, and temporal dynamics. These data sets are made available online. We apply ARACNE, a mainstream transcriptional networks reverse engineering algorithm, to these data sets and observe performance comparable to that obtained in the transcriptional domain, for which the algorithm was originally designed. |
1312.4576 | Vipul Periwal | Zeina Shreif, Deborah A. Striegel, and Vipul Periwal | The jigsaw puzzle of sequence phenotype inference: Piecing together
Shannon entropy, importance sampling, and Empirical Bayes | 61 pages | null | 10.1016/j.jtbi.2015.06.010 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A nucleotide sequence 35 base pairs long can take
1,180,591,620,717,411,303,424 possible values. An example of systems biology
datasets, protein binding microarrays, contains activity data from about 40000
such sequences. The discrepancy between the number of possible configurations
and the available activities is enormous. Thus, although systems biology
datasets are large in absolute terms, they oftentimes require methods developed
for rare events due to the combinatorial increase in the number of possible
configurations of biological systems. A plethora of techniques for handling
large datasets, such as Empirical Bayes, or rare events, such as importance
sampling, have been developed in the literature, but these cannot always be
simultaneously utilized. Here we introduce a principled approach to Empirical
Bayes based on importance sampling, information theory, and theoretical physics
in the general context of sequence phenotype model induction. We present the
analytical calculations that underlie our approach. We demonstrate the
computational efficiency of the approach on concrete examples, and demonstrate
its efficacy by applying the theory to publicly available protein binding
microarray transcription factor datasets and to data on synthetic
cAMP-regulated enhancer sequences. As further demonstrations, we find
transcription factor binding motifs, predict the activity of new sequences and
extract the locations of transcription factor binding sites. In summary, we
present a novel method that is efficient (requiring minimal computational time
and reasonable amounts of memory), has high predictive power that is comparable
with that of models with hundreds of parameters, and has a limited number of
optimized parameters, proportional to the sequence length.
| [
{
"created": "Mon, 16 Dec 2013 21:55:10 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Jun 2015 12:40:48 GMT",
"version": "v2"
}
] | 2015-06-15 | [
[
"Shreif",
"Zeina",
""
],
[
"Striegel",
"Deborah A.",
""
],
[
"Periwal",
"Vipul",
""
]
] | A nucleotide sequence 35 base pairs long can take 1,180,591,620,717,411,303,424 possible values. An example of systems biology datasets, protein binding microarrays, contains activity data from about 40000 such sequences. The discrepancy between the number of possible configurations and the available activities is enormous. Thus, although systems biology datasets are large in absolute terms, they oftentimes require methods developed for rare events due to the combinatorial increase in the number of possible configurations of biological systems. A plethora of techniques for handling large datasets, such as Empirical Bayes, or rare events, such as importance sampling, have been developed in the literature, but these cannot always be simultaneously utilized. Here we introduce a principled approach to Empirical Bayes based on importance sampling, information theory, and theoretical physics in the general context of sequence phenotype model induction. We present the analytical calculations that underlie our approach. We demonstrate the computational efficiency of the approach on concrete examples, and demonstrate its efficacy by applying the theory to publicly available protein binding microarray transcription factor datasets and to data on synthetic cAMP-regulated enhancer sequences. As further demonstrations, we find transcription factor binding motifs, predict the activity of new sequences and extract the locations of transcription factor binding sites. In summary, we present a novel method that is efficient (requiring minimal computational time and reasonable amounts of memory), has high predictive power that is comparable with that of models with hundreds of parameters, and has a limited number of optimized parameters, proportional to the sequence length. |
1812.11758 | Timothy O'Leary | Dhruva V Raman, Timothy O'Leary | Fundamental bounds on learning performance in neural circuits | null | Proceedings of the National Academy of Sciences May 2019,
201813416 | 10.1073/pnas.1813416116 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How does the size of a neural circuit influence its learning performance?
Intuitively, we expect the learning capacity of a neural circuit to grow with
the number of neurons and synapses. Larger brains tend to be found in species
with higher cognitive function and learning ability. Similarly, adding
connections and units to artificial neural networks can allow them to solve
more complex tasks. However, we show that in a biologically relevant setting
where synapses introduce an unavoidable amount of noise, there is an optimal
size of network for a given task. Beneath this optimal size, our analysis shows
how adding apparently redundant neurons and connections can make tasks more
learnable. Therefore large neural circuits can either devote connectivity to
generating complex behaviors, or exploit this connectivity to achieve faster
and more precise learning of simpler behaviors. Above the optimal network size,
the addition of neurons and synaptic connections starts to impede learning
performance. This suggests that overall brain size may be constrained by the
need to learn efficiently with unreliable synapses, and may explain why some
neurological learning deficits are associated with hyperconnectivity. Our
analysis is independent of specific learning rules and uncovers fundamental
relationships between learning rate, task performance, network size and
intrinsic noise in neural circuits.
| [
{
"created": "Mon, 31 Dec 2018 10:59:17 GMT",
"version": "v1"
}
] | 2019-05-09 | [
[
"Raman",
"Dhruva V",
""
],
[
"O'Leary",
"Timothy",
""
]
] | How does the size of a neural circuit influence its learning performance? Intuitively, we expect the learning capacity of a neural circuit to grow with the number of neurons and synapses. Larger brains tend to be found in species with higher cognitive function and learning ability. Similarly, adding connections and units to artificial neural networks can allow them to solve more complex tasks. However, we show that in a biologically relevant setting where synapses introduce an unavoidable amount of noise, there is an optimal size of network for a given task. Beneath this optimal size, our analysis shows how adding apparently redundant neurons and connections can make tasks more learnable. Therefore large neural circuits can either devote connectivity to generating complex behaviors, or exploit this connectivity to achieve faster and more precise learning of simpler behaviors. Above the optimal network size, the addition of neurons and synaptic connections starts to impede learning performance. This suggests that overall brain size may be constrained by the need to learn efficiently with unreliable synapses, and may explain why some neurological learning deficits are associated with hyperconnectivity. Our analysis is independent of specific learning rules and uncovers fundamental relationships between learning rate, task performance, network size and intrinsic noise in neural circuits. |
2303.04285 | Sergey Shuvaev | Sergey Shuvaev, Evgeny Amelchenko, Dmitry Smagin, Natalia
Kudryavtseva, Grigori Enikolopov, Alexei Koulakov | A normative theory of social conflict | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Social hierarchy in animal groups carries a crucial adaptive function by
reducing conflict and injury while protecting valuable group resources. Social
hierarchy is dynamic and can be altered by social conflict, agonistic
interactions, and aggression. Understanding social conflict and aggressive
behavior is of profound importance to our society and welfare. In this study,
we developed a quantitative theory of social conflict. We modeled individual
agonistic interactions as a normal-form game between two agents. We assumed
that the agents use Bayesian inference to update their beliefs about their
strength or their opponent's strength and to derive optimal actions. We
compared the results of our model to behavioral and whole-brain neural activity
data obtained for a large (n=116) population of mice engaged in agonistic
interactions. We find that both types of data are consistent with the
first-level Theory of Mind model (1-ToM) in which mice form both "primary"
beliefs about their and their opponent's strengths as well as the "secondary"
beliefs about the beliefs of their opponents. Our model helps identify brain
regions that carry information about these levels of beliefs. Overall, we both
propose a model to describe agonistic interactions and support our quantitative
results with behavioral and neural activity data.
| [
{
"created": "Tue, 7 Mar 2023 23:16:44 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Apr 2023 17:23:21 GMT",
"version": "v2"
}
] | 2023-04-27 | [
[
"Shuvaev",
"Sergey",
""
],
[
"Amelchenko",
"Evgeny",
""
],
[
"Smagin",
"Dmitry",
""
],
[
"Kudryavtseva",
"Natalia",
""
],
[
"Enikolopov",
"Grigori",
""
],
[
"Koulakov",
"Alexei",
""
]
] | Social hierarchy in animal groups carries a crucial adaptive function by reducing conflict and injury while protecting valuable group resources. Social hierarchy is dynamic and can be altered by social conflict, agonistic interactions, and aggression. Understanding social conflict and aggressive behavior is of profound importance to our society and welfare. In this study, we developed a quantitative theory of social conflict. We modeled individual agonistic interactions as a normal-form game between two agents. We assumed that the agents use Bayesian inference to update their beliefs about their strength or their opponent's strength and to derive optimal actions. We compared the results of our model to behavioral and whole-brain neural activity data obtained for a large (n=116) population of mice engaged in agonistic interactions. We find that both types of data are consistent with the first-level Theory of Mind model (1-ToM) in which mice form both "primary" beliefs about their and their opponent's strengths as well as the "secondary" beliefs about the beliefs of their opponents. Our model helps identify brain regions that carry information about these levels of beliefs. Overall, we both propose a model to describe agonistic interactions and support our quantitative results with behavioral and neural activity data. |
2106.04123 | Giorgio Papitto | Giorgio Papitto, Luisa Lugli, Anna M. Borghi, Antonello Pellicano and
Ferdinand Binkofski | Embodied negation and levels of concreteness: A TMS Study on German and
Italian language processing | 30 pages, 3 figures, 1 table, research paper | null | 10.1016/j.brainres.2021.147523 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | According to the embodied cognition perspective, linguistic negation may
block the motor simulations induced by language processing. Transcranial
magnetic stimulation (TMS) was applied to the left primary motor cortex (hand
area) of monolingual Italian and German healthy participants during a rapid
serial visual presentation of sentences from their own language. In these
languages, the negative particle is located at the beginning and at the end of
the sentence, respectively. The study investigated whether the interruption of
the motor simulation processes, accounted for by reduced motor evoked
potentials (MEPs), takes place similarly in two languages differing on the
position of the negative marker. Different levels of sentence concreteness were
also manipulated to investigate if negation exerts generalized effects or if it
is affected by the semantic features of the sentence. Our findings indicate
that negation acts as a block on motor representations, but independently of
the language and of the words' concreteness level.
| [
{
"created": "Tue, 8 Jun 2021 06:19:47 GMT",
"version": "v1"
}
] | 2021-06-09 | [
[
"Papitto",
"Giorgio",
""
],
[
"Lugli",
"Luisa",
""
],
[
"Borghi",
"Anna M.",
""
],
[
"Pellicano",
"Antonello",
""
],
[
"Binkofski",
"Ferdinand",
""
]
] | According to the embodied cognition perspective, linguistic negation may block the motor simulations induced by language processing. Transcranial magnetic stimulation (TMS) was applied to the left primary motor cortex (hand area) of monolingual Italian and German healthy participants during a rapid serial visual presentation of sentences from their own language. In these languages, the negative particle is located at the beginning and at the end of the sentence, respectively. The study investigated whether the interruption of the motor simulation processes, accounted for by reduced motor evoked potentials (MEPs), takes place similarly in two languages differing on the position of the negative marker. Different levels of sentence concreteness were also manipulated to investigate if negation exerts generalized effects or if it is affected by the semantic features of the sentence. Our findings indicate that negation acts as a block on motor representations, but independently of the language and of the words' concreteness level. |
1503.04978 | Daniel Rabosky | Daniel L. Rabosky | No substitute for real data: phylogenies from birth-death polytomy
resolvers should not be used for many downstream comparative analyses | null | null | null | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The statistical estimation of phylogenies is always associated with
uncertainty, and accommodating this uncertainty is an important component of
modern phylogenetic comparative analysis. The birth-death polytomy resolver is
a method of accounting for phylogenetic uncertainty that places missing
(unsampled) taxa onto phylogenetic trees, using taxonomic information alone.
Recent studies of birds and mammals have used this approach to generate
pseudo-posterior distributions of phylogenetic trees that are complete at the
species level, even in the absence of genetic data for many species. Many
researchers have used these distributions of phylogenies for downstream
evolutionary analyses that involve inferences on phenotypic evolution,
geography, and community assembly. I demonstrate that the use of phylogenies
constructed in this fashion is inappropriate for many questions involving
traits. Because species are placed on trees at random with respect to trait
values, the birth-death polytomy resolver breaks down natural patterns of trait
phylogenetic structure. Inferences based on these trees are predictably and
often drastically biased in a direction that depends on the underlying (true)
pattern of phylogenetic structure in traits. I illustrate the severity of the
phenomenon for both continuous and discrete traits using examples from a global
bird phylogeny.
| [
{
"created": "Tue, 17 Mar 2015 10:23:01 GMT",
"version": "v1"
}
] | 2015-03-18 | [
[
"Rabosky",
"Daniel L.",
""
]
] | The statistical estimation of phylogenies is always associated with uncertainty, and accommodating this uncertainty is an important component of modern phylogenetic comparative analysis. The birth-death polytomy resolver is a method of accounting for phylogenetic uncertainty that places missing (unsampled) taxa onto phylogenetic trees, using taxonomic information alone. Recent studies of birds and mammals have used this approach to generate pseudo-posterior distributions of phylogenetic trees that are complete at the species level, even in the absence of genetic data for many species. Many researchers have used these distributions of phylogenies for downstream evolutionary analyses that involve inferences on phenotypic evolution, geography, and community assembly. I demonstrate that the use of phylogenies constructed in this fashion is inappropriate for many questions involving traits. Because species are placed on trees at random with respect to trait values, the birth-death polytomy resolver breaks down natural patterns of trait phylogenetic structure. Inferences based on these trees are predictably and often drastically biased in a direction that depends on the underlying (true) pattern of phylogenetic structure in traits. I illustrate the severity of the phenomenon for both continuous and discrete traits using examples from a global bird phylogeny. |
1508.05717 | Ovidiu Radulescu | Satya Swarup Samal, Dima Grigoriev, Holger Fr\"ohlich and Ovidiu
Radulescu | Analysis of Reaction Network Systems Using Tropical Geometry | Proceedings Computer Algebra in Scientific Computing CASC 2015 | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We discuss a novel analysis method for reaction network systems with
polynomial or rational rate functions. This method is based on computing
tropical equilibrations defined by the equality of at least two dominant
monomials of opposite signs in the differential equations of each dynamic
variable. In algebraic geometry, the tropical equilibration problem is
tantamount to finding tropical prevarieties, which are finite intersections of
tropical hypersurfaces. Tropical equilibrations with the same set of dominant
monomials define a branch or equivalence class. Minimal branches are
particularly interesting as they describe the simplest states of the reaction
network. We provide a method to compute the number of minimal branches and to
find representative tropical equilibrations for each branch.
| [
{
"created": "Mon, 24 Aug 2015 08:24:56 GMT",
"version": "v1"
}
] | 2015-08-25 | [
[
"Samal",
"Satya Swarup",
""
],
[
"Grigoriev",
"Dima",
""
],
[
"Fröhlich",
"Holger",
""
],
[
"Radulescu",
"Ovidiu",
""
]
] | We discuss a novel analysis method for reaction network systems with polynomial or rational rate functions. This method is based on computing tropical equilibrations defined by the equality of at least two dominant monomials of opposite signs in the differential equations of each dynamic variable. In algebraic geometry, the tropical equilibration problem is tantamount to finding tropical prevarieties, which are finite intersections of tropical hypersurfaces. Tropical equilibrations with the same set of dominant monomials define a branch or equivalence class. Minimal branches are particularly interesting as they describe the simplest states of the reaction network. We provide a method to compute the number of minimal branches and to find representative tropical equilibrations for each branch. |
1903.07885 | Jannes Jegminat | Jannes Jegminat, Maya Jastrzebowska, Matt Pachai, Michael Herzog,
Jean-Pascal Pfister | Bayesian regression explains how human participants handle parameter
uncertainty | null | null | 10.1371/journal.pcbi.1007886 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human brain copes with sensory uncertainty in accordance with Bayes'
rule. However, it is unknown how the brain makes predictions in the presence of
parameter uncertainty. Here, we tested whether and how humans take parameter
uncertainty into account in a regression task. Participants extrapolated a
parabola from a limited number of noisy points, shown on a computer screen. The
quadratic parameter was drawn from a prior distribution, unknown to the
observers. We tested whether human observers take full advantage of the given
information, including the likelihood function of the observed points and the
prior distribution of the quadratic parameter. We compared human performance
with Bayesian regression, which is the (Bayes) optimal solution to this
problem, and three sub-optimal models, namely maximum likelihood regression,
prior regression and maximum a posteriori regression, which are simpler to
compute. Our results clearly show that humans use Bayesian regression. We
further investigated several variants of Bayesian regression models depending
on how the generative noise is treated and found that participants act in line
with the more sophisticated version.
| [
{
"created": "Tue, 19 Mar 2019 09:01:46 GMT",
"version": "v1"
}
] | 2020-07-01 | [
[
"Jegminat",
"Jannes",
""
],
[
"Jastrzebowska",
"Maya",
""
],
[
"Pachai",
"Matt",
""
],
[
"Herzog",
"Michael",
""
],
[
"Pfister",
"Jean-Pascal",
""
]
] | The human brain copes with sensory uncertainty in accordance with Bayes' rule. However, it is unknown how the brain makes predictions in the presence of parameter uncertainty. Here, we tested whether and how humans take parameter uncertainty into account in a regression task. Participants extrapolated a parabola from a limited number of noisy points, shown on a computer screen. The quadratic parameter was drawn from a prior distribution, unknown to the observers. We tested whether human observers take full advantage of the given information, including the likelihood function of the observed points and the prior distribution of the quadratic parameter. We compared human performance with Bayesian regression, which is the (Bayes) optimal solution to this problem, and three sub-optimal models, namely maximum likelihood regression, prior regression and maximum a posteriori regression, which are simpler to compute. Our results clearly show that humans use Bayesian regression. We further investigated several variants of Bayesian regression models depending on how the generative noise is treated and found that participants act in line with the more sophisticated version. |
1701.04315 | Shlomi Reuveni | Tal Robin, Shlomi Reuveni and Michael Urbakh | Single-molecule theory of enzymatic inhibition predicts the emergence of
inhibitor-activator duality | null | Nature communications, 9(1), pp.1-9, 2018 | 10.1038/s41467-018-02995-6 | null | q-bio.QM cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The classical theory of enzymatic inhibition aims to quantitatively describe
the effect of certain molecules -- called inhibitors -- on the progression of
enzymatic reactions, but growing signs indicate that it must be revised to keep
pace with the single-molecule revolution that is sweeping through the sciences.
Here, we take the single enzyme perspective and rebuild the theory of enzymatic
inhibition from the bottom up. We find that accounting for multi-conformational
enzyme structure and intrinsic randomness cannot undermine the validity of
classical results in the case of competitive inhibition; but that it should
strongly change our view on the uncompetitive and mixed modes of inhibition.
There, stochastic fluctuations on the single-enzyme level could give rise to
inhibitor-activator duality -- a phenomenon in which, under some conditions,
the introduction of a molecule whose binding shuts down enzymatic catalysis
will counterintuitively work to facilitate product formation. We state -- in
terms of experimentally measurable quantities -- a mathematical condition for
the emergence of inhibitor-activator duality, and propose that it could explain
why certain molecules that act as inhibitors when substrate concentrations are
high elicit a non-monotonic dose response when substrate concentrations are
low. The fundamental and practical implications of our findings are thoroughly
discussed.
| [
{
"created": "Wed, 14 Dec 2016 14:51:42 GMT",
"version": "v1"
},
{
"created": "Sat, 21 Oct 2017 13:52:56 GMT",
"version": "v2"
}
] | 2020-10-27 | [
[
"Robin",
"Tal",
""
],
[
"Reuveni",
"Shlomi",
""
],
[
"Urbakh",
"Michael",
""
]
] | The classical theory of enzymatic inhibition aims to quantitatively describe the effect of certain molecules -- called inhibitors -- on the progression of enzymatic reactions, but growing signs indicate that it must be revised to keep pace with the single-molecule revolution that is sweeping through the sciences. Here, we take the single enzyme perspective and rebuild the theory of enzymatic inhibition from the bottom up. We find that accounting for multi-conformational enzyme structure and intrinsic randomness cannot undermine the validity of classical results in the case of competitive inhibition; but that it should strongly change our view on the uncompetitive and mixed modes of inhibition. There, stochastic fluctuations on the single-enzyme level could give rise to inhibitor-activator duality -- a phenomenon in which, under some conditions, the introduction of a molecule whose binding shuts down enzymatic catalysis will counterintuitively work to facilitate product formation. We state -- in terms of experimentally measurable quantities -- a mathematical condition for the emergence of inhibitor-activator duality, and propose that it could explain why certain molecules that act as inhibitors when substrate concentrations are high elicit a non-monotonic dose response when substrate concentrations are low. The fundamental and practical implications of our findings are thoroughly discussed. |
1703.10307 | Takashi Okada | Takashi Okada and Atsushi Mochizuki | Sensitivity and Network Topology in Chemical Reaction Systems | 14 pages, 13 figures | Phys. Rev. E 96, 022322 (2017) | 10.1103/PhysRevE.96.022322 | null | q-bio.MN physics.bio-ph q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In living cells, biochemical reactions are catalyzed by specific enzymes and
connect to one another by sharing substrates and products, forming complex
networks. In our previous studies, we established a framework determining the
responses to enzyme perturbations only from network topology, and then proved a
theorem, called the law of localization, explaining response patterns in terms
of network topology. In this paper, we generalize these results to reaction
networks with conserved concentrations, which allows us to study any reaction
system. We also propose novel network characteristics quantifying robustness.
We compare the E. coli metabolic network with randomly rewired networks, and find
that the robustness of the E. coli network is significantly higher than that of
the random networks.
| [
{
"created": "Thu, 30 Mar 2017 04:13:20 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Sep 2017 01:37:00 GMT",
"version": "v2"
}
] | 2017-09-06 | [
[
"Okada",
"Takashi",
""
],
[
"Mochizuki",
"Atsushi",
""
]
] | In living cells, biochemical reactions are catalyzed by specific enzymes and connect to one another by sharing substrates and products, forming complex networks. In our previous studies, we established a framework determining the responses to enzyme perturbations only from network topology, and then proved a theorem, called the law of localization, explaining response patterns in terms of network topology. In this paper, we generalize these results to reaction networks with conserved concentrations, which allows us to study any reaction system. We also propose novel network characteristics quantifying robustness. We compare the E. coli metabolic network with randomly rewired networks, and find that the robustness of the E. coli network is significantly higher than that of the random networks. |
2102.09409 | Alberto P\'erez-Cervera | Gregory Dumont, Alberto P\'erez-Cervera, Boris Gutkin | Adjoint Method for Macroscopic Phase-Resetting Curves of Generic Spiking
Neural Networks | null | null | 10.1371/journal.pcbi.1010363 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Brain rhythms emerge as a result of synchronization among interconnected
spiking neurons. Key properties of such rhythms can be gleaned from the
phase-resetting curve (PRC). Inferring the macroscopic PRC and developing a
systematic phase reduction theory for emerging rhythms remains an outstanding
theoretical challenge. Here we present a practical theoretical framework to
compute the PRC of generic spiking networks with emergent collective
oscillations. To do so, we adopt a refractory density approach where neurons
are described by the time since their last action potential. In the
thermodynamic limit, the network dynamics are captured by a continuity equation
known as the refractory density equation. We develop an appropriate adjoint
method for this equation which in turn gives a semi-analytical expression of
the infinitesimal PRC. We confirm the validity of our framework for specific
examples of neural networks. Our theoretical findings highlight the
relationship between key biological properties at the individual neuron scale
and the macroscopic oscillatory properties assessed by the PRC.
| [
{
"created": "Thu, 18 Feb 2021 14:53:42 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Feb 2021 15:18:46 GMT",
"version": "v2"
}
] | 2022-10-12 | [
[
"Dumont",
"Gregory",
""
],
[
"Pérez-Cervera",
"Alberto",
""
],
[
"Gutkin",
"Boris",
""
]
] | Brain rhythms emerge as a result of synchronization among interconnected spiking neurons. Key properties of such rhythms can be gleaned from the phase-resetting curve (PRC). Inferring the macroscopic PRC and developing a systematic phase reduction theory for emerging rhythms remains an outstanding theoretical challenge. Here we present a practical theoretical framework to compute the PRC of generic spiking networks with emergent collective oscillations. To do so, we adopt a refractory density approach where neurons are described by the time since their last action potential. In the thermodynamic limit, the network dynamics are captured by a continuity equation known as the refractory density equation. We develop an appropriate adjoint method for this equation which in turn gives a semi-analytical expression of the infinitesimal PRC. We confirm the validity of our framework for specific examples of neural networks. Our theoretical findings highlight the relationship between key biological properties at the individual neuron scale and the macroscopic oscillatory properties assessed by the PRC. |
1808.09052 | Marco Ramaioli | Marco Marconati, Felipe Lopez, Catherine Tuleu, Mine Orlu, Marco
Ramaioli | In vitro and sensory tests to design easy-to-swallow multi-particulate
formulations | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Flexible dosing and ease of swallowing are key factors when designing oral
drug delivery systems for paediatric and geriatric populations.
Multi-particulate oral dosage forms can offer significant benefits over
conventional capsules and tablets. This study proposes the use of an in vitro
model to quantitatively investigate the swallowing dynamics in the presence of
multi-particulates. In vitro results were compared against sensory tests that
considered the attributes of ease of swallowing and post-swallow residues.
Water and hydrocolloids were considered as suspending vehicles, while the
suspended phase consisted of cellulose pellets of two different average sizes.
Both in vivo and in vitro tests indicated an easier swallow for smaller
multi-particulates. Moreover, water-thin liquids appeared not optimal for
complete oral clearance of the solids. The sensory study did not highlight
significant differences between the levels of thickness of the hydrocolloids.
Conversely, more discriminant results were obtained from in vitro tests,
suggesting that a minimum critical viscosity is necessary to enable a smooth
swallow, but increasing the carrier concentration too much affects swallowing
negatively. These results highlight the important interplay of particle size
and suspending vehicle rheology and the meaningful contribution that in vitro
methods can provide to pre-screening multi-particulate oral drug delivery
systems before sensory evaluation.
| [
{
"created": "Mon, 27 Aug 2018 22:34:55 GMT",
"version": "v1"
}
] | 2018-08-29 | [
[
"Marconati",
"Marco",
""
],
[
"Lopez",
"Felipe",
""
],
[
"Tuleu",
"Catherine",
""
],
[
"Orlu",
"Mine",
""
],
[
"Ramaioli",
"Marco",
""
]
] ] | Flexible dosing and ease of swallowing are key factors when designing oral drug delivery systems for paediatric and geriatric populations. Multi-particulate oral dosage forms can offer significant benefits over conventional capsules and tablets. This study proposes the use of an in vitro model to quantitatively investigate the swallowing dynamics in the presence of multi-particulates. In vitro results were compared against sensory tests that considered the attributes of ease of swallowing and post-swallow residues. Water and hydrocolloids were considered as suspending vehicles, while the suspended phase consisted of cellulose pellets of two different average sizes. Both in vivo and in vitro tests indicated an easier swallow for smaller multi-particulates. Moreover, water-thin liquids appeared not optimal for complete oral clearance of the solids. The sensory study did not highlight significant differences between the levels of thickness of the hydrocolloids. Conversely, more discriminant results were obtained from in vitro tests, suggesting that a minimum critical viscosity is necessary to enable a smooth swallow, but increasing the carrier concentration too much affects swallowing negatively. These results highlight the important interplay of particle size and suspending vehicle rheology and the meaningful contribution that in vitro methods can provide to pre-screening multi-particulate oral drug delivery systems before sensory evaluation.
2111.04943 | Mi Jin Lee | Mi Jin Lee and Deok-Sun Lee | Heterogeneous popularity of metabolic reactions from evolution | Main: 5 pages, 4 figures, Supplemental Material: 4 pages, 6 figures | Physical Review Letters 132, 018401 (2024) | 10.1103/PhysRevLett.132.018401 | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The composition of cellular metabolism is different across species. Empirical
data reveal that bacterial species contain similar numbers of metabolic
reactions but that the cross-species popularity of reactions is so heterogenous
that some reactions are found in all the species while others are in just few
species, characterized by a power-law distribution with the exponent one.
Introducing an evolutionary model concretizing the stochastic recruitment of
chemical reactions into the metabolism of different species at different times
and their inheritance to descendants, we demonstrate that the exponential
growth of the number of species containing a reaction and the saturated
recruitment rate of brand-new reactions lead to the empirically identified
power-law popularity distribution. Furthermore, the structural characteristics
of metabolic networks and the species' phylogeny in our simulations agree well
with empirical observations.
| [
{
"created": "Tue, 9 Nov 2021 03:52:29 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Oct 2023 14:36:26 GMT",
"version": "v2"
},
{
"created": "Fri, 5 Jan 2024 05:10:56 GMT",
"version": "v3"
}
] | 2024-01-08 | [
[
"Lee",
"Mi Jin",
""
],
[
"Lee",
"Deok-Sun",
""
]
] ] | The composition of cellular metabolism is different across species. Empirical data reveal that bacterial species contain similar numbers of metabolic reactions but that the cross-species popularity of reactions is so heterogeneous that some reactions are found in all the species while others are in just a few species, characterized by a power-law distribution with the exponent one. Introducing an evolutionary model concretizing the stochastic recruitment of chemical reactions into the metabolism of different species at different times and their inheritance to descendants, we demonstrate that the exponential growth of the number of species containing a reaction and the saturated recruitment rate of brand-new reactions lead to the empirically identified power-law popularity distribution. Furthermore, the structural characteristics of metabolic networks and the species' phylogeny in our simulations agree well with empirical observations.
1805.07098 | Yutaka Hori | Yuta Sakurai, Yutaka Hori | Bounding Transient Moments of Stochastic Chemical Reactions | null | IEEE Control Systems Letters, vol. 3, No. 2, pp. 290-295, 2019 | 10.1109/LCSYS.2018.2869639 | null | q-bio.QM cs.SY q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The predictive ability of stochastic chemical reactions is currently limited
by the lack of closed form solutions to the governing chemical master equation.
To overcome this limitation, this paper proposes a computational method capable
of predicting mathematically rigorous upper and lower bounds of transient
moments for reactions governed by the law of mass action. We first derive an
equation that transient moments must satisfy based on the moment equation.
Although this equation is underdetermined, we introduce a set of semidefinite
constraints known as moment conditions to narrow the feasible set of the
variables in the equation. Using these conditions, we formulate a semidefinite
program that efficiently and rigorously computes the bounds of transient moment
dynamics. The proposed method is demonstrated with illustrative numerical
examples and is compared with related works to discuss advantages and
limitations.
| [
{
"created": "Fri, 18 May 2018 08:47:42 GMT",
"version": "v1"
},
{
"created": "Tue, 22 May 2018 15:46:25 GMT",
"version": "v2"
},
{
"created": "Mon, 6 Aug 2018 05:13:29 GMT",
"version": "v3"
},
{
"created": "Sun, 6 Jan 2019 01:47:13 GMT",
"version": "v4"
}
] | 2019-01-08 | [
[
"Sakurai",
"Yuta",
""
],
[
"Hori",
"Yutaka",
""
]
] ] | The predictive ability of stochastic chemical reactions is currently limited by the lack of closed form solutions to the governing chemical master equation. To overcome this limitation, this paper proposes a computational method capable of predicting mathematically rigorous upper and lower bounds of transient moments for reactions governed by the law of mass action. We first derive an equation that transient moments must satisfy based on the moment equation. Although this equation is underdetermined, we introduce a set of semidefinite constraints known as moment conditions to narrow the feasible set of the variables in the equation. Using these conditions, we formulate a semidefinite program that efficiently and rigorously computes the bounds of transient moment dynamics. The proposed method is demonstrated with illustrative numerical examples and is compared with related works to discuss advantages and limitations.
2006.07879 | Moeez Subhani | Moeez M. Subhani, Ashiq Anjum | Multiclass Disease Predictions Based on Integrated Clinical and Genomics
Datasets | null | In Poceedings of The Eleventh International Conference on
Bioinformatics, Biocomputational Systems and Biotechnologies. Athens. 2019.
IARA: Wilmington, pp. 20-27 | null | null | q-bio.GN cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clinical predictions using clinical data by computational methods are common
in bioinformatics. However, clinical predictions that also use information from
genomics datasets remain rare in research.
Precision medicine research requires information from all available datasets to
provide intelligent clinical solutions. In this paper, we have attempted to
create a prediction model which uses information from both clinical and
genomics datasets. We have demonstrated multiclass disease predictions based on
combined clinical and genomics datasets using machine learning methods. We have
created an integrated dataset, using a clinical (ClinVar) and a genomics (gene
expression) dataset, and trained it using an instance-based learner to predict
clinical diseases. We have used an innovative but simple way for multiclass
classification, where the number of output classes is as high as 75. We have
used Principal Component Analysis for feature selection. The classifier
predicted diseases with 73\% accuracy on the integrated dataset. The results
were consistent and competitive when compared with other classification models.
The results show that genomics information can be reliably included in datasets
for clinical predictions and it can prove to be valuable in clinical
diagnostics and precision medicine.
| [
{
"created": "Sun, 14 Jun 2020 12:23:49 GMT",
"version": "v1"
}
] | 2021-06-25 | [
[
"Subhani",
"Moeez M.",
""
],
[
"Anjum",
"Ashiq",
""
]
] ] | Clinical predictions using clinical data by computational methods are common in bioinformatics. However, clinical predictions that also use information from genomics datasets remain rare in research. Precision medicine research requires information from all available datasets to provide intelligent clinical solutions. In this paper, we have attempted to create a prediction model which uses information from both clinical and genomics datasets. We have demonstrated multiclass disease predictions based on combined clinical and genomics datasets using machine learning methods. We have created an integrated dataset, using a clinical (ClinVar) and a genomics (gene expression) dataset, and trained it using an instance-based learner to predict clinical diseases. We have used an innovative but simple way for multiclass classification, where the number of output classes is as high as 75. We have used Principal Component Analysis for feature selection. The classifier predicted diseases with 73\% accuracy on the integrated dataset. The results were consistent and competitive when compared with other classification models. The results show that genomics information can be reliably included in datasets for clinical predictions and it can prove to be valuable in clinical diagnostics and precision medicine.
2010.03048 | Arbel Harpak | Arbel Harpak and Molly Przeworski | The evolution of group differences in changing environments | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The selection pressures that have shaped the evolution of complex traits in
humans remain largely unknown, and in some contexts highly contentious, perhaps
above all where they concern mean trait differences among groups. To date, the
discussion has focused on whether such group differences have any genetic
basis, and if so, whether they are without fitness consequences and arose via
random genetic drift, or whether they were driven by selection for different
trait optima in different environments. Here, we highlight a plausible
alternative, that many complex traits evolve under stabilizing selection in the
face of shifting environmental effects. Under this scenario, there will be
rapid evolution at the loci that contribute to trait variation, even when the
trait optimum remains the same. These considerations underscore the strong
assumptions about environmental effects that are required in ascribing trait
differences among groups to genetic differences.
| [
{
"created": "Tue, 6 Oct 2020 21:40:21 GMT",
"version": "v1"
}
] | 2020-10-08 | [
[
"Harpak",
"Arbel",
""
],
[
"Przeworski",
"Molly",
""
]
] | The selection pressures that have shaped the evolution of complex traits in humans remain largely unknown, and in some contexts highly contentious, perhaps above all where they concern mean trait differences among groups. To date, the discussion has focused on whether such group differences have any genetic basis, and if so, whether they are without fitness consequences and arose via random genetic drift, or whether they were driven by selection for different trait optima in different environments. Here, we highlight a plausible alternative, that many complex traits evolve under stabilizing selection in the face of shifting environmental effects. Under this scenario, there will be rapid evolution at the loci that contribute to trait variation, even when the trait optimum remains the same. These considerations underscore the strong assumptions about environmental effects that are required in ascribing trait differences among groups to genetic differences. |
1102.4904 | Bhaskar DasGupta | Bhaskar DasGupta and Paola Vera-Licona and Eduardo Sontag | Reverse Engineering of Molecular Networks from a Common Combinatorial
Approach | 15 pages; in Algorithms in Computational Molecular Biology:
Techniques, Approaches and Applications, M. Elloumi and A. Zomaya (editors),
John Wiley & Sons, Inc., January 2011 | null | null | null | q-bio.MN cs.CE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The understanding of molecular cell biology requires insight into the
structure and dynamics of networks that are made up of thousands of interacting
molecules of DNA, RNA, proteins, metabolites, and other components. One of the
central goals of systems biology is the unraveling of the as yet poorly
characterized complex web of interactions among these components. This work is
made harder by the fact that new species and interactions are continuously
discovered in experimental work, necessitating the development of adaptive and
fast algorithms for network construction and updating. Thus, the
"reverse-engineering" of networks from data has emerged as one of the central
concerns of systems biology research.
A variety of reverse-engineering methods have been developed, based on tools
from statistics, machine learning, and other mathematical domains. In order to
effectively use these methods, it is essential to develop an understanding of
the fundamental characteristics of these algorithms. With that in mind, this
chapter is dedicated to the reverse-engineering of biological systems.
Specifically, we focus our attention on a particular class of methods for
reverse-engineering, namely those that rely algorithmically upon the so-called
"hitting-set" problem, which is a classical combinatorial and computer science
problem. Each of these methods utilizes a different algorithm in order to
obtain an exact or an approximate solution of the hitting set problem. We will
explore the ultimate impact that the alternative algorithms have on the
inference of published in silico biological networks.
| [
{
"created": "Thu, 24 Feb 2011 04:56:37 GMT",
"version": "v1"
}
] | 2011-02-25 | [
[
"DasGupta",
"Bhaskar",
""
],
[
"Vera-Licona",
"Paola",
""
],
[
"Sontag",
"Eduardo",
""
]
] ] | The understanding of molecular cell biology requires insight into the structure and dynamics of networks that are made up of thousands of interacting molecules of DNA, RNA, proteins, metabolites, and other components. One of the central goals of systems biology is the unraveling of the as yet poorly characterized complex web of interactions among these components. This work is made harder by the fact that new species and interactions are continuously discovered in experimental work, necessitating the development of adaptive and fast algorithms for network construction and updating. Thus, the "reverse-engineering" of networks from data has emerged as one of the central concerns of systems biology research. A variety of reverse-engineering methods have been developed, based on tools from statistics, machine learning, and other mathematical domains. In order to effectively use these methods, it is essential to develop an understanding of the fundamental characteristics of these algorithms. With that in mind, this chapter is dedicated to the reverse-engineering of biological systems. Specifically, we focus our attention on a particular class of methods for reverse-engineering, namely those that rely algorithmically upon the so-called "hitting-set" problem, which is a classical combinatorial and computer science problem. Each of these methods utilizes a different algorithm in order to obtain an exact or an approximate solution of the hitting set problem. We will explore the ultimate impact that the alternative algorithms have on the inference of published in silico biological networks.
2107.06281 | Islem Rekik | Islem Mhiri and Ahmed Nebli and Mohamed Ali Mahjoub and Islem Rekik | Non-isomorphic Inter-modality Graph Alignment and Synthesis for Holistic
Brain Mapping | null | null | null | null | q-bio.NC cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Brain graph synthesis marked a new era for predicting a target brain graph
from a source one without incurring the high acquisition cost and processing
time of neuroimaging data. However, existing multi-modal graph synthesis
frameworks have several limitations. First, they mainly focus on generating
graphs from the same domain (intra-modality), overlooking the rich multimodal
representations of brain connectivity (inter-modality). Second, they can only
handle isomorphic graph generation tasks, limiting their generalizability to
synthesizing target graphs with a different node size and topological structure
from those of the source one. More importantly, both target and source domains
might have different distributions, which causes a domain fracture between them
(i.e., distribution misalignment). To address such challenges, we propose an
inter-modality aligner of non-isomorphic graphs (IMANGraphNet) framework to
infer a target graph modality based on a given modality. Our three core
contributions lie in (i) predicting a target graph (e.g., functional) from a
source graph (e.g., morphological) based on a novel graph generative
adversarial network (gGAN); (ii) using non-isomorphic graphs for both source
and target domains with a different number of nodes, edges and structure; and
(iii) enforcing the predicted target distribution to match that of the ground
truth graphs using a graph autoencoder to relax the designed loss optimization.
To handle the unstable behavior of gGAN, we design a new Ground
Truth-Preserving (GT-P) loss function to guide the generator in learning the
topological structure of ground truth brain graphs. Our comprehensive
experiments on predicting functional from morphological graphs demonstrate the
outperformance of IMANGraphNet in comparison with its variants. This can be
further leveraged for integrative and holistic brain mapping in health and
disease.
| [
{
"created": "Wed, 30 Jun 2021 08:59:55 GMT",
"version": "v1"
}
] | 2021-07-15 | [
[
"Mhiri",
"Islem",
""
],
[
"Nebli",
"Ahmed",
""
],
[
"Mahjoub",
"Mohamed Ali",
""
],
[
"Rekik",
"Islem",
""
]
] ] | Brain graph synthesis marked a new era for predicting a target brain graph from a source one without incurring the high acquisition cost and processing time of neuroimaging data. However, existing multi-modal graph synthesis frameworks have several limitations. First, they mainly focus on generating graphs from the same domain (intra-modality), overlooking the rich multimodal representations of brain connectivity (inter-modality). Second, they can only handle isomorphic graph generation tasks, limiting their generalizability to synthesizing target graphs with a different node size and topological structure from those of the source one. More importantly, both target and source domains might have different distributions, which causes a domain fracture between them (i.e., distribution misalignment). To address such challenges, we propose an inter-modality aligner of non-isomorphic graphs (IMANGraphNet) framework to infer a target graph modality based on a given modality. Our three core contributions lie in (i) predicting a target graph (e.g., functional) from a source graph (e.g., morphological) based on a novel graph generative adversarial network (gGAN); (ii) using non-isomorphic graphs for both source and target domains with a different number of nodes, edges and structure; and (iii) enforcing the predicted target distribution to match that of the ground truth graphs using a graph autoencoder to relax the designed loss optimization. To handle the unstable behavior of gGAN, we design a new Ground Truth-Preserving (GT-P) loss function to guide the generator in learning the topological structure of ground truth brain graphs. Our comprehensive experiments on predicting functional from morphological graphs demonstrate the outperformance of IMANGraphNet in comparison with its variants. This can be further leveraged for integrative and holistic brain mapping in health and disease.
1902.06250 | Anna M. Hagenston Hertle | Anna M. Hagenston, Sara Ben Ayed, Hilmar Bading | Afferent Fiber Activity-Induced Cytoplasmic Calcium Signaling in
Parvalbumin-Positive Inhibitory Interneurons of the Spinal Cord Dorsal Horn | 15 pages including 1 figure | null | null | null | q-bio.NC q-bio.TO | http://creativecommons.org/publicdomain/zero/1.0/ | Neuronal calcium (Ca2+) signaling represents a molecular trigger for diverse
central nervous system adaptations and maladaptations. The altered function of
dorsal spinal inhibitory interneurons is strongly implicated in the mechanisms
underlying central sensitization in chronic pain. Surprisingly little is known,
however, about the characteristics and consequences of Ca2+ signaling in these
cells, including whether and how they are changed following a peripheral insult
or injury and how such alterations might influence maladaptive pain plasticity.
As a first step towards clarifying the precise role of Ca2+ signaling in dorsal
spinal inhibitory neurons for central sensitization, we established methods for
characterizing Ca2+ signals in genetically defined populations of these cells.
In particular, we employed recombinant adeno-associated viral vectors to
deliver subcellularly targeted, genetically encoded Ca2+ indicators into
parvalbumin-positive spinal inhibitory neurons. Using wide-field microscopy, we
observed both spontaneous and afferent fiber activity-triggered Ca2+ signals in
these cells. We propose that these methods may be adapted in future studies for
the precise characterization and manipulation of Ca2+ signaling in diverse
spinal inhibitory neuron subtypes, thereby enabling the clarification of its
role in the mechanisms underlying pain chronicity and opening the door for
possibly novel treatment directions.
| [
{
"created": "Sun, 17 Feb 2019 12:32:50 GMT",
"version": "v1"
}
] | 2019-02-19 | [
[
"Hagenston",
"Anna M.",
""
],
[
"Ayed",
"Sara Ben",
""
],
[
"Bading",
"Hilmar",
""
]
] ] | Neuronal calcium (Ca2+) signaling represents a molecular trigger for diverse central nervous system adaptations and maladaptations. The altered function of dorsal spinal inhibitory interneurons is strongly implicated in the mechanisms underlying central sensitization in chronic pain. Surprisingly little is known, however, about the characteristics and consequences of Ca2+ signaling in these cells, including whether and how they are changed following a peripheral insult or injury and how such alterations might influence maladaptive pain plasticity. As a first step towards clarifying the precise role of Ca2+ signaling in dorsal spinal inhibitory neurons for central sensitization, we established methods for characterizing Ca2+ signals in genetically defined populations of these cells. In particular, we employed recombinant adeno-associated viral vectors to deliver subcellularly targeted, genetically encoded Ca2+ indicators into parvalbumin-positive spinal inhibitory neurons. Using wide-field microscopy, we observed both spontaneous and afferent fiber activity-triggered Ca2+ signals in these cells. We propose that these methods may be adapted in future studies for the precise characterization and manipulation of Ca2+ signaling in diverse spinal inhibitory neuron subtypes, thereby enabling the clarification of its role in the mechanisms underlying pain chronicity and opening the door for possibly novel treatment directions.
2004.06029 | Harsh Vashistha | Harsh Vashistha, Maryam Kohram and Hanna Salman | Non-genetic inheritance restraint of cell-to-cell variation | null | null | 10.7554/eLife.64779 | null | q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | Heterogeneity in physical and functional characteristics of cells (e.g. size,
cycle time, growth rate, protein concentration) proliferates within an isogenic
population due to stochasticity in intracellular biochemical processes and in
the distribution of resources during divisions. Conversely, it is limited in
part by the inheritance of cellular components between consecutive generations.
Here we introduce a new experimental method for measuring proliferation of
heterogeneity in bacterial cell characteristics, based on measuring how two
sister cells become different from each other over time. Our measurements
provide the inheritance dynamics of different cellular properties, and the
"inertia" of cells to maintain these properties along time. We find that
inheritance dynamics are property-specific, and can exhibit long-term memory
(~10 generations) that works to restrain variation among cells. Our results can
reveal mechanisms of non-genetic inheritance in bacteria and help understand
how cells control their properties and heterogeneity within isogenic cell
populations.
| [
{
"created": "Mon, 13 Apr 2020 16:01:08 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Aug 2020 19:18:42 GMT",
"version": "v2"
},
{
"created": "Wed, 20 Jan 2021 15:21:04 GMT",
"version": "v3"
}
] | 2021-02-02 | [
[
"Vashistha",
"Harsh",
""
],
[
"Kohram",
"Maryam",
""
],
[
"Salman",
"Hanna",
""
]
] | Heterogeneity in physical and functional characteristics of cells (e.g. size, cycle time, growth rate, protein concentration) proliferates within an isogenic population due to stochasticity in intracellular biochemical processes and in the distribution of resources during divisions. Conversely, it is limited in part by the inheritance of cellular components between consecutive generations. Here we introduce a new experimental method for measuring proliferation of heterogeneity in bacterial cell characteristics, based on measuring how two sister cells become different from each other over time. Our measurements provide the inheritance dynamics of different cellular properties, and the "inertia" of cells to maintain these properties along time. We find that inheritance dynamics are property-specific, and can exhibit long-term memory (~10 generations) that works to restrain variation among cells. Our results can reveal mechanisms of non-genetic inheritance in bacteria and help understand how cells control their properties and heterogeneity within isogenic cell populations. |
1007.0942 | Pascal Ferraro | Julien Allali (LaBRI), C\'edric Chauve, Pascal Ferraro (LaBRI, PIMS),
Anne-Laure Gaillard (LaBRI) | Efficient chaining of seeds in ordered trees | null | 21st International Workshop on Combinatorial Algorithms, London :
United Kingdom (2010) | 10.1007/978-3-642-19222-7_27 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider here the problem of chaining seeds in ordered trees. Seeds are
mappings between two trees Q and T and a chain is a subset of non-overlapping
seeds that is consistent with respect to postfix order and ancestrality. This
problem is a natural extension of a similar problem for sequences, and has
applications in computational biology, such as mining a database of RNA
secondary structures. For the chaining problem with a set of m constant size
seeds, we describe an algorithm with complexity O(m^2 log(m)) in time and O(m^2)
in space.
| [
{
"created": "Tue, 6 Jul 2010 16:31:11 GMT",
"version": "v1"
}
] | 2015-05-19 | [
[
"Allali",
"Julien",
"",
"LaBRI"
],
[
"Chauve",
"Cédric",
"",
"LaBRI, PIMS"
],
[
"Ferraro",
"Pascal",
"",
"LaBRI, PIMS"
],
[
"Gaillard",
"Anne-Laure",
"",
"LaBRI"
]
] ] | We consider here the problem of chaining seeds in ordered trees. Seeds are mappings between two trees Q and T and a chain is a subset of non-overlapping seeds that is consistent with respect to postfix order and ancestrality. This problem is a natural extension of a similar problem for sequences, and has applications in computational biology, such as mining a database of RNA secondary structures. For the chaining problem with a set of m constant size seeds, we describe an algorithm with complexity O(m^2 log(m)) in time and O(m^2) in space.
q-bio/0511008 | Dieter W. Heermann | S. Ritter, J. Odenheimer, D. W. Heermann, F. Bantignies, C. Grimaud,
G. Cavalli | Modelling and simulation of polycomb-dependent chromosomal interactions
in drosophila | null | null | null | null | q-bio.SC | null | The conditions of the chromosomes inside the nucleus in the Rabl
configuration have been modelled as self-avoiding polymer chains under
restraining conditions. To ensure that the chromosomes remain stretched out and
lined up, we fixed their end points to two opposing walls. The numbers of
segments $N$, the distances $d_1$ and $d_2$ between the fixpoints, and the
wall-to-wall distance $z$ (as measured in segment lengths) determine an
approximate value for the Kuhn segment length $k_l$. We have simulated the
movement of the chromosomes using molecular dynamics to obtain the expected
distance distribution between the genetic loci in the absence of further
attractive or repulsive forces. A comparison to biological experiments on
\textit{Drosophila Melanogaster} yields information on the parameters for our
model. With the correct parameters it is possible to draw conclusions on the
strength and range of the attraction that leads to pairing.
| [
{
"created": "Tue, 8 Nov 2005 22:22:06 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Ritter",
"S.",
""
],
[
"Odenheimer",
"J.",
""
],
[
"Heermann",
"D. W.",
""
],
[
"Bantignies",
"F.",
""
],
[
"Grimaud",
"C.",
""
],
[
"Cavalli",
"G.",
""
]
] ] | The conditions of the chromosomes inside the nucleus in the Rabl configuration have been modelled as self-avoiding polymer chains under restraining conditions. To ensure that the chromosomes remain stretched out and lined up, we fixed their end points to two opposing walls. The numbers of segments $N$, the distances $d_1$ and $d_2$ between the fixpoints, and the wall-to-wall distance $z$ (as measured in segment lengths) determine an approximate value for the Kuhn segment length $k_l$. We have simulated the movement of the chromosomes using molecular dynamics to obtain the expected distance distribution between the genetic loci in the absence of further attractive or repulsive forces. A comparison to biological experiments on \textit{Drosophila melanogaster} yields information on the parameters for our model. With the correct parameters it is possible to draw conclusions on the strength and range of the attraction that leads to pairing.
0707.4431 | Mathilde Himgi | Emilie Carr\'e (IMNSSA), Emmanuel Cantais, Olivier Darbin, Jean-Pierre
Terrier, Michel Lonjon, Bruno Palmier, Jean-Jacques Risso | Technical aspects of an impact acceleration traumatic brain injury rat
model with potential suitability for both microdialysis and PtiO2 monitoring | null | Journal of Neuroscience Methods 140, 1-2 (2004) 23-8 | 10.1016/j.jneumeth.2004.04.037 | null | q-bio.NC | null | This report describes technical adaptations of a traumatic brain injury (TBI)
model, largely inspired by Marmarou, in order to monitor microdialysis data and
PtiO2 (brain tissue oxygen) before, during and after injury. We particularly
focus on our model requirements, which allow us to re-create some drastic
pathological characteristics experienced by severely head-injured patients:
impact on a closed skull, no ventilation immediately after impact, presence of
diffuse axonal injuries, and secondary brain insults of systemic origin. We
notably give priority to minimizing anaesthesia duration in order to avoid any
neuroprotection. Our new model will henceforth allow a better understanding of
the neurochemical and biochemical alterations resulting from traumatic brain
injury, using the microdialysis and PtiO2 techniques already monitored in our
Intensive Care Unit. Studies on the efficacy and therapeutic window of
neuroprotective pharmacological molecules are now conceivable, to improve the
treatment of severe head injury.
| [
{
"created": "Mon, 30 Jul 2007 15:23:50 GMT",
"version": "v1"
}
] | 2007-07-31 | [
[
"Carré",
"Emilie",
"",
"IMNSSA"
],
[
"Cantais",
"Emmanuel",
""
],
[
"Darbin",
"Olivier",
""
],
[
"Terrier",
"Jean-Pierre",
""
],
[
"Lonjon",
"Michel",
""
],
[
"Palmier",
"Bruno",
""
],
[
"Risso",
"Jean-Jacques",
""
]
] | This report describes technical adaptations of a traumatic brain injury (TBI) model, largely inspired by Marmarou, in order to monitor microdialysis data and PtiO2 (brain tissue oxygen) before, during and after injury. We particularly focus on our model requirements, which allow us to re-create some drastic pathological characteristics experienced by severely head-injured patients: impact on a closed skull, no ventilation immediately after impact, presence of diffuse axonal injuries, and secondary brain insults of systemic origin. We notably give priority to minimizing anaesthesia duration in order to avoid any neuroprotection. Our new model will henceforth allow a better understanding of the neurochemical and biochemical alterations resulting from traumatic brain injury, using the microdialysis and PtiO2 techniques already monitored in our Intensive Care Unit. Studies on the efficacy and therapeutic window of neuroprotective pharmacological molecules are now conceivable, to improve the treatment of severe head injury.
1807.02155 | Konstantinos Michmizos | Guangzhi Tang, Konstantinos P. Michmizos | Gridbot: An autonomous robot controlled by a Spiking Neural Network
mimicking the brain's navigational system | 8 pages, 3 Figures, International Conference on Neuromorphic Systems
(ICONS 2018) | null | 10.1145/3229884.3229888 | null | q-bio.NC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is true that the "best" neural network is not necessarily the one with the
most "brain-like" behavior. Understanding biological intelligence, however, is
a fundamental goal for several distinct disciplines. Translating our
understanding of intelligence to machines is a fundamental problem in robotics.
Propelled by new advancements in Neuroscience, we developed a spiking neural
network (SNN) that draws from mounting experimental evidence that a number of
individual neurons is associated with spatial navigation. By following the
brain's structure, our model assumes no initial all-to-all connectivity, which
could inhibit its translation to neuromorphic hardware, and learns an
uncharted territory by mapping its identified components into a limited number
of neural representations, through spike-timing dependent plasticity (STDP). In
our ongoing effort to employ a bioinspired SNN-controlled robot to real-world
spatial mapping applications, we demonstrate here how an SNN may robustly
control an autonomous robot in mapping and exploring an unknown environment,
while compensating for its own intrinsic hardware imperfections, such as
partial or total loss of visual input.
| [
{
"created": "Thu, 5 Jul 2018 19:09:45 GMT",
"version": "v1"
}
] | 2018-07-09 | [
[
"Tang",
"Guangzhi",
""
],
[
"Michmizos",
"Konstantinos P.",
""
]
] | It is true that the "best" neural network is not necessarily the one with the most "brain-like" behavior. Understanding biological intelligence, however, is a fundamental goal for several distinct disciplines. Translating our understanding of intelligence to machines is a fundamental problem in robotics. Propelled by new advancements in Neuroscience, we developed a spiking neural network (SNN) that draws from mounting experimental evidence that a number of individual neurons is associated with spatial navigation. By following the brain's structure, our model assumes no initial all-to-all connectivity, which could inhibit its translation to a neuromorphic hardware, and learns an uncharted territory by mapping its identified components into a limited number of neural representations, through spike-timing dependent plasticity (STDP). In our ongoing effort to employ a bioinspired SNN-controlled robot to real-world spatial mapping applications, we demonstrate here how an SNN may robustly control an autonomous robot in mapping and exploring an unknown environment, while compensating for its own intrinsic hardware imperfections, such as partial or total loss of visual input. |
2403.19011 | Alan Kaplan | Alan D. Kaplan, Priyadip Ray, John D. Greene, Vincent X. Liu | Sequential Inference of Hospitalization Electronic Health Records Using
Probabilistic Models | null | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | In the dynamic hospital setting, decision support can be a valuable tool for
improving patient outcomes. Data-driven inference of future outcomes is
challenging in this dynamic setting, where long sequences such as laboratory
tests and medications are updated frequently. This is due in part to
heterogeneity of data types and mixed-sequence types contained in variable
length sequences. In this work we design a probabilistic unsupervised model for
multiple arbitrary-length sequences contained in hospitalization Electronic
Health Record (EHR) data. The model uses a latent variable structure and
captures complex relationships between medications, diagnoses, laboratory
tests, and neurological assessments. It can be trained on original
data, without requiring any lossy transformations or time binning. Inference
algorithms are derived that use partial data to infer properties of the
complete sequences, including their length and presence of specific values. We
train this model on data from subjects receiving medical care in the Kaiser
Permanente Northern California integrated healthcare delivery system. The
results are evaluated against held-out data for predicting the length of
sequences and presence of Intensive Care Unit (ICU) in hospitalization bed
sequences. Our method outperforms a baseline approach, showing that in these
experiments the trained model captures information in the sequences that is
informative of their future values.
| [
{
"created": "Wed, 27 Mar 2024 21:06:26 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Apr 2024 15:06:26 GMT",
"version": "v2"
}
] | 2024-04-25 | [
[
"Kaplan",
"Alan D.",
""
],
[
"Ray",
"Priyadip",
""
],
[
"Greene",
"John D.",
""
],
[
"Liu",
"Vincent X.",
""
]
] | In the dynamic hospital setting, decision support can be a valuable tool for improving patient outcomes. Data-driven inference of future outcomes is challenging in this dynamic setting, where long sequences such as laboratory tests and medications are updated frequently. This is due in part to heterogeneity of data types and mixed-sequence types contained in variable length sequences. In this work we design a probabilistic unsupervised model for multiple arbitrary-length sequences contained in hospitalization Electronic Health Record (EHR) data. The model uses a latent variable structure and captures complex relationships between medications, diagnoses, laboratory tests, neurological assessments, and medications. It can be trained on original data, without requiring any lossy transformations or time binning. Inference algorithms are derived that use partial data to infer properties of the complete sequences, including their length and presence of specific values. We train this model on data from subjects receiving medical care in the Kaiser Permanente Northern California integrated healthcare delivery system. The results are evaluated against held-out data for predicting the length of sequences and presence of Intensive Care Unit (ICU) in hospitalization bed sequences. Our method outperforms a baseline approach, showing that in these experiments the trained model captures information in the sequences that is informative of their future values. |
1805.06067 | Arlin Stoltzfus | Arlin Stoltzfus | Understanding bias in the introduction of variation as an evolutionary
cause | Contribution to the 2017 EES workshop on Cause and Process in
Evolution, 11 to 14 May 2017 at the Konrad Lorenz Institute, Klosterneuberg,
Austria
(http://extendedevolutionarysynthesis.com/cause-and-process-in-evolution/).
28 pages, 7 figures | null | null | null | q-bio.PE | http://creativecommons.org/publicdomain/zero/1.0/ | Our understanding of evolution is shaped strongly by how we conceive of its
fundamental causes. In the original Modern Synthesis, evolution was defined as
a process of shifting the frequencies of available alleles at many loci
affecting a trait under selection. Events of mutation that introduce novelty
were not considered evolutionary causes, but proximate causes acting at the
wrong level. Today it is clear that long-term evolutionary dynamics depend on
the dynamics of mutational introduction. Yet, the implications of this
dependency remain unfamiliar, and have not yet penetrated into high-level
debates over evolutionary theory. Modeling the influence of biases in the
introduction process reveals behavior previously unimagined, as well as
behavior previously considered impossible. Quantitative biases in the
introduction of variation can impose biases on the outcome of evolution without
requiring high mutation rates or neutral evolution. Mutation-biased adaptation,
a possibility not previously imagined, has been observed among diverse taxa.
Directional trends are possible under a sustained bias. Biases that are
developmental in origin may have an effect analogous to mutational biases.
Structuralist arguments invoking the relative accessibility of forms in
state-space can be understood as references to the role of biases in the
introduction of variation. That is, the characteristic concerns of molecular
evolution, evo-devo and structuralism can be interpreted to implicate a kind of
causation absent from the original Modern Synthesis.
| [
{
"created": "Tue, 15 May 2018 23:27:23 GMT",
"version": "v1"
}
] | 2018-05-17 | [
[
"Stoltzfus",
"Arlin",
""
]
] | Our understanding of evolution is shaped strongly by how we conceive of its fundamental causes. In the original Modern Synthesis, evolution was defined as a process of shifting the frequencies of available alleles at many loci affecting a trait under selection. Events of mutation that introduce novelty were not considered evolutionary causes, but proximate causes acting at the wrong level. Today it is clear that long-term evolutionary dynamics depend on the dynamics of mutational introduction. Yet, the implications of this dependency remain unfamiliar, and have not yet penetrated into high-level debates over evolutionary theory. Modeling the influence of biases in the introduction process reveals behavior previously unimagined, as well as behavior previously considered impossible. Quantitative biases in the introduction of variation can impose biases on the outcome of evolution without requiring high mutation rates or neutral evolution. Mutation-biased adaptation, a possibility not previously imagined, has been observed among diverse taxa. Directional trends are possible under a sustained bias. Biases that are developmental in origin may have an effect analogous to mutational biases. Structuralist arguments invoking the relative accessibility of forms in state-space can be understood as references to the role of biases in the introduction of variation. That is, the characteristic concerns of molecular evolution, evo-devo and structuralism can be interpreted to implicate a kind of causation absent from the original Modern Synthesis. |
2302.13897 | Maurice HT Ling | Clarence FG Castillo, Zhu En Chay, Maurice HT Ling | Resistance Maintained in Digital Organisms despite
Guanine/Cytosine-Based Fitness Cost and Extended De-Selection: Implications
to Microbial Antibiotics Resistance | null | MOJ Proteomics & Bioinformatics 2(2): 00039 (2015) | null | null | q-bio.PE cs.NE | http://creativecommons.org/licenses/by-sa/4.0/ | Antibiotic resistance has caused many complications in the treatment of
diseases, where the pathogen is no longer susceptible to specific antibiotics
and such antibiotics are no longer effective for treatment. A recent study
that utilizes digital organisms suggests that complete elimination of specific
antibiotic resistance is unlikely after the disuse of antibiotics, assuming
that there are no fitness costs for maintaining resistance once resistance is
established. Fitness cost refers to a response to environmental change in
which an organism improves its abilities in one area at the expense of
another. Our goal in this study is to use digital organisms to examine the
rate of gain and loss of resistance when fitness costs are incurred in
maintaining resistance. Our results showed that a GC-content-based fitness
cost during de-selection, by removal of antibiotic-induced selective pressure,
produced trends in resistance similar to those with no fitness cost, at all
stages of initial selection, repeated de-selection and re-introduction of
selective pressure. A paired t-test suggested that the prolonged stabilization
of resistance after the initial loss does not differ significantly from that
with no fitness cost. This suggests that complete elimination of specific
antibiotic resistance is unlikely after the disuse of antibiotics, despite the
presence of a fitness cost for maintaining resistance during its disuse, once
a resistant pool of micro-organisms has been established.
| [
{
"created": "Sun, 19 Feb 2023 11:40:36 GMT",
"version": "v1"
}
] | 2023-02-28 | [
[
"Castillo",
"Clarence FG",
""
],
[
"Chay",
"Zhu En",
""
],
[
"Ling",
"Maurice HT",
""
]
] | Antibiotic resistance has caused many complications in the treatment of diseases, where the pathogen is no longer susceptible to specific antibiotics and such antibiotics are no longer effective for treatment. A recent study that utilizes digital organisms suggests that complete elimination of specific antibiotic resistance is unlikely after the disuse of antibiotics, assuming that there are no fitness costs for maintaining resistance once resistance is established. Fitness cost refers to a response to environmental change in which an organism improves its abilities in one area at the expense of another. Our goal in this study is to use digital organisms to examine the rate of gain and loss of resistance when fitness costs are incurred in maintaining resistance. Our results showed that a GC-content-based fitness cost during de-selection, by removal of antibiotic-induced selective pressure, produced trends in resistance similar to those with no fitness cost, at all stages of initial selection, repeated de-selection and re-introduction of selective pressure. A paired t-test suggested that the prolonged stabilization of resistance after the initial loss does not differ significantly from that with no fitness cost. This suggests that complete elimination of specific antibiotic resistance is unlikely after the disuse of antibiotics, despite the presence of a fitness cost for maintaining resistance during its disuse, once a resistant pool of micro-organisms has been established.
0906.2470 | Francesc Rossell\'o | Arnau Mir, Francesc Rossello | The mean value of the squared path-difference distance for rooted
phylogenetic trees | 16 pages | null | null | null | q-bio.PE cs.DM math.CA q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The path-difference metric is one of the oldest distances for the comparison
of fully resolved phylogenetic trees, but its statistical properties are still
quite unknown. In this paper we compute the mean value of the square of the
path-difference metric between two fully resolved rooted phylogenetic trees
with $n$ leaves, under the uniform distribution. This complements previous work
by Steel and Penny, who computed this mean value for fully resolved unrooted
phylogenetic trees.
| [
{
"created": "Sat, 13 Jun 2009 11:33:28 GMT",
"version": "v1"
}
] | 2009-06-16 | [
[
"Mir",
"Arnau",
""
],
[
"Rossello",
"Francesc",
""
]
] | The path-difference metric is one of the oldest distances for the comparison of fully resolved phylogenetic trees, but its statistical properties are still quite unknown. In this paper we compute the mean value of the square of the path-difference metric between two fully resolved rooted phylogenetic trees with $n$ leaves, under the uniform distribution. This complements previous work by Steel and Penny, who computed this mean value for fully resolved unrooted phylogenetic trees. |
2407.11453 | Coralie Fritsch | Athanase Benetos (DCAC), Coralie Fritsch (SIMBA, IECL), Emma Horton,
Lionel Lenotre (IRIMAS, ARCHIMEDE, PASTA), Simon Toupance (DCAC), Denis
Villemonais (SIMBA, IECL, IUF) | Stochastic branching models for the telomeres dynamics in a model
including telomerase activity | null | null | null | null | q-bio.CB math.PR q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Telomeres are repetitive sequences of nucleotides at the end of chromosomes,
whose evolution over time is intrinsically related to biological ageing. In
most cells, with each cell division, telomeres shorten due to the so-called end
replication problem, which can lead to replicative senescence and a variety of
age-related diseases. On the other hand, in certain cells, the presence of the
enzyme telomerase can lead to the lengthening of telomeres, which may delay or
prevent the onset of such diseases but can also increase the risk of cancer. In
this article, we propose a stochastic representation of this biological model,
which takes into account multiple chromosomes per cell, the effect of
telomerase, different cell types and the dependence of the distribution of
telomere length on the dynamics of the process. We study theoretical properties
of this model, including its long-term behaviour. In addition, we investigate
numerically the impact of the model parameters on biologically relevant
quantities, such as the Hayflick limit and the Malthusian parameter of the
population of cells.
| [
{
"created": "Tue, 16 Jul 2024 07:40:07 GMT",
"version": "v1"
}
] | 2024-07-17 | [
[
"Benetos",
"Athanase",
"",
"DCAC"
],
[
"Fritsch",
"Coralie",
"",
"SIMBA, IECL"
],
[
"Horton",
"Emma",
"",
"IRIMAS, ARCHIMEDE, PASTA"
],
[
"Lenotre",
"Lionel",
"",
"IRIMAS, ARCHIMEDE, PASTA"
],
[
"Toupance",
"Simon",
"",
"DCAC"
],
[
"Villemonais",
"Denis",
"",
"SIMBA, IECL, IUF"
]
] | Telomeres are repetitive sequences of nucleotides at the end of chromosomes, whose evolution over time is intrinsically related to biological ageing. In most cells, with each cell division, telomeres shorten due to the so-called end replication problem, which can lead to replicative senescence and a variety of age-related diseases. On the other hand, in certain cells, the presence of the enzyme telomerase can lead to the lengthening of telomeres, which may delay or prevent the onset of such diseases but can also increase the risk of cancer. In this article, we propose a stochastic representation of this biological model, which takes into account multiple chromosomes per cell, the effect of telomerase, different cell types and the dependence of the distribution of telomere length on the dynamics of the process. We study theoretical properties of this model, including its long-term behaviour. In addition, we investigate numerically the impact of the model parameters on biologically relevant quantities, such as the Hayflick limit and the Malthusian parameter of the population of cells.
2207.13918 | Thomas Slawig | Markus Pfeil, Thomas Slawig | Adaptive Time Step Algorithms for the Simulation of marine Ecosystem
Models using the Transport Matrix Method Implementation Metos3D | null | null | null | null | q-bio.PE physics.ao-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The reduction of the computational effort is desirable for the simulation of
marine ecosystem models. Using a marine ecosystem model, the assessment and the
validation of annual periodic solutions (i.e., steady annual cycles) against
observational data are crucial to identify biogeochemical processes, which, for
example, influence the global carbon cycle. For marine ecosystem models, the
transport matrix method (TMM) already lowers the runtime of the simulation
significantly and enables the application of larger time steps
straightforwardly. However, the selection of an appropriate time step is a
challenging compromise between accuracy and shortening the runtime. Using an
automatic time step adjustment during the computation of a steady annual cycle
with the TMM, we present in this paper different algorithms applying either an
adaptive step size control or decreasing time steps in order to use the time
step always as large as possible without any manual selection. For these
methods and a variety of marine ecosystem models of different complexity, the
computed steady annual cycle achieved the same accuracy as solutions obtained
with a fixed time step. Depending on the complexity of the
marine ecosystem model, the application of the methods shortened the runtime
significantly. Due to the certain overhead of the adaptive method, the
computational effort may be higher in special cases using the adaptive step
size control. The presented methods are computationally efficient methods
for the simulation of marine ecosystem models using the TMM without any
manual selection of the time step.
| [
{
"created": "Thu, 28 Jul 2022 07:18:43 GMT",
"version": "v1"
}
] | 2022-07-29 | [
[
"Pfeil",
"Markus",
""
],
[
"Slawig",
"Thomas",
""
]
] | The reduction of the computational effort is desirable for the simulation of marine ecosystem models. Using a marine ecosystem model, the assessment and the validation of annual periodic solutions (i.e., steady annual cycles) against observational data are crucial to identify biogeochemical processes, which, for example, influence the global carbon cycle. For marine ecosystem models, the transport matrix method (TMM) already lowers the runtime of the simulation significantly and enables the application of larger time steps straightforwardly. However, the selection of an appropriate time step is a challenging compromise between accuracy and shortening the runtime. Using an automatic time step adjustment during the computation of a steady annual cycle with the TMM, we present in this paper different algorithms applying either an adaptive step size control or decreasing time steps in order to use the time step always as large as possible without any manual selection. For these methods and a variety of marine ecosystem models of different complexity, the computed steady annual cycle achieved the same accuracy as solutions obtained with a fixed time step. Depending on the complexity of the marine ecosystem model, the application of the methods shortened the runtime significantly. Due to the certain overhead of the adaptive method, the computational effort may be higher in special cases using the adaptive step size control. The presented methods are computationally efficient methods for the simulation of marine ecosystem models using the TMM without any manual selection of the time step.
q-bio/0408012 | Eduardo D. Sontag | Eduardo D. Sontag and Madalena Chaves | Computation of amplification for systems arising from cellular signaling
pathways | See http://www.math.rutgers.edu/~sontag/ for related papers | null | null | null | q-bio.QM | null | A commonly employed measure of the signal amplification properties of an
input/output system is its induced L2 norm, sometimes also known as "H
infinity" gain. In general, however, it is extremely difficult to compute the
numerical value for this norm, or even to check that it is finite, unless the
system being studied is linear. This paper describes a class of systems for
which it is possible to reduce this computation to that of finding the norm of
an associated linear system. In contrast to linearization approaches, a precise
value, not an estimate, is obtained for the full nonlinear model. The class of
systems that we study arose from the modeling of certain biological
intracellular signaling cascades, but the results should be of wider
applicability.
| [
{
"created": "Mon, 16 Aug 2004 13:39:45 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Sontag",
"Eduardo D.",
""
],
[
"Chaves",
"Madalena",
""
]
] | A commonly employed measure of the signal amplification properties of an input/output system is its induced L2 norm, sometimes also known as "H infinity" gain. In general, however, it is extremely difficult to compute the numerical value for this norm, or even to check that it is finite, unless the system being studied is linear. This paper describes a class of systems for which it is possible to reduce this computation to that of finding the norm of an associated linear system. In contrast to linearization approaches, a precise value, not an estimate, is obtained for the full nonlinear model. The class of systems that we study arose from the modeling of certain biological intracellular signaling cascades, but the results should be of wider applicability. |
0705.4084 | Petter Holme | Petter Holme, Mikael Huss | Comment on "Regularizing capacity of metabolic networks" | null | Phys. Rev. E 77, 023901 (2008) | 10.1103/PhysRevE.77.023901 | null | q-bio.MN | null | In a recent paper, Marr, Muller-Linow and Hutt [Phys. Rev. E 75, 041917
(2007)] investigate an artificial dynamic system on metabolic networks. They
find a less complex time evolution of this dynamic system in real networks,
compared to networks of reference models. The authors argue that this suggests
that metabolic network structure is a major factor behind the stability of
biochemical steady states. We reanalyze the same kind of data using a dynamic
system modeling actual reaction kinetics. The conclusions about stability, from
our analysis, are inconsistent with those of Marr et al. We argue that this
issue calls for a more detailed type of modeling.
| [
{
"created": "Mon, 28 May 2007 18:50:56 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Feb 2008 13:49:46 GMT",
"version": "v2"
}
] | 2008-02-06 | [
[
"Holme",
"Petter",
""
],
[
"Huss",
"Mikael",
""
]
] | In a recent paper, Marr, Muller-Linow and Hutt [Phys. Rev. E 75, 041917 (2007)] investigate an artificial dynamic system on metabolic networks. They find a less complex time evolution of this dynamic system in real networks, compared to networks of reference models. The authors argue that this suggests that metabolic network structure is a major factor behind the stability of biochemical steady states. We reanalyze the same kind of data using a dynamic system modeling actual reaction kinetics. The conclusions about stability, from our analysis, are inconsistent with those of Marr et al. We argue that this issue calls for a more detailed type of modeling. |
1509.06075 | Claudia Solis-Lemus | Claudia Sol\'is-Lemus and C\'ecile An\'e | Inferring phylogenetic networks with maximum pseudolikelihood under
incomplete lineage sorting | null | null | null | null | q-bio.PE math.ST stat.AP stat.CO stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phylogenetic networks are necessary to represent the tree of life expanded by
edges to represent events such as horizontal gene transfers, hybridizations or
gene flow. Not all species follow the paradigm of vertical inheritance of their
genetic material. While a great deal of research has flourished into the
inference of phylogenetic trees, statistical methods to infer phylogenetic
networks are still limited and under development. The main disadvantage of
existing methods is a lack of scalability. Here, we present a statistical
method to infer phylogenetic networks from multi-locus genetic data in a
pseudolikelihood framework. Our model accounts for incomplete lineage sorting
through the coalescent model, and for horizontal inheritance of genes through
reticulation nodes in the network. Computation of the pseudolikelihood is fast
and simple, and it avoids the burdensome calculation of the full likelihood
which can be intractable with many species. Moreover, estimation at the
quartet-level has the added computational benefit that it is easily
parallelizable. Simulation studies comparing our method to a full likelihood
approach show that our pseudolikelihood approach is much faster without
compromising accuracy. We applied our method to reconstruct the evolutionary
relationships among swordtails and platyfishes ($Xiphophorus$: Poeciliidae),
which is characterized by widespread hybridizations.
| [
{
"created": "Sun, 20 Sep 2015 23:41:03 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Jan 2016 20:48:47 GMT",
"version": "v2"
},
{
"created": "Fri, 12 Feb 2016 19:23:35 GMT",
"version": "v3"
}
] | 2016-02-15 | [
[
"Solís-Lemus",
"Claudia",
""
],
[
"Ané",
"Cécile",
""
]
] | Phylogenetic networks are necessary to represent the tree of life expanded by edges to represent events such as horizontal gene transfers, hybridizations or gene flow. Not all species follow the paradigm of vertical inheritance of their genetic material. While a great deal of research has flourished into the inference of phylogenetic trees, statistical methods to infer phylogenetic networks are still limited and under development. The main disadvantage of existing methods is a lack of scalability. Here, we present a statistical method to infer phylogenetic networks from multi-locus genetic data in a pseudolikelihood framework. Our model accounts for incomplete lineage sorting through the coalescent model, and for horizontal inheritance of genes through reticulation nodes in the network. Computation of the pseudolikelihood is fast and simple, and it avoids the burdensome calculation of the full likelihood which can be intractable with many species. Moreover, estimation at the quartet-level has the added computational benefit that it is easily parallelizable. Simulation studies comparing our method to a full likelihood approach show that our pseudolikelihood approach is much faster without compromising accuracy. We applied our method to reconstruct the evolutionary relationships among swordtails and platyfishes ($Xiphophorus$: Poeciliidae), which is characterized by widespread hybridizations. |
1410.0844 | Peter Butler | Hari S. Muddana, Samudra Sengupta, Ayusman Sen, Peter J. Butler | Enhanced brightness and photostability of cyanine dyes by supramolecular
containment | 20 pages, 5 figures | null | null | null | q-bio.BM physics.chem-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ultrasensitive detection and real-time monitoring of biological processes can
benefit significantly from the improved brightness and photostability of the
popular organic dyes such as cyanines. Here, using a model cyanine dye, Cy3, we
demonstrate that brightness and photostability of the dye is significantly
altered when trapped in a molecular container, e.g. cucurbit[n]urils (CB[n])
and cyclodextrins (CD). Through computational modeling, we predicted that Cy3
forms a stable inclusion complex with three different hosts, CB[7], beta-CD,
and methyl-beta-CD, which was further confirmed by single-molecule diffusion
measurements using fluorescence correlation spectroscopy. The effect of
supramolecular encapsulation on Cy3 photophysical properties was found to be
highly host-specific. Up to three-fold increase in brightness of Cy3 was
observed when the dye was trapped in methyl-beta-CD, due to an increase in both
dye absorption and quantum yield. Steady-state and time-resolved spectroscopy
of the various complexes revealed that host polarizability and restricted
mobility of the dye in the host both contribute to the observed increase in
molecular brightness. Furthermore, entrapment of the dye molecule in CDs
resulted in a marked increase in dye photostability, whereas the dye degraded
more rapidly in CB[7]. These results suggest that the changes in photophysical
properties of the dye afforded by supramolecular encapsulation are highly
dependent on the host molecule. The reported improvement in brightness and
photostability together with the excellent biocompatibility of cyclodextrins
makes supramolecular encapsulation a viable strategy for routine dye
enhancement.
| [
{
"created": "Thu, 2 Oct 2014 18:56:29 GMT",
"version": "v1"
}
] | 2014-10-06 | [
[
"Muddana",
"Hari S.",
""
],
[
"Sengupta",
"Samudra",
""
],
[
"Sen",
"Ayusman",
""
],
[
"Butler",
"Peter J.",
""
]
] | Ultrasensitive detection and real-time monitoring of biological processes can benefit significantly from the improved brightness and photostability of the popular organic dyes such as cyanines. Here, using a model cyanine dye, Cy3, we demonstrate that brightness and photostability of the dye is significantly altered when trapped in a molecular container, e.g. cucurbit[n]urils (CB[n]) and cyclodextrins (CD). Through computational modeling, we predicted that Cy3 forms a stable inclusion complex with three different hosts, CB[7], beta-CD, and methyl-beta-CD, which was further confirmed by single-molecule diffusion measurements using fluorescence correlation spectroscopy. The effect of supramolecular encapsulation on Cy3 photophysical properties was found to be highly host-specific. Up to three-fold increase in brightness of Cy3 was observed when the dye was trapped in methyl-beta-CD, due to an increase in both dye absorption and quantum yield. Steady-state and time-resolved spectroscopy of the various complexes revealed that host polarizability and restricted mobility of the dye in the host both contribute to the observed increase in molecular brightness. Furthermore, entrapment of the dye molecule in CDs resulted in a marked increase in dye photostability, whereas the dye degraded more rapidly in CB[7]. These results suggest that the changes in photophysical properties of the dye afforded by supramolecular encapsulation are highly dependent on the host molecule. The reported improvement in brightness and photostability together with the excellent biocompatibility of cyclodextrins makes supramolecular encapsulation a viable strategy for routine dye enhancement.
1609.01569 | Liu Hong | Liu Hong, Chiu Fan Lee, Ya Jing Huang | Statistical Mechanics and Kinetics of Amyloid Fibrillation | 68 pages, 18 figures, 201 references | In Biophysics and Biochemistry of Protein Aggregation, edited by
J.-M. Yuan and H.-X. Zhou (World Scientific, 2017), Chapter 4, pp. 113-186 | 10.1142/9789813202382_0004 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Amyloid fibrillation is a protein self-assembly phenomenon that is intimately
related to well-known human neurodegenerative diseases. During the past few
decades, striking advances have been achieved in our understanding of the
physical origin of this phenomenon and they constitute the contents of this
review. Starting from a minimal model of amyloid fibrils, we explore
systematically the equilibrium and kinetic aspects of amyloid fibrillation in
both dilute and semi-dilute limits. We then incorporate further molecular
mechanisms into the analyses. We also discuss the mathematical foundation of
kinetic modeling based on chemical mass-action equations, the quantitative
linkage with experimental measurements, as well as the procedure to perform
global fitting.
| [
{
"created": "Tue, 6 Sep 2016 14:22:37 GMT",
"version": "v1"
}
] | 2017-09-06 | [
[
"Hong",
"Liu",
""
],
[
"Lee",
"Chiu Fan",
""
],
[
"Huang",
"Ya Jing",
""
]
] | Amyloid fibrillation is a protein self-assembly phenomenon that is intimately related to well-known human neurodegenerative diseases. During the past few decades, striking advances have been achieved in our understanding of the physical origin of this phenomenon and they constitute the contents of this review. Starting from a minimal model of amyloid fibrils, we explore systematically the equilibrium and kinetic aspects of amyloid fibrillation in both dilute and semi-dilute limits. We then incorporate further molecular mechanisms into the analyses. We also discuss the mathematical foundation of kinetic modeling based on chemical mass-action equations, the quantitative linkage with experimental measurements, as well as the procedure to perform global fitting. |
2405.08391 | Laurent Goffart | Laurent Goffart (CGGG) | Cerebralization of mathematical quantities and physical features in
neural science: a critical evaluation | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | At the turn of the 20th century, Henri Poincar{\'e} explained that geometry
is a convention and that the properties of space and time are the properties of
our measuring instruments. Intriguingly, numerous contemporary authors argue
that space, time and even number are ''encoded'' within the brain, as a
consequence of evolution, adaptation and natural selection. In the
neuroscientific study of movement generation, the activity of neurons would
''encode'' kinematic parameters: when they emit action potentials, neurons
would ''speak'' a language carrying notions of classical mechanics. In this
article, we shall explain that the movement of a body segment is the ultimate
product of a measurement, a filtered numerical outcome of multiple processes
taking place in parallel in the central nervous system and converging on the
groups of neurons responsible for muscle contractions. The fact that notions of
classical mechanics efficiently describe movements does not imply their
implementation in the inner workings of the brain. Their relevance to the
question how the brain activity enables one to produce accurate movements is
questioned within the framework of the neurophysiology of orienting gaze
movements toward a visual target.
| [
{
"created": "Tue, 14 May 2024 07:40:57 GMT",
"version": "v1"
},
{
"created": "Thu, 16 May 2024 09:08:23 GMT",
"version": "v2"
}
] | 2024-05-17 | [
[
"Goffart",
"Laurent",
"",
"CGGG"
]
] | At the turn of the 20th century, Henri Poincar{\'e} explained that geometry is a convention and that the properties of space and time are the properties of our measuring instruments. Intriguingly, numerous contemporary authors argue that space, time and even number are ''encoded'' within the brain, as a consequence of evolution, adaptation and natural selection. In the neuroscientific study of movement generation, the activity of neurons would ''encode'' kinematic parameters: when they emit action potentials, neurons would ''speak'' a language carrying notions of classical mechanics. In this article, we shall explain that the movement of a body segment is the ultimate product of a measurement, a filtered numerical outcome of multiple processes taking place in parallel in the central nervous system and converging on the groups of neurons responsible for muscle contractions. The fact that notions of classical mechanics efficiently describe movements does not imply their implementation in the inner workings of the brain. Their relevance to the question how the brain activity enables one to produce accurate movements is questioned within the framework of the neurophysiology of orienting gaze movements toward a visual target. |
0704.2554 | Yannick Brohard | Brigitte Meyer-Berthaud (AMAP), Anne-Laure Decombeix (AMAP) | A tree without leaves | null | Nature 446, 7138 (2006) 861-862 | 10.1038/446861a | A-07-09 | q-bio.PE | null | The puzzle presented by the famous stumps of Gilboa, New York, finds a
solution in the discovery of two fossil specimens that allow the entire
structure of these early trees to be reconstructed.
| [
{
"created": "Thu, 19 Apr 2007 15:11:51 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Meyer-Berthaud",
"Brigitte",
"",
"AMAP"
],
[
"Decombeix",
"Anne-Laure",
"",
"AMAP"
]
] | The puzzle presented by the famous stumps of Gilboa, New York, finds a solution in the discovery of two fossil specimens that allow the entire structure of these early trees to be reconstructed. |
2007.14807 | John Kolinski | John M. Kolinski, Tobias M. Schneider | Superspreading events suggest aerosol transmission of SARS-CoV-2 by
accumulation in enclosed spaces | 6 pages, 4 figures | Phys. Rev. E 103, 033109 (2021) | 10.1103/PhysRevE.103.033109 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Viral transmission pathways have profound implications for public safety; it
is thus imperative to establish a complete understanding of viable infectious
avenues. Mounting evidence suggests SARS-CoV-2 can be transmitted via the air;
however, this has not yet been demonstrated. Here we quantitatively analyze
virion accumulation by accounting for aerosolized virion emission and
destabilization. Reported superspreading events analyzed within this framework
point towards aerosol mediated transmission of SARS-CoV-2. Virion exposure
calculated for these events is found to trace out a single value, suggesting a
universal minimum infective dose (MID) via aerosol that is comparable to the
MIDs measured for other respiratory viruses; thus, the consistent infectious
exposure levels and their commensurability to known aerosol-MIDs establishes
the plausibility of aerosol transmission of SARS-CoV-2. Using filtration at a
rate exceeding the destabilization rate of aerosolized SARS-CoV-2 can reduce
exposure below this infective dose.
| [
{
"created": "Wed, 29 Jul 2020 12:47:05 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Aug 2020 13:11:14 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Dec 2020 11:16:02 GMT",
"version": "v3"
}
] | 2021-03-31 | [
[
"Kolinski",
"John M.",
""
],
[
"Schneider",
"Tobias M.",
""
]
] | Viral transmission pathways have profound implications for public safety; it is thus imperative to establish a complete understanding of viable infectious avenues. Mounting evidence suggests SARS-CoV-2 can be transmitted via the air; however, this has not yet been demonstrated. Here we quantitatively analyze virion accumulation by accounting for aerosolized virion emission and destabilization. Reported superspreading events analyzed within this framework point towards aerosol mediated transmission of SARS-CoV-2. Virion exposure calculated for these events is found to trace out a single value, suggesting a universal minimum infective dose (MID) via aerosol that is comparable to the MIDs measured for other respiratory viruses; thus, the consistent infectious exposure levels and their commensurability to known aerosol-MIDs establishes the plausibility of aerosol transmission of SARS-CoV-2. Using filtration at a rate exceeding the destabilization rate of aerosolized SARS-CoV-2 can reduce exposure below this infective dose. |
2308.16397 | Yufei Li | Yufei Li, Lingling Hou, Pengfei Liu | The Impact of Downgrading Protected Areas (PAD) on Biodiversity | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | We quantitatively assess the impacts of Downgrading Protected Areas (PAD) on
biodiversity in the U.S.. Results show that PAD events significantly reduce
biodiversity. The proximity to PAD events decreases the biodiversity by 26.0%
within 50 km compared with records of species further away from the PAD events.
We observe an overall 32.3% decrease in abundance after those nearest PAD
events are enacted. Abundance declines more in organisms living in contact with
water and non-mammals. Species abundance is more sensitive to the negative
impacts in areas where PAD events were later reversed, as well as in areas
close to protected areas belonging to the International Union for Conservation
of Nature (IUCN) category. The enacted PAD events between the period 1903 to
2018 in the U.S. lead to economic losses of approximately $689.95 million due
to decrease in abundance. Our results contribute to the understanding on the
impact of environmental interventions such as PAD events on biodiversity change
and provide important implications on biodiversity conservation policies.
| [
{
"created": "Thu, 31 Aug 2023 02:05:29 GMT",
"version": "v1"
}
] | 2023-09-01 | [
[
"Li",
"Yufei",
""
],
[
"Hou",
"Lingling",
""
],
[
"Liu",
"Pengfei",
""
]
] | We quantitatively assess the impacts of Downgrading Protected Areas (PAD) on biodiversity in the U.S.. Results show that PAD events significantly reduce biodiversity. The proximity to PAD events decreases the biodiversity by 26.0% within 50 km compared with records of species further away from the PAD events. We observe an overall 32.3% decrease in abundance after those nearest PAD events are enacted. Abundance declines more in organisms living in contact with water and non-mammals. Species abundance is more sensitive to the negative impacts in areas where PAD events were later reversed, as well as in areas close to protected areas belonging to the International Union for Conservation of Nature (IUCN) category. The enacted PAD events between the period 1903 to 2018 in the U.S. lead to economic losses of approximately $689.95 million due to decrease in abundance. Our results contribute to the understanding on the impact of environmental interventions such as PAD events on biodiversity change and provide important implications on biodiversity conservation policies. |
1911.05259 | Mariko I. Ito | Mariko I. Ito and Takaaki Ohnishi | Weighted Network Analysis of Biologically Relevant Chemical Spaces | 12 pages, 4 figures | null | null | null | q-bio.MN physics.soc-ph q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In cheminformatics, network representations of the space of compounds have
been suggested extensively. Among these, the threshold-network consists of
nodes representing molecules. In this network representation, two molecules are
connected by a link if the Tanimoto coefficient, a similarity measure, between
them exceeds a preset threshold. However, the topology of the threshold-network
is affected significantly by the preset threshold. In this study, we collected
the data of biologically relevant compounds and bioactivities. We defined the
weighted network where the weight of each link between the nodes equals the
Tanimoto coefficient between the bioactive compounds (nodes) without using the
threshold. We investigated the relationship between the strength of the link
connection and the bioactivity closeness in the weighted networks. We found
that compounds with significantly high or low bioactivity have a stronger
connection than those in the overall network.
| [
{
"created": "Wed, 13 Nov 2019 02:41:31 GMT",
"version": "v1"
}
] | 2019-11-14 | [
[
"Ito",
"Mariko I.",
""
],
[
"Ohnishi",
"Takaaki",
""
]
] | In cheminformatics, network representations of the space of compounds have been suggested extensively. Among these, the threshold-network consists of nodes representing molecules. In this network representation, two molecules are connected by a link if the Tanimoto coefficient, a similarity measure, between them exceeds a preset threshold. However, the topology of the threshold-network is affected significantly by the preset threshold. In this study, we collected the data of biologically relevant compounds and bioactivities. We defined the weighted network where the weight of each link between the nodes equals the Tanimoto coefficient between the bioactive compounds (nodes) without using the threshold. We investigated the relationship between the strength of the link connection and the bioactivity closeness in the weighted networks. We found that compounds with significantly high or low bioactivity have a stronger connection than those in the overall network. |
2105.10461 | Jeffrey Krichmar | Jeffrey L. Krichmar | Edelman's Steps Toward a Conscious Artifact | 7 pages, 1 figure, 1 table | null | null | null | q-bio.NC cs.AI | http://creativecommons.org/licenses/by/4.0/ | In 2006, during a meeting of a working group of scientists in La Jolla,
California at The Neurosciences Institute (NSI), Gerald Edelman described a
roadmap towards the creation of a Conscious Artifact. As far as I know, this
roadmap was not published. However, it did shape my thinking and that of many
others in the years since that meeting. This short paper, which is based on my
notes taken during the meeting, describes the key steps in this roadmap. I
believe it is as groundbreaking today as it was more than 15 years ago.
| [
{
"created": "Sat, 22 May 2021 00:13:06 GMT",
"version": "v1"
},
{
"created": "Tue, 25 May 2021 03:18:50 GMT",
"version": "v2"
}
] | 2021-05-26 | [
[
"Krichmar",
"Jeffrey L.",
""
]
] | In 2006, during a meeting of a working group of scientists in La Jolla, California at The Neurosciences Institute (NSI), Gerald Edelman described a roadmap towards the creation of a Conscious Artifact. As far as I know, this roadmap was not published. However, it did shape my thinking and that of many others in the years since that meeting. This short paper, which is based on my notes taken during the meeting, describes the key steps in this roadmap. I believe it is as groundbreaking today as it was more than 15 years ago. |
2009.09317 | Janani Ravi | Kewalin Samart, Phoebe Tuyishime, Arjun Krishnan, Janani Ravi | Reconciling Multiple Connectivity Scores for Drug Repurposing | KS and PT contributed equally to this work. Corresponding authors:
arjun@msu.edu (AK), janani@msu.edu (JR). Preprint contains 24 pages, 3
figures, 8 tables | null | null | null | q-bio.QM q-bio.GN | http://creativecommons.org/licenses/by-sa/4.0/ | The basis of several recent methods for drug repurposing is the key principle
that an efficacious drug will reverse the disease molecular 'signature' with
minimal side-effects. This principle was defined and popularized by the
influential 'connectivity map' study in 2006 regarding reversal relationships
between disease- and drug-induced gene expression profiles, quantified by a
disease-drug 'connectivity score.' Over the past 15 years, several studies have
proposed variations in calculating connectivity scores towards improving
accuracy and robustness in light of massive growth in reference drug profiles.
However, these variations have been formulated inconsistently using various
notations and terminologies even though they are based on a common set of
conceptual and statistical ideas. Therefore, we present a systematic
reconciliation of multiple disease-drug similarity metrics (ES, css, Sum,
Cosine, XSum, XCor, XSpe, XCos, EWCos) and connectivity scores (CS, RGES, NCS,
WCS, Tau, CSS, EMUDRA) by defining them using consistent notation and
terminology. In addition to providing clarity and deeper insights, this
coherent definition of connectivity scores and their relationships provides a
unified scheme that newer methods can adopt, enabling the computational
drug-development community to compare and investigate different approaches
easily. To facilitate the continuous and transparent integration of newer
methods, this article will be available as a live document
(https://jravilab.github.io/connectivity_scores) coupled with a GitHub
repository (https://github.com/jravilab/connectivity_scores) that any
researcher can build on and push changes to.
| [
{
"created": "Sat, 19 Sep 2020 23:01:37 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Oct 2020 20:57:41 GMT",
"version": "v2"
},
{
"created": "Sun, 28 Feb 2021 20:08:10 GMT",
"version": "v3"
}
] | 2021-03-02 | [
[
"Samart",
"Kewalin",
""
],
[
"Tuyishime",
"Phoebe",
""
],
[
"Krishnan",
"Arjun",
""
],
[
"Ravi",
"Janani",
""
]
] | The basis of several recent methods for drug repurposing is the key principle that an efficacious drug will reverse the disease molecular 'signature' with minimal side-effects. This principle was defined and popularized by the influential 'connectivity map' study in 2006 regarding reversal relationships between disease- and drug-induced gene expression profiles, quantified by a disease-drug 'connectivity score.' Over the past 15 years, several studies have proposed variations in calculating connectivity scores towards improving accuracy and robustness in light of massive growth in reference drug profiles. However, these variations have been formulated inconsistently using various notations and terminologies even though they are based on a common set of conceptual and statistical ideas. Therefore, we present a systematic reconciliation of multiple disease-drug similarity metrics (ES, css, Sum, Cosine, XSum, XCor, XSpe, XCos, EWCos) and connectivity scores (CS, RGES, NCS, WCS, Tau, CSS, EMUDRA) by defining them using consistent notation and terminology. In addition to providing clarity and deeper insights, this coherent definition of connectivity scores and their relationships provides a unified scheme that newer methods can adopt, enabling the computational drug-development community to compare and investigate different approaches easily. To facilitate the continuous and transparent integration of newer methods, this article will be available as a live document (https://jravilab.github.io/connectivity_scores) coupled with a GitHub repository (https://github.com/jravilab/connectivity_scores) that any researcher can build on and push changes to. |
1703.02850 | Shixiang Wan | Quan Zou, Shixiang Wan, Ying Ju, Jijun Tang and Xiangxiang Zeng | Pretata: predicting TATA binding proteins with novel features and
dimensionality reduction strategy | null | null | null | null | q-bio.QM cs.LG q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: It is necessary and essential to discover protein function from
the novel primary sequences. Wet lab experimental procedures are not only
time-consuming, but also costly, so predicting protein structure and function
reliably based only on amino acid sequence has significant value. TATA-binding
protein (TBP) is a kind of DNA binding protein, which plays a key role in the
transcription regulation. Our study proposed an automatic approach for
identifying TATA-binding proteins efficiently, accurately, and conveniently.
This method would guide for the special protein identification with
computational intelligence strategies. Results: Firstly, we proposed novel
fingerprint features for TBP based on pseudo amino acid composition,
physicochemical properties, and secondary structure. Secondly, hierarchical
features dimensionality reduction strategies were employed to improve the
performance furthermore. Currently, Pretata achieves 92.92% TATA-binding
protein prediction accuracy, which is better than all other existing methods.
Conclusions: The experiments demonstrate that our method could greatly improve
the prediction accuracy and speed, thus allowing large-scale NGS data
prediction to be practical. A web server is developed to facilitate the other
researchers, which can be accessed at http://server.malab.cn/preTata/.
| [
{
"created": "Tue, 7 Mar 2017 13:48:46 GMT",
"version": "v1"
}
] | 2017-03-09 | [
[
"Zou",
"Quan",
""
],
[
"Wan",
"Shixiang",
""
],
[
"Ju",
"Ying",
""
],
[
"Tang",
"Jijun",
""
],
[
"Zeng",
"Xiangxiang",
""
]
] | Background: It is necessary and essential to discover protein function from the novel primary sequences. Wet lab experimental procedures are not only time-consuming, but also costly, so predicting protein structure and function reliably based only on amino acid sequence has significant value. TATA-binding protein (TBP) is a kind of DNA binding protein, which plays a key role in the transcription regulation. Our study proposed an automatic approach for identifying TATA-binding proteins efficiently, accurately, and conveniently. This method would guide for the special protein identification with computational intelligence strategies. Results: Firstly, we proposed novel fingerprint features for TBP based on pseudo amino acid composition, physicochemical properties, and secondary structure. Secondly, hierarchical features dimensionality reduction strategies were employed to improve the performance furthermore. Currently, Pretata achieves 92.92% TATA-binding protein prediction accuracy, which is better than all other existing methods. Conclusions: The experiments demonstrate that our method could greatly improve the prediction accuracy and speed, thus allowing large-scale NGS data prediction to be practical. A web server is developed to facilitate the other researchers, which can be accessed at http://server.malab.cn/preTata/.
1302.4693 | Peter Combs | Peter A. Combs, Michael B. Eisen | Virtual in situs: Sequencing mRNA from cryo-sliced Drosophila embryos to
determine genome-wide spatial patterns of gene expression | 6 pages, 3 figures, 7 supplemental figures (available on request from
peter.combs@berkeley.edu) | PLoS ONE 8(8): e71820 2013 | 10.1371/journal.pone.0071820 | null | q-bio.GN | http://creativecommons.org/licenses/by/3.0/ | Complex spatial and temporal patterns of gene expression underlie embryo
differentiation, yet methods do not yet exist for the efficient genome-wide
determination of spatial expression patterns during development. In situ
imaging of transcripts and proteins is the gold-standard, but it is difficult
and time consuming to apply to an entire genome, even when highly automated.
Sequencing, in contrast, is fast and genome-wide, but is generally applied to
homogenized tissues, thereby discarding spatial information. It is likely that
these methods will ultimately converge, and we will be able to sequence RNAs in
situ, simultaneously determining their identity and location. As a step along
this path, we developed methods to cryosection individual blastoderm stage
Drosophila melanogaster embryos along the anterior-posterior axis and sequence
the mRNA isolated from each 25 micron slice. The spatial patterns of gene
expression we infer closely match patterns previously determined by in situ
hybridization and microscopy. We applied this method to generate a genome-wide
timecourse of spatial gene expression from shortly after fertilization through
gastrulation. We identify numerous genes with spatial patterns that have not
yet been described in the several ongoing systematic in situ based projects.
This simple experiment demonstrates the potential for combining careful
anatomical dissection with high-throughput sequencing to obtain spatially
resolved gene expression on a genome-wide scale.
| [
{
"created": "Tue, 19 Feb 2013 17:44:59 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Feb 2013 18:28:21 GMT",
"version": "v2"
},
{
"created": "Mon, 15 Apr 2013 23:37:01 GMT",
"version": "v3"
},
{
"created": "Thu, 23 May 2013 22:49:57 GMT",
"version": "v4"
}
] | 2013-08-15 | [
[
"Combs",
"Peter A.",
""
],
[
"Eisen",
"Michael B.",
""
]
] | Complex spatial and temporal patterns of gene expression underlie embryo differentiation, yet methods do not yet exist for the efficient genome-wide determination of spatial expression patterns during development. In situ imaging of transcripts and proteins is the gold-standard, but it is difficult and time consuming to apply to an entire genome, even when highly automated. Sequencing, in contrast, is fast and genome-wide, but is generally applied to homogenized tissues, thereby discarding spatial information. It is likely that these methods will ultimately converge, and we will be able to sequence RNAs in situ, simultaneously determining their identity and location. As a step along this path, we developed methods to cryosection individual blastoderm stage Drosophila melanogaster embryos along the anterior-posterior axis and sequence the mRNA isolated from each 25 micron slice. The spatial patterns of gene expression we infer closely match patterns previously determined by in situ hybridization and microscopy. We applied this method to generate a genome-wide timecourse of spatial gene expression from shortly after fertilization through gastrulation. We identify numerous genes with spatial patterns that have not yet been described in the several ongoing systematic in situ based projects. This simple experiment demonstrates the potential for combining careful anatomical dissection with high-throughput sequencing to obtain spatially resolved gene expression on a genome-wide scale. |
2302.07140 | Ernest Greene | Ernest Greene and Jack Morrison | Evaluating the Talbot-Plateau Law | 34 pages, five figures | null | null | null | q-bio.NC | http://creativecommons.org/publicdomain/zero/1.0/ | The Talbot-Plateau law asserts that when the flux (light energy) of a
flicker-fused stimulus equals the flux of a steady stimulus, they will appear
equal in brightness. To be perceived as flicker-fused, the frequency of the
flash sequence must be high enough that no flicker is perceived, i.e., it
appears to be a steady stimulus. Generally, this law has been accepted as being
true across all brightness levels, and across all combinations of flash
duration and frequency that generate the matching flux level. Two experiments
that were conducted to test the law found significant departures from its
predictions, but these were small relative to the large range of flash
intensities that were tested.
| [
{
"created": "Mon, 6 Feb 2023 15:38:27 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Apr 2023 16:32:35 GMT",
"version": "v2"
}
] | 2023-04-28 | [
[
"Greene",
"Ernest",
""
],
[
"Morrison",
"Jack",
""
]
] | The Talbot-Plateau law asserts that when the flux (light energy) of a flicker-fused stimulus equals the flux of a steady stimulus, they will appear equal in brightness. To be perceived as flicker-fused, the frequency of the flash sequence must be high enough that no flicker is perceived, i.e., it appears to be a steady stimulus. Generally, this law has been accepted as being true across all brightness levels, and across all combinations of flash duration and frequency that generate the matching flux level. Two experiments that were conducted to test the law found significant departures from its predictions, but these were small relative to the large range of flash intensities that were tested. |
q-bio/0410018 | Kevin E. Cahill | Kevin Cahill and V. Adrian Parsegian | Rydberg-London Potential for Diatomic Molecules and Unbonded Atom Pairs | Five pages, 10 figures | Journal of Chemical Physics 121 (#22), 10839-10842, 2004 | 10.1063/1.1830011 | null | q-bio.BM q-bio.QM | null | We propose and test a pair potential that is accurate at all relevant
distances and simple enough for use in large-scale computer simulations. A
combination of the Rydberg potential from spectroscopy and the London
inverse-sixth-power energy, the proposed form fits spectroscopically determined
potentials better than the Morse, Varshni, and Hulburt-Hirschfelder potentials
and much better than the Lennard-Jones and harmonic potentials. At long
distances, it goes smoothly to the correct London force appropriate for gases
and preserves van der Waals's "continuity of the gas and liquid states," which
is routinely violated by coefficients assigned to the Lennard-Jones 6-12 form.
| [
{
"created": "Sun, 17 Oct 2004 23:03:30 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Cahill",
"Kevin",
""
],
[
"Parsegian",
"V. Adrian",
""
]
] | We propose and test a pair potential that is accurate at all relevant distances and simple enough for use in large-scale computer simulations. A combination of the Rydberg potential from spectroscopy and the London inverse-sixth-power energy, the proposed form fits spectroscopically determined potentials better than the Morse, Varshni, and Hulburt-Hirschfelder potentials and much better than the Lennard-Jones and harmonic potentials. At long distances, it goes smoothly to the correct London force appropriate for gases and preserves van der Waals's "continuity of the gas and liquid states," which is routinely violated by coefficients assigned to the Lennard-Jones 6-12 form.
1210.1237 | Sean Stromberg | Sean P Stromberg | Multisite Population Epigenetic Model of Passive Demethylation | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The methylation of DNA regulates gene expression. On cell division the
methylation state of the DNA is typically inherited from parent to daughter
cells. While the chemical bond between the methyl group and the DNA is very
strong, changes to the methylation state do occur and are observed to occur
rapidly in response to external stimulus. The loss of methylation can be active
where an enzyme physically breaks the bond, or passive where on cell division the
newly constructed strand of DNA is not properly inherited.
Here we present a mathematical model of single locus passive demethylation
for a dividing population of cells. The model describes the heterogeneity in the
population expected from passive mechanisms. We see that even when the site
specific probabilities of passive demethylation are independent, conservation
of methylation on the inherited strand gives rise to site-site correlations of
the methylation state. We then extend the model to incorporate correlations
between sites in the locus for demethylation rates. Biologically, correlations
in demethylation rates might correspond to locus wide changes such as the
inability of methyltransferase to access the locus. We also look at the effects
of selection on the multicellular population.
The model of passive demethylation not only provides a tool for measurement
of parameters in loci-specific cases where passive demethylation is the
dominant mechanism, but also provides a baseline in the search for active
mechanisms. The model tells us that there are states of methylation
inaccessible by passive mechanisms. Observation of these states constitutes
evidence of active mechanisms, either de novo methylation or enzymatic
demethylation. We also see that selection and passive demethylation combined
can give rise to a stable heterogeneous distribution of gene methylation states
in a population.
| [
{
"created": "Wed, 3 Oct 2012 20:54:35 GMT",
"version": "v1"
}
] | 2012-10-05 | [
[
"Stromberg",
"Sean P",
""
]
] | The methylation of DNA regulates gene expression. On cell division the methylation state of the DNA is typically inherited from parent to daughter cells. While the chemical bond between the methyl group and the DNA is very strong, changes to the methylation state do occur and are observed to occur rapidly in response to external stimulus. The loss of methylation can be active where an enzyme physically breaks the bond, or passive where on cell division the newly constructed strand of DNA is not properly inherited. Here we present a mathematical model of single locus passive demethylation for a dividing population of cells. The model describes the heterogeneity in the population expected from passive mechanisms. We see that even when the site specific probabilities of passive demethylation are independent, conservation of methylation on the inherited strand gives rise to site-site correlations of the methylation state. We then extend the model to incorporate correlations between sites in the locus for demethylation rates. Biologically, correlations in demethylation rates might correspond to locus wide changes such as the inability of methyltransferase to access the locus. We also look at the effects of selection on the multicellular population. The model of passive demethylation not only provides a tool for measurement of parameters in loci-specific cases where passive demethylation is the dominant mechanism, but also provides a baseline in the search for active mechanisms. The model tells us that there are states of methylation inaccessible by passive mechanisms. Observation of these states constitutes evidence of active mechanisms, either de novo methylation or enzymatic demethylation. We also see that selection and passive demethylation combined can give rise to a stable heterogeneous distribution of gene methylation states in a population.
2311.18134 | Sanjeev Namjoshi | Roy E. Clymer and Sanjeev V. Namjoshi | A computational model of behavioral adaptation to solve the credit
assignment problem | 18 pages, 9 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The adaptive fitness of an organism in its ecological niche is highly reliant
upon its ability to associate an environmental or internal stimulus with a
behavior response through reinforcement. This simple but powerful observation
has been successfully applied in a number of contexts within computational
neuroscience and reinforcement learning to model both human and animal
behaviors. However, a critical challenge faced by these models is the credit
assignment problem which asks how past behavior comes to be associated with a
delayed reinforcement signal. In this paper we reformulate the credit
assignment problem to ask how past stimuli come to be linked to adaptive
behavioral responses in the context of a simple neuronal circuit. We propose a
biologically plausible variant of a spiking neural network which can model a
wide variety of behavioral, learning, and evolutionary phenomena. Our model
suggests one fundamental mechanism, potentially in use in the brains of both
simple and complex organisms, that would allow it to associate a behavior with
an adaptive response. We present results that showcase the model's versatility
and biological plausibility in a number of tasks related to classical and
operant conditioning including behavioral chaining. We then provide further
simulations to demonstrate how adaptive behaviors such as reflexes and simple
category detection may have evolved using our model. Our results indicate the
potential for further modifications and extensions of our model to replicate
more sophisticated and biologically plausible behavioral, learning, and
intelligence phenomena found throughout the animal kingdom.
| [
{
"created": "Wed, 29 Nov 2023 22:53:28 GMT",
"version": "v1"
}
] | 2023-12-01 | [
[
"Clymer",
"Roy E.",
""
],
[
"Namjoshi",
"Sanjeev V.",
""
]
] | The adaptive fitness of an organism in its ecological niche is highly reliant upon its ability to associate an environmental or internal stimulus with a behavior response through reinforcement. This simple but powerful observation has been successfully applied in a number of contexts within computational neuroscience and reinforcement learning to model both human and animal behaviors. However, a critical challenge faced by these models is the credit assignment problem which asks how past behavior comes to be associated with a delayed reinforcement signal. In this paper we reformulate the credit assignment problem to ask how past stimuli come to be linked to adaptive behavioral responses in the context of a simple neuronal circuit. We propose a biologically plausible variant of a spiking neural network which can model a wide variety of behavioral, learning, and evolutionary phenomena. Our model suggests one fundamental mechanism, potentially in use in the brains of both simple and complex organisms, that would allow it to associate a behavior with an adaptive response. We present results that showcase the model's versatility and biological plausibility in a number of tasks related to classical and operant conditioning including behavioral chaining. We then provide further simulations to demonstrate how adaptive behaviors such as reflexes and simple category detection may have evolved using our model. Our results indicate the potential for further modifications and extensions of our model to replicate more sophisticated and biologically plausible behavioral, learning, and intelligence phenomena found throughout the animal kingdom. |
2201.10960 | Jeremy Clark | Jeremy S. C. Clark, Anna Salacka, Agnieszka Boron, Thierry van de
Wetering, Konrad Podsiadlo, Kamila Rydzewska, Krzysztof Safranow, Kazimierz
Ciechanowski, Leszek Domanski, Andrzej Ciechanowicz | Repetition and reproduction of preclinical medical studies: taking a
leaf from the plant sciences with consideration of generalised systematic
errors | 40 pages, 1 figure, 3 tables. Supplemental files at
https://github.com/Abiologist/Significance.git | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Reproduction of pre-clinical results has a high failure rate. The fundamental
methodology including replication ("protocol") for hypothesis
testing/validation to a state allowing inference, varies within medical and
plant sciences with little justification. Here, five protocols are
distinguished which deal differently with systematic/random errors and vary
considerably in result veracity. Aim: to compare prevalence of protocols
(defined in text). Medical/plant science articles from 2017/2019 were surveyed:
713 random articles assessed for eligibility for counts: first (with p-values):
1) non-replicated; 2) global; 3) triple-result protocols; second: 4)
replication-error protocol; 5) meta-analyses. Inclusion criteria:
human/plant/fungal studies with categorical groups. Exclusion criteria: phased
clinical trials, pilot studies, cases, reviews, technology, rare subjects,
-omic studies. Abbreviated PICOS question: which protocol was evident for a
main result with categorically distinct group difference(s) ? Electronic
sources: Journal Citation Reports 2017/2019, Google. Triplication prevalence
differed dramatically between sciences (both years p<10-16; cluster-adjusted
chi-squared tests): From 320 studies (80/science/year): in 2017, 53 (66%, 95%
confidence interval (C.I.) 56%:77%) and in 2019, 48 (60%, C.I. 49%:71%) plant
studies had triple-result or triplicated global protocols, compared with, in
both years, 4 (5%, C.I. 0.19%:9.8%) medical studies. Plant sciences had a
higher prevalence of protocols more likely to counter generalised systematic
errors (the most likely cause of false positives) and random error than
non-replicated protocols, without suffering from serious flaws found with
random-Institutes protocols. It is suggested that a triple-result
(organised-reproduction) protocol, with Institute consortia, is likely to solve
most problems connected with the replicability crisis.
| [
{
"created": "Wed, 26 Jan 2022 14:24:34 GMT",
"version": "v1"
}
] | 2022-01-27 | [
[
"Clark",
"Jeremy S. C.",
""
],
[
"Salacka",
"Anna",
""
],
[
"Boron",
"Agnieszka",
""
],
[
"van de Wetering",
"Thierry",
""
],
[
"Podsiadlo",
"Konrad",
""
],
[
"Rydzewska",
"Kamila",
""
],
[
"Safranow",
"Krzysztof",
""
],
[
"Ciechanowski",
"Kazimierz",
""
],
[
"Domanski",
"Leszek",
""
],
[
"Ciechanowicz",
"Andrzej",
""
]
] | Reproduction of pre-clinical results has a high failure rate. The fundamental methodology including replication ("protocol") for hypothesis testing/validation to a state allowing inference, varies within medical and plant sciences with little justification. Here, five protocols are distinguished which deal differently with systematic/random errors and vary considerably in result veracity. Aim: to compare prevalence of protocols (defined in text). Medical/plant science articles from 2017/2019 were surveyed: 713 random articles assessed for eligibility for counts: first (with p-values): 1) non-replicated; 2) global; 3) triple-result protocols; second: 4) replication-error protocol; 5) meta-analyses. Inclusion criteria: human/plant/fungal studies with categorical groups. Exclusion criteria: phased clinical trials, pilot studies, cases, reviews, technology, rare subjects, -omic studies. Abbreviated PICOS question: which protocol was evident for a main result with categorically distinct group difference(s)? Electronic sources: Journal Citation Reports 2017/2019, Google. Triplication prevalence differed dramatically between sciences (both years p<10^-16; cluster-adjusted chi-squared tests): From 320 studies (80/science/year): in 2017, 53 (66%, 95% confidence interval (C.I.) 56%:77%) and in 2019, 48 (60%, C.I. 49%:71%) plant studies had triple-result or triplicated global protocols, compared with, in both years, 4 (5%, C.I. 0.19%:9.8%) medical studies. Plant sciences had a higher prevalence of protocols more likely to counter generalised systematic errors (the most likely cause of false positives) and random error than non-replicated protocols, without suffering from serious flaws found with random-Institutes protocols. It is suggested that a triple-result (organised-reproduction) protocol, with Institute consortia, is likely to solve most problems connected with the replicability crisis.
2402.18602 | Shin-Ichi Ito | Tomoya Aono, Tatsuya Sakamoto, Toyoho Ishimura, Motomitsu Takahashi,
Tohya Yasuda, Satoshi Kitajima, Kozue Nishida, Takayoshi Matsuura, Akito
Ikari, Shin-ichi Ito | Estimation of migration histories of the Japanese sardine in the Sea of
Japan by combining the microscale stable isotope analysis of otoliths and a
data assimilation model | 35 pages including 8 figures and 4 supplementary figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The Japanese sardine (Sardinops melanostictus) is a small pelagic fish found
in the Sea of Japan, the marginal sea of the western North Pacific. It is an
important species for regional fisheries, but their transportation and
migration patterns during early life stages remain unclear. In this study, we
analyzed the stable oxygen isotope ratios of otoliths of young-of-the-year (age
0) Japanese sardines collected from the northern offshore and southern coastal
areas of the Sea of Japan in 2015 and 2016. The ontogenetic shifts of the
geographic distribution were estimated by comparing the profiles of life-long
isotope ratios and temporally varying isoscape, which was calculated using the
temperature and salinity fields produced by an ocean data assimilation model.
Individuals that were collected in the northern and southern areas hatched and
stayed in the southern areas (west offshore of Kyushu) until late June, and
thereafter, they can be distinguished into two groups: one that migrated
northward at a shallow layer and one that stayed around the southern area in the
deep layer. A comparison of somatic growth trajectories of the two groups,
which was reconstructed based on otolith microstructure analysis, suggested
that individuals that migrated northward had significantly larger body lengths
in late June than those that stayed in the southern area. These results
indicate that young-of-the-year Japanese sardines that hatched in the southern
area may have been forced to choose one of two strategies to avoid extremely
high water temperatures within seasonal and geographical limits. These include
migrating northward or moving to deeper layers. Our results indicate that the
environmental variabilities in the southern area could critically impact
sardine population dynamics in the Sea of Japan.
| [
{
"created": "Wed, 28 Feb 2024 02:13:43 GMT",
"version": "v1"
}
] | 2024-03-01 | [
[
"Aono",
"Tomoya",
""
],
[
"Sakamoto",
"Tatsuya",
""
],
[
"Ishimura",
"Toyoho",
""
],
[
"Takahashi",
"Motomitsu",
""
],
[
"Yasuda",
"Tohya",
""
],
[
"Kitajima",
"Satoshi",
""
],
[
"Nishida",
"Kozue",
""
],
[
"Matsuura",
"Takayoshi",
""
],
[
"Ikari",
"Akito",
""
],
[
"Ito",
"Shin-ichi",
""
]
] | The Japanese sardine (Sardinops melanostictus) is a small pelagic fish found in the Sea of Japan, the marginal sea of the western North Pacific. It is an important species for regional fisheries, but their transportation and migration patterns during early life stages remain unclear. In this study, we analyzed the stable oxygen isotope ratios of otoliths of young-of-the-year (age 0) Japanese sardines collected from the northern offshore and southern coastal areas of the Sea of Japan in 2015 and 2016. The ontogenetic shifts of the geographic distribution were estimated by comparing the profiles of life-long isotope ratios and temporally varying isoscape, which was calculated using the temperature and salinity fields produced by an ocean data assimilation model. Individuals that were collected in the northern and southern areas hatched and stayed in the southern areas (west offshore of Kyushu) until late June, and thereafter, they can be distinguished into two groups: one that migrated northward at a shallow layer and one that stayed around the southern area in the deep layer. A comparison of somatic growth trajectories of the two groups, which was reconstructed based on otolith microstructure analysis, suggested that individuals that migrated northward had significantly larger body lengths in late June than those that stayed in the southern area. These results indicate that young-of-the-year Japanese sardines that hatched in the southern area may have been forced to choose one of two strategies to avoid extremely high water temperatures within seasonal and geographical limits. These include migrating northward or moving to deeper layers. Our results indicate that the environmental variabilities in the southern area could critically impact sardine population dynamics in the Sea of Japan.
1705.09156 | Christopher Buckley | Christopher L. Buckley, Chang Sub Kim, Simon McGregor and Anil K. Seth | The free energy principle for action and perception: A mathematical
review | 77 pages, 2 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The 'free energy principle' (FEP) has been suggested to provide a unified
theory of the brain, integrating data and theory relating to action,
perception, and learning. The theory and implementation of the FEP combines
insights from Helmholtzian 'perception as inference', machine learning theory,
and statistical thermodynamics. Here, we provide a detailed mathematical
evaluation of a suggested biologically plausible implementation of the FEP that
has been widely used to develop the theory. Our objectives are (i) to describe
within a single article the mathematical structure of this implementation of
the FEP; (ii) provide a simple but complete agent-based model utilising the
FEP; (iii) disclose the assumption structure of this implementation of the FEP
to help elucidate its significance for the brain sciences.
| [
{
"created": "Wed, 24 May 2017 11:40:27 GMT",
"version": "v1"
}
] | 2017-05-26 | [
[
"Buckley",
"Christopher L.",
""
],
[
"Kim",
"Chang Sub",
""
],
[
"McGregor",
"Simon",
""
],
[
"Seth",
"Anil K.",
""
]
] | The 'free energy principle' (FEP) has been suggested to provide a unified theory of the brain, integrating data and theory relating to action, perception, and learning. The theory and implementation of the FEP combines insights from Helmholtzian 'perception as inference', machine learning theory, and statistical thermodynamics. Here, we provide a detailed mathematical evaluation of a suggested biologically plausible implementation of the FEP that has been widely used to develop the theory. Our objectives are (i) to describe within a single article the mathematical structure of this implementation of the FEP; (ii) provide a simple but complete agent-based model utilising the FEP; (iii) disclose the assumption structure of this implementation of the FEP to help elucidate its significance for the brain sciences. |
1612.08790 | Su-Chan Park | Sungmin Hwang, Su-Chan Park, Joachim Krug | Genotypic complexity of Fisher's geometric model | 27 pages, 14 figures, 2 tables, minor changes (published version) | Genetics 206, 1049 (2017) | 10.1534/genetics.116.199497 | null | q-bio.PE cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fisher's geometric model was originally introduced to argue that complex
adaptations must occur in small steps because of pleiotropic constraints. When
supplemented with the assumption of additivity of mutational effects on
phenotypic traits, it provides a simple mechanism for the emergence of
genotypic epistasis from the nonlinear mapping of phenotypes to fitness. Of
particular interest is the occurrence of reciprocal sign epistasis, which is a
necessary condition for multipeaked genotypic fitness landscapes. Here we
compute the probability that a pair of randomly chosen mutations interacts sign
epistatically, which is found to decrease with increasing phenotypic dimension
$n$, and varies nonmonotonically with the distance from the phenotypic optimum.
We then derive expressions for the mean number of fitness maxima in genotypic
landscapes comprised of all combinations of $L$ random mutations. This number
increases exponentially with $L$, and the corresponding growth rate is used as
a measure of the complexity of the landscape. The dependence of the complexity
on the model parameters is found to be surprisingly rich, and three distinct
phases characterized by different landscape structures are identified. Our
analysis shows that the phenotypic dimension, which is often referred to as
phenotypic complexity, does not generally correlate with the complexity of
fitness landscapes and that even organisms with a single phenotypic trait can
have complex landscapes. Our results further inform the interpretation of
experiments where the parameters of Fisher's model have been inferred from
data, and help to elucidate which features of empirical fitness landscapes can
be described by this model.
| [
{
"created": "Wed, 28 Dec 2016 02:32:14 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Mar 2017 01:22:30 GMT",
"version": "v2"
},
{
"created": "Fri, 9 Jun 2017 05:14:16 GMT",
"version": "v3"
}
] | 2017-06-12 | [
[
"Hwang",
"Sungmin",
""
],
[
"Park",
"Su-Chan",
""
],
[
"Krug",
"Joachim",
""
]
] | Fisher's geometric model was originally introduced to argue that complex adaptations must occur in small steps because of pleiotropic constraints. When supplemented with the assumption of additivity of mutational effects on phenotypic traits, it provides a simple mechanism for the emergence of genotypic epistasis from the nonlinear mapping of phenotypes to fitness. Of particular interest is the occurrence of reciprocal sign epistasis, which is a necessary condition for multipeaked genotypic fitness landscapes. Here we compute the probability that a pair of randomly chosen mutations interacts sign epistatically, which is found to decrease with increasing phenotypic dimension $n$, and varies nonmonotonically with the distance from the phenotypic optimum. We then derive expressions for the mean number of fitness maxima in genotypic landscapes comprised of all combinations of $L$ random mutations. This number increases exponentially with $L$, and the corresponding growth rate is used as a measure of the complexity of the landscape. The dependence of the complexity on the model parameters is found to be surprisingly rich, and three distinct phases characterized by different landscape structures are identified. Our analysis shows that the phenotypic dimension, which is often referred to as phenotypic complexity, does not generally correlate with the complexity of fitness landscapes and that even organisms with a single phenotypic trait can have complex landscapes. Our results further inform the interpretation of experiments where the parameters of Fisher's model have been inferred from data, and help to elucidate which features of empirical fitness landscapes can be described by this model. |
1602.01612 | Michael Sheinman | Michael Sheinman and Anna Ramisch and Florian Massip and Peter F.
Arndt | Evolutionary dynamics of selfish DNA generates pseudo-linguistic
features of genomes | 9 pages, 5 figures | Scientific Reports 6, Article number: 30851 (2016) | 10.1038/srep30851 | null | q-bio.PE physics.bio-ph q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the sequencing of large genomes, many statistical features of their
sequences have been found. One intriguing feature is that certain subsequences
are much more abundant than others. In fact, abundances of subsequences of a
given length are distributed with a scale-free power-law tail, resembling
properties of human texts, such as Zipf's law. Despite recent efforts, the
understanding of this phenomenon is still lacking. Here we find that selfish
DNA elements, such as those belonging to the Alu family of repeats, dominate
the power-law tail. Interestingly, for the Alu elements the power-law exponent
increases with the length of the considered subsequences. Motivated by these
observations, we develop a model of selfish DNA expansion. The predictions of
this model qualitatively and quantitatively agree with the empirical
observations. This allows us to estimate parameters for the process of selfish
DNA spreading in a genome during its evolution. The obtained results shed light
on how evolution of selfish DNA elements shapes non-trivial statistical
properties of genomes.
| [
{
"created": "Thu, 4 Feb 2016 10:08:04 GMT",
"version": "v1"
}
] | 2016-08-05 | [
[
"Sheinman",
"Michael",
""
],
[
"Ramisch",
"Anna",
""
],
[
"Massip",
"Florian",
""
],
[
"Arndt",
"Peter F.",
""
]
] | Since the sequencing of large genomes, many statistical features of their sequences have been found. One intriguing feature is that certain subsequences are much more abundant than others. In fact, abundances of subsequences of a given length are distributed with a scale-free power-law tail, resembling properties of human texts, such as Zipf's law. Despite recent efforts, the understanding of this phenomenon is still lacking. Here we find that selfish DNA elements, such as those belonging to the Alu family of repeats, dominate the power-law tail. Interestingly, for the Alu elements the power-law exponent increases with the length of the considered subsequences. Motivated by these observations, we develop a model of selfish DNA expansion. The predictions of this model qualitatively and quantitatively agree with the empirical observations. This allows us to estimate parameters for the process of selfish DNA spreading in a genome during its evolution. The obtained results shed light on how evolution of selfish DNA elements shapes non-trivial statistical properties of genomes.
2208.10403 | Somya Mehra | Somya Mehra, Peter G. Taylor, James M. McCaw, Jennifer A. Flegg | A hybrid transmission model for Plasmodium vivax accounting for
superinfection, immunity and the hypnozoite reservoir | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Malaria is a vector-borne disease that exacts a grave toll in the Global
South. The epidemiology of Plasmodium vivax, the most geographically expansive
agent of human malaria, is characterised by the accrual of a reservoir of
dormant parasites known as hypnozoites. Relapses, arising from hypnozoite
activation events, comprise the majority of the blood-stage infection burden,
with implications for the acquisition of immunity and the distribution of
superinfection. Here, we construct a hybrid transmission model for P. vivax
that concurrently accounts for the accrual of the hypnozoite reservoir,
(blood-stage) superinfection and the acquisition of immunity. We begin by
analytically characterising within-host dynamics as a function of
mosquito-to-human transmission intensity, extending our previous model
(comprising an open network of infinite server queues) to capture a discretised
immunity level. To model transmission-blocking and antidisease immunity, we
allow for geometric decay in the respective probabilities of successful
human-to-mosquito transmission and symptomatic blood-stage infection as a
function of this immunity level. Under a hybrid approximation -- whereby
probabilistic within-host distributions are cast as expected population-level
proportions -- we couple host and vector dynamics to recover a deterministic
compartmental model in line with Ross-Macdonald theory. We then perform a
steady-state analysis for this compartmental model, informed by the (analytic)
distributions derived at the within-host level. To characterise transient
dynamics, we derive a reduced system of integrodifferential equations (IDEs),
likewise informed by our within-host queueing network, allowing us to recover
population-level distributions for various quantities of epidemiological
interest. Our model provides insights into important, but poorly understood,
epidemiological features of P. vivax.
| [
{
"created": "Mon, 22 Aug 2022 15:41:08 GMT",
"version": "v1"
}
] | 2022-08-23 | [
[
"Mehra",
"Somya",
""
],
[
"Taylor",
"Peter G.",
""
],
[
"McCaw",
"James M.",
""
],
[
"Flegg",
"Jennifer A.",
""
]
] | Malaria is a vector-borne disease that exacts a grave toll in the Global South. The epidemiology of Plasmodium vivax, the most geographically expansive agent of human malaria, is characterised by the accrual of a reservoir of dormant parasites known as hypnozoites. Relapses, arising from hypnozoite activation events, comprise the majority of the blood-stage infection burden, with implications for the acquisition of immunity and the distribution of superinfection. Here, we construct a hybrid transmission model for P. vivax that concurrently accounts for the accrual of the hypnozoite reservoir, (blood-stage) superinfection and the acquisition of immunity. We begin by analytically characterising within-host dynamics as a function of mosquito-to-human transmission intensity, extending our previous model (comprising an open network of infinite server queues) to capture a discretised immunity level. To model transmission-blocking and antidisease immunity, we allow for geometric decay in the respective probabilities of successful human-to-mosquito transmission and symptomatic blood-stage infection as a function of this immunity level. Under a hybrid approximation -- whereby probabilistic within-host distributions are cast as expected population-level proportions -- we couple host and vector dynamics to recover a deterministic compartmental model in line with Ross-Macdonald theory. We then perform a steady-state analysis for this compartmental model, informed by the (analytic) distributions derived at the within-host level. To characterise transient dynamics, we derive a reduced system of integrodifferential equations (IDEs), likewise informed by our within-host queueing network, allowing us to recover population-level distributions for various quantities of epidemiological interest. Our model provides insights into important, but poorly understood, epidemiological features of P. vivax. |
2108.11064 | Mahmoud Hassan | Aya Kabbara, Gabriel Robert, Mohamad Khalil, Marc Verin, Pascal
Benquet, Mahmoud Hassan | An Electroencephalography connectome predictive model of major
depressive disorder severity | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Emerging evidence shows that major depressive disorder (MDD) is associated
with disruptions of brain structural and functional networks, rather than
impairment of isolated brain regions. Thus, connectome-based models capable of
predicting the depression severity at the individual level can be clinically
useful. Here, we applied a machine-learning approach to predict the severity of
depression using resting-state networks derived from source-reconstructed
Electroencephalography (EEG) signals. Using regression models and three
independent EEG datasets (N=328), we tested whether resting-state functional
connectivity could predict individual depression scores. On the first dataset,
results showed that individuals' scores could be reasonably predicted (r=0.61,
p=4 x 10^-18) using intrinsic functional connectivity in the EEG alpha band
(8-13 Hz). In particular, the brain regions which contributed the most to the
predictive network belong to the default mode network. We further tested the
predictive potential of the established model by conducting two external
validations (N1=53, N2=154). Results showed highly significant correlations
between the predicted and the measured depression scale scores (r1=0.49,
r2=0.37, p<0.001). These findings lay the foundation for developing
generalizable and scientifically interpretable EEG network-based markers that
can ultimately support clinicians in a biologically based characterization of
MDD.
| [
{
"created": "Wed, 25 Aug 2021 06:20:36 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Oct 2021 07:26:46 GMT",
"version": "v2"
}
] | 2021-10-18 | [
[
"Kabbara",
"Aya",
""
],
[
"Robert",
"Gabriel",
""
],
[
"Khalil",
"Mohamad",
""
],
[
"Verin",
"Marc",
""
],
[
"Benquet",
"Pascal",
""
],
[
"Hassan",
"Mahmoud",
""
]
] | Emerging evidence shows that major depressive disorder (MDD) is associated with disruptions of brain structural and functional networks, rather than impairment of isolated brain regions. Thus, connectome-based models capable of predicting the depression severity at the individual level can be clinically useful. Here, we applied a machine-learning approach to predict the severity of depression using resting-state networks derived from source-reconstructed Electroencephalography (EEG) signals. Using regression models and three independent EEG datasets (N=328), we tested whether resting-state functional connectivity could predict individual depression scores. On the first dataset, results showed that individuals' scores could be reasonably predicted (r=0.61, p=4 x 10^-18) using intrinsic functional connectivity in the EEG alpha band (8-13 Hz). In particular, the brain regions which contributed the most to the predictive network belong to the default mode network. We further tested the predictive potential of the established model by conducting two external validations (N1=53, N2=154). Results showed highly significant correlations between the predicted and the measured depression scale scores (r1=0.49, r2=0.37, p<0.001). These findings lay the foundation for developing generalizable and scientifically interpretable EEG network-based markers that can ultimately support clinicians in a biologically based characterization of MDD.
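The connectome-predictive pipeline summarized in this record (connectivity features regressed onto symptom scores, then evaluated by correlating predicted with measured scores on held-out subjects) can be sketched in a few lines. This is a toy on synthetic data, not the authors' pipeline; the feature count, ridge penalty, and train/test split are all illustrative assumptions.

```python
import numpy as np

# Toy connectome-predictive-modeling sketch (synthetic data, NOT the paper's
# pipeline): regress functional-connectivity features onto symptom scores
# with closed-form ridge regression, then score held-out subjects with
# Pearson's r between predicted and measured values -- the same metric as
# the reported r values.
rng = np.random.default_rng(7)
n_subjects, n_edges = 80, 15                       # illustrative sizes
X = rng.normal(size=(n_subjects, n_edges))         # e.g. alpha-band edge strengths
w_true = rng.normal(size=n_edges)                  # hidden "predictive network"
y = X @ w_true + rng.normal(scale=0.5, size=n_subjects)  # symptom scores

X_tr, y_tr = X[:60], y[:60]                        # training subjects
X_te, y_te = X[60:], y[60:]                        # held-out "validation"

lam = 1.0                                          # ridge penalty (assumed)
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_edges), X_tr.T @ y_tr)

pred = X_te @ w
r = np.corrcoef(pred, y_te)[0, 1]
```

The closed-form solve stands in for whatever regularized regression a real study would cross-validate.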
1806.10639 | Vianey Leos Barajas | Vianey Leos-Barajas and Th\'eo Michelot | An Introduction to Animal Movement Modeling with Hidden Markov Models
using Stan for Bayesian Inference | 29 pages, 15 figures | null | null | null | q-bio.QM stat.AP | http://creativecommons.org/licenses/by/4.0/ | Hidden Markov models (HMMs) are popular time series models in many fields
including ecology, economics and genetics. HMMs can be defined over discrete or
continuous time, though here we only cover the former. In the field of movement
ecology in particular, HMMs have become a popular tool for the analysis of
movement data because of their ability to connect observed movement data to an
underlying latent process, generally interpreted as the animal's unobserved
behavior. Further, we model the tendency to persist in a given behavior over
time. Notation presented here will generally follow the format of Zucchini et
al. (2016) and cover HMMs applied in an unsupervised case to animal movement
data, specifically positional data. We provide Stan code to analyze movement
data of the wild haggis as presented first in Michelot et al. (2016).
| [
{
"created": "Wed, 27 Jun 2018 18:41:15 GMT",
"version": "v1"
}
] | 2018-06-29 | [
[
"Leos-Barajas",
"Vianey",
""
],
[
"Michelot",
"Théo",
""
]
] | Hidden Markov models (HMMs) are popular time series models in many fields including ecology, economics and genetics. HMMs can be defined over discrete or continuous time, though here we only cover the former. In the field of movement ecology in particular, HMMs have become a popular tool for the analysis of movement data because of their ability to connect observed movement data to an underlying latent process, generally interpreted as the animal's unobserved behavior. Further, we model the tendency to persist in a given behavior over time. Notation presented here will generally follow the format of Zucchini et al. (2016) and cover HMMs applied in an unsupervised case to animal movement data, specifically positional data. We provide Stan code to analyze movement data of the wild haggis as presented first in Michelot et al. (2016).
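The HMM machinery this tutorial builds on can be sketched outside Stan: the forward algorithm computes the likelihood of observed "movement" data under a 2-state HMM with state persistence. This is a minimal Python illustration, not the paper's Stan code, and every parameter value below is an assumption.

```python
import numpy as np

# Minimal 2-state HMM sketch: simulate a persistent latent behavior chain
# (e.g. "encamped" short steps vs "exploring" long steps) and evaluate the
# log-likelihood with the log-space forward recursion.

def normal_logpdf(x, mu, sd):
    return -0.5 * np.log(2 * np.pi * sd**2) - (x - mu) ** 2 / (2 * sd**2)

def forward_loglik(obs, delta, Gamma, means, sds):
    """Log-likelihood via the forward algorithm (log-sum-exp for stability)."""
    logB = np.stack([normal_logpdf(obs, m, s) for m, s in zip(means, sds)], axis=1)
    log_alpha = np.log(delta) + logB[0]
    for t in range(1, len(obs)):
        m = log_alpha.max()
        log_alpha = np.log(np.exp(log_alpha - m) @ Gamma) + m + logB[t]
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())

rng = np.random.default_rng(1)
Gamma = np.array([[0.9, 0.1], [0.2, 0.8]])   # persistence in each behavior
delta = np.array([2 / 3, 1 / 3])             # stationary distribution of Gamma
means, sds = (1.0, 8.0), (1.0, 2.0)          # illustrative step-length emissions

T = 300
states = np.empty(T, dtype=int)
states[0] = rng.choice(2, p=delta)
for t in range(1, T):
    states[t] = rng.choice(2, p=Gamma[states[t - 1]])
obs = rng.normal(np.array(means)[states], np.array(sds)[states])

ll = forward_loglik(obs, delta, Gamma, means, sds)
```

In Stan the same recursion sits inside the `model` block; here it is exposed directly so the likelihood can be inspected or optimized by hand.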
2203.14964 | Jianhua Xing | Jianhua Xing | Reconstructing data-driven governing equations for cell phenotypic
transitions: integration of data science and systems biology | 38 pages, 3 figures | null | 10.1088/1478-3975/ac8c16 | null | q-bio.QM physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Cells with the same genome can exist in different phenotypes and can change
between distinct phenotypes when subject to specific stimuli and
microenvironments. Some examples include cell differentiation during
development, reprogramming for induced pluripotent stem cells and
transdifferentiation, cancer metastasis and fibrosis development. The
regulation and dynamics of cell phenotypic conversion is a fundamental problem
in biology, and has a long history of being studied within the formalism of
dynamical systems. A main challenge for mechanism-driven modeling studies is
acquiring a sufficient amount of quantitative information for constraining model
parameters. Advances in quantitative approaches, especially high-throughput
single-cell techniques, have accelerated the emergence of a new direction for
reconstructing the governing dynamical equations of a cellular system from
quantitative single-cell data, beyond the dominant statistical approaches. Here
I review a selected number of recent studies using live- and fixed-cell data
and provide my perspective on future development.
| [
{
"created": "Sat, 26 Mar 2022 18:24:44 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Aug 2022 12:19:06 GMT",
"version": "v2"
}
] | 2022-09-28 | [
[
"Xing",
"Jianhua",
""
]
] | Cells with the same genome can exist in different phenotypes and can change between distinct phenotypes when subject to specific stimuli and microenvironments. Some examples include cell differentiation during development, reprogramming for induced pluripotent stem cells and transdifferentiation, cancer metastasis and fibrosis development. The regulation and dynamics of cell phenotypic conversion is a fundamental problem in biology, and has a long history of being studied within the formalism of dynamical systems. A main challenge for mechanism-driven modeling studies is acquiring a sufficient amount of quantitative information for constraining model parameters. Advances in quantitative approaches, especially high-throughput single-cell techniques, have accelerated the emergence of a new direction for reconstructing the governing dynamical equations of a cellular system from quantitative single-cell data, beyond the dominant statistical approaches. Here I review a selected number of recent studies using live- and fixed-cell data and provide my perspective on future development.
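The core idea of reconstructing governing equations from data can be illustrated at its simplest: given snapshots of a trajectory, least squares recovers a linear update rule exactly. The 2-variable matrix and initial state below are illustrative assumptions; real cell-state data call for the nonlinear, stochastic machinery the review discusses.

```python
import numpy as np

# Toy "reconstruct the governing equations from data" sketch: generate a
# trajectory of a known linear discrete-time system x_{t+1} = A x_t and
# recover A by least squares -- the linear core behind DMD-style methods.
A = np.array([[0.90, 0.10],
              [-0.20, 0.80]])
x = np.empty((31, 2))
x[0] = [1.0, 0.5]
for t in range(30):
    x[t + 1] = A @ x[t]

X, Y = x[:-1], x[1:]                              # snapshot pairs (x_t, x_{t+1})
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T    # solve Y ~= X A^T
```

With noiseless data lying in the model class, `A_hat` matches `A` to machine precision; noise and nonlinearity are exactly where the harder methods reviewed here come in.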
0805.2288 | Marco Cosentino Lagomarsino | A.L. Sellerio, B. Bassetti, H. Isambert, M. Cosentino Lagomarsino | A comparative evolutionary study of transcription networks | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a comparative analysis of large-scale topological and evolutionary
properties of transcription networks in three species, the two distant bacteria
E. coli and B. subtilis, and the yeast S. cerevisiae. The study focuses on the
global aspects of feedback and hierarchy in transcriptional regulatory
pathways. While confirming that gene duplication has a significant impact on
the shaping of all the analyzed transcription networks, our results point to
distinct trends between the bacteria, where time constraints in the
transcription of downstream genes might be important in shaping the
hierarchical structure of the network, and yeast, which seems able to sustain a
higher wiring complexity that includes more feedback, a more intricate hierarchy,
and the combinatorial use of heterodimers made of duplicate transcription
factors.
| [
{
"created": "Thu, 15 May 2008 13:19:03 GMT",
"version": "v1"
}
] | 2008-05-16 | [
[
"Sellerio",
"A. L.",
""
],
[
"Bassetti",
"B.",
""
],
[
"Isambert",
"H.",
""
],
[
"Lagomarsino",
"M. Cosentino",
""
]
] | We present a comparative analysis of large-scale topological and evolutionary properties of transcription networks in three species, the two distant bacteria E. coli and B. subtilis, and the yeast S. cerevisiae. The study focuses on the global aspects of feedback and hierarchy in transcriptional regulatory pathways. While confirming that gene duplication has a significant impact on the shaping of all the analyzed transcription networks, our results point to distinct trends between the bacteria, where time constraints in the transcription of downstream genes might be important in shaping the hierarchical structure of the network, and yeast, which seems able to sustain a higher wiring complexity that includes more feedback, a more intricate hierarchy, and the combinatorial use of heterodimers made of duplicate transcription factors.
2308.01362 | James Lu | Mark Laurie and James Lu | Explainable Deep Learning for Tumor Dynamic Modeling and Overall
Survival Prediction using Neural-ODE | 33 pages, 4 Figures and 2 Tables. Includes Supplementary Materials | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | While tumor dynamic modeling has been widely applied to support the
development of oncology drugs, there remains a need to increase predictivity,
enable personalized therapy, and improve decision-making. We propose the use of
Tumor Dynamic Neural-ODE (TDNODE) as a pharmacology-informed neural network to
enable model discovery from longitudinal tumor size data. We show that TDNODE
overcomes a key limitation of existing models in its ability to make unbiased
predictions from truncated data. The encoder-decoder architecture is designed
to express an underlying dynamical law which possesses the fundamental property
of generalized homogeneity with respect to time. Thus, the modeling formalism
enables the encoder output to be interpreted as kinetic rate metrics, with
inverse time as the physical unit. We show that the generated metrics can be
used to predict patients' overall survival (OS) with high accuracy. The
proposed modeling formalism provides a principled way to integrate multimodal
dynamical datasets in oncology disease modeling.
| [
{
"created": "Wed, 2 Aug 2023 18:08:27 GMT",
"version": "v1"
},
{
"created": "Sun, 15 Oct 2023 05:00:53 GMT",
"version": "v2"
},
{
"created": "Fri, 20 Oct 2023 20:10:10 GMT",
"version": "v3"
}
] | 2023-10-24 | [
[
"Laurie",
"Mark",
""
],
[
"Lu",
"James",
""
]
] | While tumor dynamic modeling has been widely applied to support the development of oncology drugs, there remains a need to increase predictivity, enable personalized therapy, and improve decision-making. We propose the use of Tumor Dynamic Neural-ODE (TDNODE) as a pharmacology-informed neural network to enable model discovery from longitudinal tumor size data. We show that TDNODE overcomes a key limitation of existing models in its ability to make unbiased predictions from truncated data. The encoder-decoder architecture is designed to express an underlying dynamical law which possesses the fundamental property of generalized homogeneity with respect to time. Thus, the modeling formalism enables the encoder output to be interpreted as kinetic rate metrics, with inverse time as the physical unit. We show that the generated metrics can be used to predict patients' overall survival (OS) with high accuracy. The proposed modeling formalism provides a principled way to integrate multimodal dynamical datasets in oncology disease modeling. |
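The "generalized homogeneity with respect to time" property this record describes can be shown on a toy neural ODE: if dz/dt = k*f(z), the state reached depends on k and t only through the product k*t, which is what lets k be read as a kinetic rate with inverse-time units. The tiny random MLP and Euler scheme below are assumptions for illustration, not TDNODE itself.

```python
import numpy as np

# Toy demonstration of generalized homogeneity: integrating dz/dt = k * f(z)
# with fast kinetics over a short horizon gives the same state as slow
# kinetics over a proportionally longer horizon (same k*t). f is a small
# random MLP standing in for a learned vector field.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 2)), rng.normal(size=2)

def f(z):
    return np.tanh(z @ W1 + b1) @ W2 + b2

def euler_flow(z0, k, t, n_steps=200):
    """Explicit Euler integration of dz/dt = k * f(z) on [0, t]."""
    z, h = np.array(z0, dtype=float), t / n_steps
    for _ in range(n_steps):
        z = z + h * k * f(z)
    return z

z0 = [1.0, 0.5]
a = euler_flow(z0, k=2.0, t=1.5)   # fast kinetics, short horizon
b = euler_flow(z0, k=1.0, t=3.0)   # slow kinetics, long horizon: same k*t
```

Because the Euler recursion only ever sees the product of step size and rate, the two runs traverse identical states.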
1205.2318 | Soumen Roy | Soumen Roy | Systems biology beyond degree, hubs and scale-free networks: the case
for multiple metrics in complex networks | To appear in Systems and Synthetic Biology (Springer) | Systems and Synthetic Biology (2012) 6:31-34 | 10.1007/s11693-012-9094-y | null | q-bio.QM cond-mat.stat-mech cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling and topological analysis of networks in biological and other complex
systems must venture beyond the limited consideration of very few network
metrics like degree, betweenness or assortativity. A proper identification of
informative and redundant entities from many different metrics, using recently
demonstrated techniques, is essential. A holistic comparison of networks and
growth models is best achieved only with the use of such methods.
| [
{
"created": "Thu, 10 May 2012 17:26:02 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Jun 2012 07:32:22 GMT",
"version": "v2"
}
] | 2012-09-25 | [
[
"Roy",
"Soumen",
""
]
] | Modeling and topological analysis of networks in biological and other complex systems must venture beyond the limited consideration of very few network metrics like degree, betweenness or assortativity. A proper identification of informative and redundant entities from many different metrics, using recently demonstrated techniques, is essential. A holistic comparison of networks and growth models is best achieved only with the use of such methods.
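The call for multiple metrics can be made concrete even on a small adjacency matrix: different measures disagree about which graph is "interesting". A self-contained numpy sketch computing degree, local clustering, and degree assortativity (betweenness omitted for brevity); the two example graphs are illustrative.

```python
import numpy as np

# Several complementary network metrics from one adjacency matrix.
def network_metrics(A):
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    tri = np.diag(A @ A @ A) / 2.0                 # triangles through each node
    possible = deg * (deg - 1) / 2.0
    clustering = np.divide(tri, possible,
                           out=np.zeros_like(tri), where=possible > 0)
    ii, jj = np.where(np.triu(A) > 0)              # undirected edge list
    x = np.concatenate([deg[ii], deg[jj]])
    y = np.concatenate([deg[jj], deg[ii]])
    assort = np.corrcoef(x, y)[0, 1] if x.std() > 0 else np.nan
    return {"degree": deg, "clustering": clustering, "assortativity": assort}

star = np.zeros((5, 5))                  # hub-and-spoke: zero clustering,
star[0, 1:] = 1; star[1:, 0] = 1         # maximally disassortative
triangle = np.ones((3, 3)) - np.eye(3)   # fully clustered

star_m = network_metrics(star)
tri_m = network_metrics(triangle)
```

The star and the triangle have identical "hub counts" under some single-metric summaries yet opposite clustering and assortativity, which is the point of the abstract.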
1812.10050 | Moo K. Chung | Shih-Gu Huang, S. Balqis Samdin, Chee-Ming Ting, Hernando Ombao, Moo
K. Chung | Statistical Model for Dynamically-Changing Correlation Matrices with
Application to Brain Connectivity | Accepted for publication in Journal of Neuroscience Methods | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Recent studies have indicated that functional connectivity is
dynamic even during rest. A common approach to modeling the dynamic functional
connectivity in whole-brain resting-state fMRI is to compute the correlation
between anatomical regions via sliding time windows. However, the direct use of
the sample correlation matrices is not reliable due to the image acquisition
and processing noise in resting-state fMRI.
New method: To overcome these limitations, we propose a new statistical model
that smooths out the noise by exploiting the geometric structure of correlation
matrices. The dynamic correlation matrix is modeled as a linear combination of
symmetric positive-definite matrices combined with cosine series
representation. The resulting smoothed dynamic correlation matrices are
clustered into disjoint brain connectivity states using the k-means clustering
algorithm.
Results: The proposed model preserves the geometric structure of underlying
physiological dynamic correlation, eliminates unwanted noise in connectivity
and obtains more accurate state spaces. The difference in the estimated dynamic
connectivity states between males and females is identified.
Comparison with existing methods: We demonstrate that the proposed
statistical model has less rapid state changes caused by noise and improves the
accuracy in identifying and discriminating different states.
Conclusions: We propose a new regression model on dynamically changing
correlation matrices that provides better performance over existing windowed
correlation and is more reliable for the modeling of dynamic connectivity.
| [
{
"created": "Tue, 25 Dec 2018 06:23:33 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Nov 2019 16:28:17 GMT",
"version": "v2"
}
] | 2019-11-05 | [
[
"Huang",
"Shih-Gu",
""
],
[
"Samdin",
"S. Balqis",
""
],
[
"Ting",
"Chee-Ming",
""
],
[
"Ombao",
"Hernando",
""
],
[
"Chung",
"Moo K.",
""
]
] | Background: Recent studies have indicated that functional connectivity is dynamic even during rest. A common approach to modeling the dynamic functional connectivity in whole-brain resting-state fMRI is to compute the correlation between anatomical regions via sliding time windows. However, the direct use of the sample correlation matrices is not reliable due to the image acquisition and processing noise in resting-state fMRI. New method: To overcome these limitations, we propose a new statistical model that smooths out the noise by exploiting the geometric structure of correlation matrices. The dynamic correlation matrix is modeled as a linear combination of symmetric positive-definite matrices combined with cosine series representation. The resulting smoothed dynamic correlation matrices are clustered into disjoint brain connectivity states using the k-means clustering algorithm. Results: The proposed model preserves the geometric structure of underlying physiological dynamic correlation, eliminates unwanted noise in connectivity and obtains more accurate state spaces. The difference in the estimated dynamic connectivity states between males and females is identified. Comparison with existing methods: We demonstrate that the proposed statistical model has less rapid state changes caused by noise and improves the accuracy in identifying and discriminating different states. Conclusions: We propose a new regression model on dynamically changing correlation matrices that provides better performance over existing windowed correlation and is more reliable for the modeling of dynamic connectivity.
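The baseline pipeline this record improves upon (sliding-window correlations clustered into connectivity "states" with k-means) is easy to sketch on synthetic data. Window length, step, number of states, and the bare-bones k-means below are all illustrative assumptions, not the authors' smoothed model.

```python
import numpy as np

# Sliding-window dynamic-connectivity sketch: 3-channel synthetic data with
# two connectivity regimes, windowed correlations vectorized and clustered
# into two "states" with a minimal k-means.
rng = np.random.default_rng(3)
half = 200
common = rng.normal(size=half)
X = np.empty((2 * half, 3))
X[:half, 0] = common + 0.2 * rng.normal(size=half)   # regime 1: ch0 ~ ch1
X[:half, 1] = common + 0.2 * rng.normal(size=half)
X[:half, 2] = rng.normal(size=half)
X[half:] = rng.normal(size=(half, 3))                # regime 2: independent

win = 50
iu = np.triu_indices(3, k=1)                         # upper-triangle entries
feats = np.array([np.corrcoef(X[s:s + win].T)[iu]
                  for s in range(0, 2 * half - win + 1, win)])

def kmeans(F, centers, n_iter=20):
    centers = centers.copy()
    for _ in range(n_iter):
        d = ((F[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = F[labels == k].mean(axis=0)
    return labels

labels = kmeans(feats, centers=feats[[0, -1]])       # init from extreme windows
```

Clustering raw windowed correlations like this is exactly the noisy baseline; the paper's contribution is smoothing the correlation matrices before this step.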
2307.00635 | Jacob Luber | Jai Prakash Veerla, Jillur Rahman Saurav, Michael Robben, Jacob M
Luber | Analyzing Lack of Concordance Between the Proteome and Transcriptome in
Paired scRNA-Seq and Multiplexed Spatial Proteomics | null | null | null | null | q-bio.TO q-bio.GN | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this study, we analyze discordance between the transcriptome and proteome
using paired scRNA-Seq and multiplexed spatial proteomics data from HuBMAP. Our
findings highlight persistent transcripts in key immune markers, including
CD45-RO, Ki67, CD45, CD20, and HLA-DR. CD45-RO is consistently expressed in
memory T cells, while Ki67, associated with cell proliferation, also displays
sustained expression. Furthermore, HLA-DR, part of the MHC class II molecules,
demonstrates continuous expression, possibly crucial for APCs to trigger an
effective immune response. This investigation provides novel insights into the
complexity of gene expression regulation and protein function.
| [
{
"created": "Sun, 2 Jul 2023 18:43:03 GMT",
"version": "v1"
}
] | 2023-07-04 | [
[
"Veerla",
"Jai Prakash",
""
],
[
"Saurav",
"Jillur Rahman",
""
],
[
"Robben",
"Michael",
""
],
[
"Luber",
"Jacob M",
""
]
] | In this study, we analyze discordance between the transcriptome and proteome using paired scRNA-Seq and multiplexed spatial proteomics data from HuBMAP. Our findings highlight persistent transcripts in key immune markers, including CD45-RO, Ki67, CD45, CD20, and HLA-DR. CD45-RO is consistently expressed in memory T cells, while Ki67, associated with cell proliferation, also displays sustained expression. Furthermore, HLA-DR, part of the MHC class II molecules, demonstrates continuous expression, possibly crucial for APCs to trigger an effective immune response. This investigation provides novel insights into the complexity of gene expression regulation and protein function. |
2309.04057 | Mitchel Colebank | Mitchel J. Colebank, Naomi C. Chesler | Efficient Uncertainty Quantification in a Multiscale Model of Pulmonary
Arterial and Venous Hemodynamics | 10 Figures, 2 tables | null | null | null | q-bio.TO physics.med-ph | http://creativecommons.org/licenses/by-sa/4.0/ | Computational hemodynamics models are becoming increasingly useful in the
management and prognosis of complex, multiscale pathologies, including those
attributed to the development of pulmonary vascular disease. However, diseases
like pulmonary hypertension are heterogeneous, and affect both the proximal
arteries and veins as well as the microcirculation. Simulation tools and the
data used for model calibration are also inherently uncertain, requiring a full
analysis of the sensitivity and uncertainty attributed to model inputs and
outputs. Thus, this study quantifies model sensitivity and output uncertainty
in a multiscale, pulse-wave propagation model of pulmonary hemodynamics. Our
pulmonary circuit model consists of fifteen proximal arteries and twelve
proximal veins, connected by a two-sided, structured tree model of the distal
vasculature. We use polynomial chaos expansions to expedite the sensitivity and
uncertainty quantification analyses and provide results for both the proximal
and distal vasculature. Our analyses provide uncertainty in blood pressure,
flow, and wave propagation phenomena, as well as wall shear stress and cyclic
stretch, both of which are important stimuli for endothelial cell
mechanotransduction. We conclude that, while nearly all the parameters in our
system have some influence on model predictions, the parameters describing the
density of the microvascular beds have the largest effects on all simulated
quantities in both the proximal and distal circulation.
| [
{
"created": "Fri, 8 Sep 2023 01:09:17 GMT",
"version": "v1"
}
] | 2023-09-11 | [
[
"Colebank",
"Mitchel J.",
""
],
[
"Chesler",
"Naomi C.",
""
]
] | Computational hemodynamics models are becoming increasingly useful in the management and prognosis of complex, multiscale pathologies, including those attributed to the development of pulmonary vascular disease. However, diseases like pulmonary hypertension are heterogeneous, and affect both the proximal arteries and veins as well as the microcirculation. Simulation tools and the data used for model calibration are also inherently uncertain, requiring a full analysis of the sensitivity and uncertainty attributed to model inputs and outputs. Thus, this study quantifies model sensitivity and output uncertainty in a multiscale, pulse-wave propagation model of pulmonary hemodynamics. Our pulmonary circuit model consists of fifteen proximal arteries and twelve proximal veins, connected by a two-sided, structured tree model of the distal vasculature. We use polynomial chaos expansions to expedite the sensitivity and uncertainty quantification analyses and provide results for both the proximal and distal vasculature. Our analyses provide uncertainty in blood pressure, flow, and wave propagation phenomena, as well as wall shear stress and cyclic stretch, both of which are important stimuli for endothelial cell mechanotransduction. We conclude that, while nearly all the parameters in our system have some influence on model predictions, the parameters describing the density of the microvascular beds have the largest effects on all simulated quantities in both the proximal and distal circulation.
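The polynomial-chaos machinery used here can be illustrated on a toy function with known Sobol indices; the paper applies the same idea to a far larger hemodynamics model. The Legendre basis, sample count, and test function y = x1 + 0.5*x2^2 (analytic indices S1 = 15/16, S2 = 1/16) are assumptions for this sketch.

```python
import numpy as np

# Toy polynomial chaos expansion (NOT the paper's pulmonary model): fit a
# Legendre-basis surrogate by least squares for uniform inputs on [-1, 1],
# then read variance-based (Sobol) sensitivities off the coefficients.
rng = np.random.default_rng(0)
N = 2000
x = rng.uniform(-1, 1, size=(N, 2))
y = x[:, 0] + 0.5 * x[:, 1] ** 2

P1 = lambda t: t                       # Legendre polynomials P1, P2
P2 = lambda t: 0.5 * (3 * t**2 - 1)

Phi = np.column_stack([np.ones(N),
                       P1(x[:, 0]), P2(x[:, 0]),
                       P1(x[:, 1]), P2(x[:, 1])])
coef = np.linalg.lstsq(Phi, y, rcond=None)[0]

norms = np.array([1.0, 1/3, 1/5, 1/3, 1/5])   # E[P_n^2] = 1/(2n+1)
var_terms = coef**2 * norms                   # per-basis variance contributions
total_var = var_terms[1:].sum()
S1 = var_terms[1:3].sum() / total_var         # main-effect index of x1
S2 = var_terms[3:5].sum() / total_var         # main-effect index of x2
```

Because the surrogate coefficients carry the variance decomposition directly, sensitivity indices come essentially for free once the expansion is fit, which is why PCE "expedites" the analysis.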
1511.00673 | Karol Bacik | Karol A. Bacik, Michael T. Schaub, Mariano Beguerisse-D\'iaz, Yazan N.
Billeh, Mauricio Barahona | Flow-based network analysis of the Caenorhabditis elegans connectome | 28 pages including Supplementary Information, 8 figures and 12
figures in the SI | PLoS Comput Biol 12.8 (2016): e1005055 | 10.1371/journal.pcbi.1005055 | null | q-bio.NC physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We exploit flow propagation on the directed neuronal network of the nematode
Caenorhabditis elegans to reveal dynamically relevant features of its
connectome. We find flow-based groupings of neurons at different levels of
granularity, which we relate to functional and anatomical constituents of its
nervous system. A systematic in silico evaluation of the full set of single and
double neuron ablations is used to identify deletions that induce the most
severe disruptions of the multi-resolution flow structure. Such ablations are
linked to functionally relevant neurons, and suggest potential candidates for
further in vivo investigation. In addition, we use the directional patterns of
incoming and outgoing network flows at all scales to identify flow profiles for
the neurons in the connectome, without pre-imposing a priori categories. The
four flow roles identified are linked to signal propagation motivated by
biological input-response scenarios.
| [
{
"created": "Mon, 2 Nov 2015 20:50:27 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Dec 2015 20:40:23 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Aug 2016 10:07:59 GMT",
"version": "v3"
}
] | 2016-08-09 | [
[
"Bacik",
"Karol A.",
""
],
[
"Schaub",
"Michael T.",
""
],
[
"Beguerisse-Díaz",
"Mariano",
""
],
[
"Billeh",
"Yazan N.",
""
],
[
"Barahona",
"Mauricio",
""
]
] | We exploit flow propagation on the directed neuronal network of the nematode Caenorhabditis elegans to reveal dynamically relevant features of its connectome. We find flow-based groupings of neurons at different levels of granularity, which we relate to functional and anatomical constituents of its nervous system. A systematic in silico evaluation of the full set of single and double neuron ablations is used to identify deletions that induce the most severe disruptions of the multi-resolution flow structure. Such ablations are linked to functionally relevant neurons, and suggest potential candidates for further in vivo investigation. In addition, we use the directional patterns of incoming and outgoing network flows at all scales to identify flow profiles for the neurons in the connectome, without pre-imposing a priori categories. The four flow roles identified are linked to signal propagation motivated by biological input-response scenarios. |
1304.7393 | Mark Flegg | Mark B Flegg, Stefan Hellander, Radek Erban | Convergence of methods for coupling of microscopic and mesoscopic
reaction-diffusion simulations | null | null | null | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, three multiscale methods for coupling of mesoscopic
(compartment-based) and microscopic (molecular-based) stochastic
reaction-diffusion simulations are investigated. Two of the three methods that
will be discussed in detail have been previously reported in the literature;
the two-regime method (TRM) and the compartment-placement method (CPM). The
third method that is introduced and analysed in this paper is the ghost cell
method (GCM). Presented is a comparison of sources of error. The convergent
properties of this error are studied as the time step $\Delta t$ (for updating
the molecular-based part of the model) approaches zero. It is found that the
error behaviour depends on another fundamental computational parameter $h$, the
compartment size in the mesoscopic part of the model. Two important limiting
cases, which appear in applications, are considered: (i) \Delta t approaches 0
and h is fixed; and (ii) \Delta t approaches 0 and h approaches 0 such that
\Delta t/h^2 is fixed. The error for previously developed approaches (the TRM
and CPM) converges to zero only in the limiting case (ii), but not in case (i).
It is shown that the error of the GCM converges in the limiting case (i). Thus
the GCM is superior to previous coupling techniques if the mesoscopic
description is much coarser than the microscopic part of the model.
| [
{
"created": "Sat, 27 Apr 2013 18:06:39 GMT",
"version": "v1"
}
] | 2013-04-30 | [
[
"Flegg",
"Mark B",
""
],
[
"Hellander",
"Stefan",
""
],
[
"Erban",
"Radek",
""
]
] | In this paper, three multiscale methods for coupling of mesoscopic (compartment-based) and microscopic (molecular-based) stochastic reaction-diffusion simulations are investigated. Two of the three methods that will be discussed in detail have been previously reported in the literature; the two-regime method (TRM) and the compartment-placement method (CPM). The third method that is introduced and analysed in this paper is the ghost cell method (GCM). Presented is a comparison of sources of error. The convergent properties of this error are studied as the time step $\Delta t$ (for updating the molecular-based part of the model) approaches zero. It is found that the error behaviour depends on another fundamental computational parameter $h$, the compartment size in the mesoscopic part of the model. Two important limiting cases, which appear in applications, are considered: (i) \Delta t approaches 0 and h is fixed; and (ii) \Delta t approaches 0 and h approaches 0 such that \Delta t/h^2 is fixed. The error for previously developed approaches (the TRM and CPM) converges to zero only in the limiting case (ii), but not in case (i). It is shown that the error of the GCM converges in the limiting case (i). Thus the GCM is superior to previous coupling techniques if the mesoscopic description is much coarser than the microscopic part of the model. |
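The mesoscopic (compartment-based) half of such couplings can be sketched on its own: molecules jump between compartments of width h at rate d = D/h^2 per direction, simulated exactly with the Gillespie algorithm. Parameters below are illustrative, and the TRM/CPM/GCM coupling layers themselves are deliberately omitted.

```python
import numpy as np

# Compartment-based (mesoscopic) diffusion via the Gillespie algorithm:
# each molecule in compartment i jumps left or right at rate d = D / h^2,
# with reflecting boundaries. This is only the mesoscopic building block
# that the paper's coupling methods attach a molecular-based regime to.
def simulate(n0, d, t_end, rng):
    n, t, K = n0.copy(), 0.0, len(n0)
    while True:
        left = d * n.astype(float); left[0] = 0.0      # no jumps off the ends
        right = d * n.astype(float); right[-1] = 0.0
        rates = np.concatenate([left, right])
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)              # time to next event
        if t > t_end:
            break
        idx = rng.choice(2 * K, p=rates / total)       # pick one jump
        if idx < K:
            n[idx] -= 1; n[idx - 1] += 1               # jump left
        else:
            j = idx - K
            n[j] -= 1; n[j + 1] += 1                   # jump right
    return n

rng = np.random.default_rng(5)
n0 = np.zeros(11, dtype=int)
n0[5] = 200                          # all molecules start in the middle
n_final = simulate(n0, d=1.0, t_end=1.0, rng=rng)
```

Mass is conserved by construction; the coupling error analyzed in the paper arises only once part of this lattice is replaced by individually tracked molecules.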
2306.06296 | Michael Shvartsman | Michael Shvartsman, Benjamin Letham, Stephen Keeley | Response Time Improves Choice Prediction and Function Estimation for
Gaussian Process Models of Perception and Preferences | 18 pages incl. references and supplement; 11 figures | null | null | null | q-bio.NC cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Models for human choice prediction in preference learning and psychophysics
often consider only binary response data, requiring many samples to accurately
learn preferences or perceptual detection thresholds. The response time (RT) to
make each choice captures additional information about the decision process;
however, existing models incorporating RTs for choice prediction do so in fully
parametric settings or over discrete stimulus sets. This is in part because the
de-facto standard model for choice RTs, the diffusion decision model (DDM),
does not admit tractable, differentiable inference. The DDM thus cannot be
easily integrated with flexible models for continuous, multivariate function
approximation, particularly Gaussian process (GP) models. We propose a novel
differentiable approximation to the DDM likelihood using a family of known,
skewed three-parameter distributions. We then use this new likelihood to
incorporate RTs into GP models for binary choices. Our RT-choice GPs enable
both better latent value estimation and held-out choice prediction relative to
baselines, which we demonstrate on three real-world multivariate datasets
covering both human psychophysics and preference learning applications.
| [
{
"created": "Fri, 9 Jun 2023 23:22:49 GMT",
"version": "v1"
}
] | 2023-06-13 | [
[
"Shvartsman",
"Michael",
""
],
[
"Letham",
"Benjamin",
""
],
[
"Keeley",
"Stephen",
""
]
] | Models for human choice prediction in preference learning and psychophysics often consider only binary response data, requiring many samples to accurately learn preferences or perceptual detection thresholds. The response time (RT) to make each choice captures additional information about the decision process; however, existing models incorporating RTs for choice prediction do so in fully parametric settings or over discrete stimulus sets. This is in part because the de-facto standard model for choice RTs, the diffusion decision model (DDM), does not admit tractable, differentiable inference. The DDM thus cannot be easily integrated with flexible models for continuous, multivariate function approximation, particularly Gaussian process (GP) models. We propose a novel differentiable approximation to the DDM likelihood using a family of known, skewed three-parameter distributions. We then use this new likelihood to incorporate RTs into GP models for binary choices. Our RT-choice GPs enable both better latent value estimation and held-out choice prediction relative to baselines, which we demonstrate on three real-world multivariate datasets covering both human psychophysics and preference learning applications.
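The key move in this record is replacing the intractable DDM RT density with a known, skewed three-parameter family. As an illustration only (the paper's specific family may differ), a shifted log-normal has the right qualitative ingredients — a shift playing the role of non-decision time, right skew, and a cheap, differentiable log-density:

```python
import numpy as np

# Illustrative skewed three-parameter RT density (shifted log-normal);
# chosen for its shape and tractable log-density, not claimed to be the
# paper's exact family. Parameters: shift (non-decision time), mu, sigma.
def shifted_lognormal_logpdf(t, shift, mu, sigma):
    t = np.asarray(t, dtype=float)
    z = t - shift
    out = np.full_like(t, -np.inf)          # zero density at or before shift
    ok = z > 0
    out[ok] = (-np.log(z[ok] * sigma * np.sqrt(2 * np.pi))
               - (np.log(z[ok]) - mu) ** 2 / (2 * sigma**2))
    return out

# Sanity check: the density integrates to ~1 over t > shift.
shift, mu, sigma = 0.3, -0.5, 0.5
grid = np.linspace(shift + 1e-6, shift + 20.0, 200_001)
dt = grid[1] - grid[0]
mass = np.exp(shifted_lognormal_logpdf(grid, shift, mu, sigma)).sum() * dt
```

A closed-form, everywhere-differentiable log-density like this is what lets an RT likelihood be dropped into gradient-based GP inference, which the raw DDM first-passage density does not allow.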
1212.4745 | M. Angeles Serrano | M. Ángeles Serrano, Manuel Jurado, Ramon Reigada | Negative-feedback self-regulation contributes to robust and
high-fidelity transmembrane signal transduction | null | null | null | null | q-bio.CB q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a minimal motif model for transmembrane cell signaling. The model
assumes that signaling events take place in spatially distributed nanoclusters
regulated by birth/death dynamics. The combination of these spatio-temporal
aspects can be modulated to provide a robust and high-fidelity response
without invoking sophisticated modeling of the signaling process as a
sequence of cascade reactions with fine-tuned parameters. Our results show
that having the distributed signaling events take place in nanoclusters with
a finite lifetime regulated by local production is sufficient to obtain a
robust and high-fidelity response.
| [
{
"created": "Wed, 19 Dec 2012 17:06:07 GMT",
"version": "v1"
},
{
"created": "Sat, 20 Apr 2013 08:07:17 GMT",
"version": "v2"
}
] | 2013-04-23 | [
[
"Serrano",
"M. Ángeles",
""
],
[
"Jurado",
"Manuel",
""
],
[
"Reigada",
"Ramon",
""
]
] | We present a minimal motif model for transmembrane cell signaling. The model assumes that signaling events take place in spatially distributed nanoclusters regulated by birth/death dynamics. The combination of these spatio-temporal aspects can be modulated to provide a robust and high-fidelity response without invoking sophisticated modeling of the signaling process as a sequence of cascade reactions with fine-tuned parameters. Our results show that having the distributed signaling events take place in nanoclusters with a finite lifetime regulated by local production is sufficient to obtain a robust and high-fidelity response.
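The nanocluster birth/death dynamics described in the record above can be sketched with a standard Gillespie simulation. The rates below and the Poisson stationary distribution with mean k_birth/k_death are textbook properties of a linear birth/death process, not parameters taken from the paper:

```python
import random

def birth_death_time_average(k_birth, k_death, t_max, n0=0, seed=0):
    """Gillespie simulation of a linear birth/death process: nanoclusters
    form at rate k_birth and each existing cluster dissolves at rate
    k_death.  Returns the time-averaged cluster count.  All rate values
    used below are illustrative assumptions."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    weighted = 0.0
    while t < t_max:
        total_rate = k_birth + k_death * n
        dt = rng.expovariate(total_rate)   # waiting time to the next event
        if t + dt >= t_max:                # clip the final interval at t_max
            weighted += n * (t_max - t)
            break
        weighted += n * dt
        t += dt
        if rng.random() < k_birth / total_rate:
            n += 1                         # birth: a new nanocluster forms
        else:
            n -= 1                         # death: one cluster dissolves
    return weighted / t_max

# the stationary distribution is Poisson with mean k_birth/k_death = 10
mean_n = birth_death_time_average(k_birth=5.0, k_death=0.5, t_max=2000.0)
```

The finite lifetime of each cluster (the death channel) is exactly what keeps the population fluctuating around a production-set mean, which is the regulation mechanism the abstract attributes the robust response to.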
1304.8009 | Sergiy Perepelytsya | S.M. Perepelytsya, S.N. Volkov | Dynamics of ion-phosphate lattice of DNA in left-handed double helix
form | 12 pages, 3 figures | Ukr. J. Phys. V. 58, 554-562 (2013) | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The conformational vibrations of Z-DNA with counterions are studied within
the framework of a phenomenological model. The structure of the left-handed
double helix with counterions neutralizing the negatively charged phosphate
groups of DNA is considered as an ion-phosphate lattice. The frequencies and
Raman intensities for the modes of Z-DNA with Na+ and Mg2+ ions are calculated,
and the low-frequency Raman spectra are constructed. In the spectral range
near 150 cm-1, a new mode of ion-phosphate vibrations is found that
characterizes the vibrations of Mg2+ counterions. The results of our
calculations show that the intensities of the Z-DNA modes are sensitive to the
concentration of magnesium counterions. The obtained results describe the
experimental Raman spectra of Z-DNA well.
| [
{
"created": "Tue, 30 Apr 2013 14:27:08 GMT",
"version": "v1"
}
] | 2013-05-01 | [
[
"Perepelytsya",
"S. M.",
""
],
[
"Volkov",
"S. N.",
""
]
] | The conformational vibrations of Z-DNA with counterions are studied within the framework of a phenomenological model. The structure of the left-handed double helix with counterions neutralizing the negatively charged phosphate groups of DNA is considered as an ion-phosphate lattice. The frequencies and Raman intensities for the modes of Z-DNA with Na+ and Mg2+ ions are calculated, and the low-frequency Raman spectra are constructed. In the spectral range near 150 cm-1, a new mode of ion-phosphate vibrations is found that characterizes the vibrations of Mg2+ counterions. The results of our calculations show that the intensities of the Z-DNA modes are sensitive to the concentration of magnesium counterions. The obtained results describe the experimental Raman spectra of Z-DNA well.
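The ion-phosphate lattice picture in the record above treats a counterion bound to a phosphate group as a harmonic pair, whose mode frequency follows from the reduced mass and an effective spring constant. A minimal sketch of how such a mode lands in the low-frequency Raman window (the spring constant and the ~95 amu phosphate-group mass are illustrative guesses, not the paper's fitted model parameters):

```python
import math

C_LIGHT_CM = 2.99792458e10      # speed of light, cm/s
AMU_KG = 1.66053906660e-27      # atomic mass unit, kg

def pair_wavenumber(k_spring, m_ion_amu, m_phosphate_amu=95.0):
    """Harmonic wavenumber (cm^-1) of an ion--phosphate pair modelled as
    two point masses joined by a spring of stiffness k_spring (N/m).
    Both k_spring and the phosphate-group mass are illustrative
    assumptions for this sketch."""
    mu = (m_ion_amu * m_phosphate_amu) / (m_ion_amu + m_phosphate_amu) * AMU_KG
    omega = math.sqrt(k_spring / mu)           # angular frequency, rad/s
    return omega / (2.0 * math.pi * C_LIGHT_CM)

wn_na = pair_wavenumber(k_spring=15.0, m_ion_amu=22.99)   # Na+
wn_mg = pair_wavenumber(k_spring=15.0, m_ion_amu=24.31)   # Mg2+
```

Both values land in the ~10^2 cm^-1 range probed by low-frequency Raman spectroscopy. With the same spring constant the heavier Mg2+ comes out slightly below Na+, so presumably it is the stronger Mg2+-phosphate coupling (a larger effective spring constant) that places the characteristic Mg2+ mode near 150 cm-1 in the actual model.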