id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
q-bio/0412046 | Wei-Mou Zheng | Wei-Mou Zheng, Xin Liu | A protein structural alphabet and its substitution matrix CLESUM | 10 pages | null | null | null | q-bio.BM | null | By using a mixture model for the density distribution of the three pseudobond
angles formed by $C_\alpha$ atoms of four consecutive residues, the local
structural states are discretized as 17 conformational letters of a protein
structural alphabet. This coarse-graining procedure converts a 3D structure to
a 1D code sequence. A substitution matrix between these letters is constructed
based on the structural alignments of the FSSP database.
| [
{
"created": "Tue, 28 Dec 2004 00:52:18 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Zheng",
"Wei-Mou",
""
],
[
"Liu",
"Xin",
""
]
] | By using a mixture model for the density distribution of the three pseudobond angles formed by $C_\alpha$ atoms of four consecutive residues, the local structural states are discretized as 17 conformational letters of a protein structural alphabet. This coarse-graining procedure converts a 3D structure to a 1D code sequence. A substitution matrix between these letters is constructed based on the structural alignments of the FSSP database. |
2212.06772 | Cameron Mura | Cameron Mura, Emma Candelier, Lei Xie | A Tribute to Phil Bourne -- Scientist and Human | 5 pages, 1 figure | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | This Special Issue of Biomolecules, commissioned in honor of Dr. Philip E.
Bourne, focuses on a new field of biomolecular data science. In this brief
retrospective, we consider the arc of Bourne's 40-year scientific and
professional career, particularly as it relates to the origins of this new
field.
| [
{
"created": "Thu, 8 Dec 2022 21:29:09 GMT",
"version": "v1"
}
] | 2022-12-14 | [
[
"Mura",
"Cameron",
""
],
[
"Candelier",
"Emma",
""
],
[
"Xie",
"Lei",
""
]
] | This Special Issue of Biomolecules, commissioned in honor of Dr. Philip E. Bourne, focuses on a new field of biomolecular data science. In this brief retrospective, we consider the arc of Bourne's 40-year scientific and professional career, particularly as it relates to the origins of this new field. |
1409.3261 | Edwin Wang Dr. | Naif Zaman, Lei Li, Maria Jaramillo, Zhanpeng Sun, Chabane Tibiche,
Myriam Banville, Catherine Collins, Mark Trifiro, Miltiadis Paliouras, Andre
Nantel, Maureen OConnor-McCourt, Edwin Wang | Signaling Network Assessment of Mutations and Copy Number Variations
Predicts Breast Cancer Subtype-specific Drug Targets | 4 figs, more related papers at http://www.cancer-systemsbiology.org,
appears in Cell Reports, 2013 | null | 10.1016/j.celrep.2013.08.028 | null | q-bio.MN | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Individual cancer cells carry a bewildering number of distinct genomic
alterations i.e., copy number variations and mutations, making it a challenge
to uncover genomic-driven mechanisms governing tumorigenesis. Here we performed
exome-sequencing on several breast cancer cell lines which represent two
subtypes, luminal and basal. We integrated this sequencing data, and functional
RNAi screening data (i.e., for identifying genes which are essential for cell
proliferation and survival), onto a human signaling network. Two
subtype-specific networks were identified, which potentially represent
core-signaling mechanisms underlying tumorigenesis. Within both networks, we
found that genes were differentially affected in different cell lines; i.e., in
some cell lines a gene was identified through RNAi screening whereas in others
it was genomically altered. Interestingly, we found that highly connected
network genes could be used to correctly classify breast tumors into subtypes
based on genomic alterations. Further, the networks effectively predicted
subtype-specific drug targets, which were experimentally validated.
| [
{
"created": "Wed, 10 Sep 2014 21:37:22 GMT",
"version": "v1"
}
] | 2014-09-12 | [
[
"Zaman",
"Naif",
""
],
[
"Li",
"Lei",
""
],
[
"Jaramillo",
"Maria",
""
],
[
"Sun",
"Zhanpeng",
""
],
[
"Tibiche",
"Chabane",
""
],
[
"Banville",
"Myriam",
""
],
[
"Collins",
"Catherine",
""
],
[
"Tr... | Individual cancer cells carry a bewildering number of distinct genomic alterations i.e., copy number variations and mutations, making it a challenge to uncover genomic-driven mechanisms governing tumorigenesis. Here we performed exome-sequencing on several breast cancer cell lines which represent two subtypes, luminal and basal. We integrated this sequencing data, and functional RNAi screening data (i.e., for identifying genes which are essential for cell proliferation and survival), onto a human signaling network. Two subtype-specific networks were identified, which potentially represent core-signaling mechanisms underlying tumorigenesis. Within both networks, we found that genes were differentially affected in different cell lines; i.e., in some cell lines a gene was identified through RNAi screening whereas in others it was genomically altered. Interestingly, we found that highly connected network genes could be used to correctly classify breast tumors into subtypes based on genomic alterations. Further, the networks effectively predicted subtype-specific drug targets, which were experimentally validated. |
1502.02804 | Francis Eustache | Baptiste Fauvel, Groussard Mathilde, Mutlu Justine, Arenaza-Urquijo
Eider M., Eustache Francis, Desgranges B\'eatrice, Platel Herv\'e | Musical practice and cognitive aging: two cross-sectional studies point
to phonemic fluency as a potential candidate for a use-dependent adaptation | null | frontiers in AGING NEUROSCIENCE, The University of Texas Health
Science Center at Houston, USA, 2014, 6 (227), pp.1-12 | 10.3389/fnagi.2014.00227. eCollection 2014 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Because of permanent use-dependent brain plasticity, all lifelong
individuals' experiences are believed to influence the cognitive aging quality.
In older individuals, both former and current musical practices have been
associated with better verbal skills, visual memory, processing speed, and
planning function. This work sought for an interaction between musical practice
and cognitive aging by comparing musician and non-musician individuals for two
lifetime periods (middle and late adulthood). Long-term memory, auditory-verbal
short-term memory, processing speed, non-verbal reasoning, and verbal fluencies
were assessed. In Study 1, measures of processing speed and auditory-verbal
short-term memory were significantly better performed by musicians compared
with controls, but both groups displayed the same age-related differences. For
verbal fluencies, musicians scored higher than controls and displayed different
age effects. In Study 2, we found that lifetime period at training onset
(childhood vs. adulthood) was associated with phonemic, but not semantic,
fluency performances (musicians who had started to practice in adulthood did
not perform better on phonemic fluency than non-musicians). Current frequency
of training did not account for musicians' scores on either of these two
measures. These patterns of results are discussed by setting the hypothesis of
a transformative effect of musical practice against a non-causal explanation.
| [
{
"created": "Tue, 10 Feb 2015 07:43:08 GMT",
"version": "v1"
}
] | 2015-02-11 | [
[
"Fauvel",
"Baptiste",
""
],
[
"Mathilde",
"Groussard",
""
],
[
"Justine",
"Mutlu",
""
],
[
"M.",
"Arenaza-Urquijo Eider",
""
],
[
"Francis",
"Eustache",
""
],
[
"Béatrice",
"Desgranges",
""
],
[
"Hervé",
"Plate... | Because of permanent use-dependent brain plasticity, all lifelong individuals' experiences are believed to influence the cognitive aging quality. In older individuals, both former and current musical practices have been associated with better verbal skills, visual memory, processing speed, and planning function. This work sought for an interaction between musical practice and cognitive aging by comparing musician and non-musician individuals for two lifetime periods (middle and late adulthood). Long-term memory, auditory-verbal short-term memory, processing speed, non-verbal reasoning, and verbal fluencies were assessed. In Study 1, measures of processing speed and auditory-verbal short-term memory were significantly better performed by musicians compared with controls, but both groups displayed the same age-related differences. For verbal fluencies, musicians scored higher than controls and displayed different age effects. In Study 2, we found that lifetime period at training onset (childhood vs. adulthood) was associated with phonemic, but not semantic, fluency performances (musicians who had started to practice in adulthood did not perform better on phonemic fluency than non-musicians). Current frequency of training did not account for musicians' scores on either of these two measures. These patterns of results are discussed by setting the hypothesis of a transformative effect of musical practice against a non-causal explanation. |
1504.05884 | Joan Saldana | Jordi Ripoll, Albert Aviny\'o, Marta Pellicer, Joan Salda\~na | Impact of density-dependent migration flows on epidemic outbreaks in
heterogeneous metapopulations | 6 pages, 3 figures | Phys. Rev. E 92, 022809 (2015) | 10.1103/PhysRevE.92.022809 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the role of migration patterns on the spread of epidemics in
complex networks. We enhance the SIS-diffusion model on metapopulations to a
nonlinear diffusion. Specifically, individuals move randomly over the network
but at a rate depending on the population of the departure patch. In the
absence of epidemics, the migration-driven equilibrium is described by
quantifying the total number of individuals living in heavily/lightly populated
areas. Our analytical approach reveals that strengthening the migration from
populous areas contains the infection at the early stage of the epidemic.
Moreover, depending on the exponent of the nonlinear diffusion rate, epidemic
outbreaks do not always occur in the most populated areas as one might expect.
| [
{
"created": "Wed, 22 Apr 2015 17:01:17 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Jul 2015 14:43:49 GMT",
"version": "v2"
}
] | 2016-10-19 | [
[
"Ripoll",
"Jordi",
""
],
[
"Avinyó",
"Albert",
""
],
[
"Pellicer",
"Marta",
""
],
[
"Saldaña",
"Joan",
""
]
] | We investigate the role of migration patterns on the spread of epidemics in complex networks. We enhance the SIS-diffusion model on metapopulations to a nonlinear diffusion. Specifically, individuals move randomly over the network but at a rate depending on the population of the departure patch. In the absence of epidemics, the migration-driven equilibrium is described by quantifying the total number of individuals living in heavily/lightly populated areas. Our analytical approach reveals that strengthening the migration from populous areas contains the infection at the early stage of the epidemic. Moreover, depending on the exponent of the nonlinear diffusion rate, epidemic outbreaks do not always occur in the most populated areas as one might expect. |
1708.00556 | Andy Zhou | Andy Zhou, Samantha R. Santacruz, Benjamin C. Johnson, George
Alexandrov, Ali Moin, Fred L. Burghardt, Jan M. Rabaey, Jose M. Carmena,
Rikky Muller | WAND: A 128-channel, closed-loop, wireless artifact-free neuromodulation
device | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Closed-loop neuromodulation systems aim to treat a variety of neurological
conditions by dynamically delivering and adjusting therapeutic electrical
stimulation in response to a patient's neural state, recorded in real-time.
Existing systems are limited by low channel counts, lack of algorithmic
flexibility, and distortion of recorded signals from large, persistent
stimulation artifacts. Here, we describe a device that enables new research
applications requiring high-throughput data streaming, low-latency biosignal
processing, and truly simultaneous sensing and stimulation. The Wireless
Artifact-free Neuromodulation Device (WAND) is a miniaturized, wireless neural
interface capable of recording and stimulating on 128 channels with on-board
processing to fully cancel stimulation artifacts, detect neural biomarkers, and
automatically adjust stimulation parameters in a closed-loop fashion. It
combines custom application specific integrated circuits (ASICs), an on-board
FPGA, and a low-power bidirectional radio. We validate wireless, long-term
recordings of local field potentials (LFP) and real-time cancellation of
stimulation artifacts in a behaving nonhuman primate (NHP). We use WAND to
demonstrate a closed-loop stimulation paradigm to disrupt movement preparatory
activity during a delayed-reach task in a NHP in vivo. This wireless device,
leveraging custom ASICs for both neural recording and electrical stimulation
modalities, makes possible a neural interface platform technology to
significantly advance both neuroscientific discovery and preclinical
investigations of stimulation-based therapeutic interventions.
| [
{
"created": "Wed, 2 Aug 2017 00:12:05 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Dec 2017 07:49:35 GMT",
"version": "v2"
},
{
"created": "Tue, 29 May 2018 16:54:34 GMT",
"version": "v3"
}
] | 2018-05-30 | [
[
"Zhou",
"Andy",
""
],
[
"Santacruz",
"Samantha R.",
""
],
[
"Johnson",
"Benjamin C.",
""
],
[
"Alexandrov",
"George",
""
],
[
"Moin",
"Ali",
""
],
[
"Burghardt",
"Fred L.",
""
],
[
"Rabaey",
"Jan M.",
""
... | Closed-loop neuromodulation systems aim to treat a variety of neurological conditions by dynamically delivering and adjusting therapeutic electrical stimulation in response to a patient's neural state, recorded in real-time. Existing systems are limited by low channel counts, lack of algorithmic flexibility, and distortion of recorded signals from large, persistent stimulation artifacts. Here, we describe a device that enables new research applications requiring high-throughput data streaming, low-latency biosignal processing, and truly simultaneous sensing and stimulation. The Wireless Artifact-free Neuromodulation Device (WAND) is a miniaturized, wireless neural interface capable of recording and stimulating on 128 channels with on-board processing to fully cancel stimulation artifacts, detect neural biomarkers, and automatically adjust stimulation parameters in a closed-loop fashion. It combines custom application specific integrated circuits (ASICs), an on-board FPGA, and a low-power bidirectional radio. We validate wireless, long-term recordings of local field potentials (LFP) and real-time cancellation of stimulation artifacts in a behaving nonhuman primate (NHP). We use WAND to demonstrate a closed-loop stimulation paradigm to disrupt movement preparatory activity during a delayed-reach task in a NHP in vivo. This wireless device, leveraging custom ASICs for both neural recording and electrical stimulation modalities, makes possible a neural interface platform technology to significantly advance both neuroscientific discovery and preclinical investigations of stimulation-based therapeutic interventions. |
1411.4582 | Bruna Jacobson | B. D. Jacobson, L. J. Herskowitz, S. J. Koch, S. R. Atlas | Analysis of kinesin mechanochemistry via simulated annealing | 27 pages, 9 figures, 2 tables | null | null | null | q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The molecular motor protein kinesin plays a key role in fundamental cellular
processes such as intracellular transport, mitotic spindle formation, and
cytokinesis, with important implications for neurodegenerative and cancer
disease pathways. Recently, kinesin has been studied as a paradigm for the
tailored design of nano-bio sensor and other nanoscale systems. As it processes
along a microtubule within the cell, kinesin undergoes a cycle of chemical
state and physical conformation transitions that enable it to take ~100 regular
8.2-nm steps before ending its processive walk. Despite an extensive body of
experimental and theoretical work, a unified microscopic model of kinesin
mechanochemistry does not yet exist. Here we present a methodology that
optimizes a kinetic model for kinesin constructed with a minimum of a priori
assumptions about the underlying processive mechanism. Kinetic models are
preferred for numerical calculations since information about the kinesin
stepping mechanism at all levels, from the atomic to the microscopic scale, is
fully contained within the particular states of the cycle: how states
transition, and the rate constants associated with each transition. We combine
Markov chain calculations and simulated annealing optimization to determine the
rate constants that best fit experimental data on kinesin speed and
processivity.
| [
{
"created": "Mon, 17 Nov 2014 18:27:04 GMT",
"version": "v1"
}
] | 2014-11-18 | [
[
"Jacobson",
"B. D.",
""
],
[
"Herskowitz",
"L. J.",
""
],
[
"Koch",
"S. J.",
""
],
[
"Atlas",
"S. R.",
""
]
] | The molecular motor protein kinesin plays a key role in fundamental cellular processes such as intracellular transport, mitotic spindle formation, and cytokinesis, with important implications for neurodegenerative and cancer disease pathways. Recently, kinesin has been studied as a paradigm for the tailored design of nano-bio sensor and other nanoscale systems. As it processes along a microtubule within the cell, kinesin undergoes a cycle of chemical state and physical conformation transitions that enable it to take ~100 regular 8.2-nm steps before ending its processive walk. Despite an extensive body of experimental and theoretical work, a unified microscopic model of kinesin mechanochemistry does not yet exist. Here we present a methodology that optimizes a kinetic model for kinesin constructed with a minimum of a priori assumptions about the underlying processive mechanism. Kinetic models are preferred for numerical calculations since information about the kinesin stepping mechanism at all levels, from the atomic to the microscopic scale, is fully contained within the particular states of the cycle: how states transition, and the rate constants associated with each transition. We combine Markov chain calculations and simulated annealing optimization to determine the rate constants that best fit experimental data on kinesin speed and processivity. |
1912.12476 | Yang Shen | Yue Cao and Yang Shen | Energy-based Graph Convolutional Networks for Scoring Protein Docking
Models | null | Proteins: Structure, Function, and Bioinformatics 88, no. 8
(2020): 1091-1099 | 10.1002/prot.25888 | null | q-bio.BM cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Structural information about protein-protein interactions, often missing at
the interactome scale, is important for mechanistic understanding of cells and
rational discovery of therapeutics. Protein docking provides a computational
alternative to predict such information. However, ranking near-native docked
models high among a large number of candidates, often known as the scoring
problem, remains a critical challenge. Moreover, estimating model quality, also
known as the quality assessment problem, is rarely addressed in protein
docking.
In this study the two challenging problems in protein docking are regarded as
relative and absolute scoring, respectively, and addressed in one
physics-inspired deep learning framework. We represent proteins' and encounter
complexes' 3D structures as intra- and inter-molecular residue contact graphs
with atom-resolution node and edge features. And we propose a novel graph
convolutional kernel that pools interacting nodes' features through edge
features so that generalized interaction energies can be learned directly from
graph data. The resulting energy-based graph convolutional networks (EGCN) with
multi-head attention are trained to predict intra- and inter-molecular
energies, binding affinities, and quality measures (interface RMSD) for
encounter complexes. Compared to a state-of-the-art scoring function for model
ranking, EGCN has significantly improved ranking for a CAPRI test set involving
homology docking; and is comparable for Score_set, a CAPRI benchmark set
generated by diverse community-wide docking protocols not known to training
data. For Score_set quality assessment, EGCN shows about 27% improvement to our
previous efforts. Directly learning from 3D structure data in graph
representation, EGCN represents the first successful development of graph
convolutional networks for protein docking.
| [
{
"created": "Sat, 28 Dec 2019 15:57:17 GMT",
"version": "v1"
}
] | 2020-12-17 | [
[
"Cao",
"Yue",
""
],
[
"Shen",
"Yang",
""
]
] | Structural information about protein-protein interactions, often missing at the interactome scale, is important for mechanistic understanding of cells and rational discovery of therapeutics. Protein docking provides a computational alternative to predict such information. However, ranking near-native docked models high among a large number of candidates, often known as the scoring problem, remains a critical challenge. Moreover, estimating model quality, also known as the quality assessment problem, is rarely addressed in protein docking. In this study the two challenging problems in protein docking are regarded as relative and absolute scoring, respectively, and addressed in one physics-inspired deep learning framework. We represent proteins' and encounter complexes' 3D structures as intra- and inter-molecular residue contact graphs with atom-resolution node and edge features. And we propose a novel graph convolutional kernel that pools interacting nodes' features through edge features so that generalized interaction energies can be learned directly from graph data. The resulting energy-based graph convolutional networks (EGCN) with multi-head attention are trained to predict intra- and inter-molecular energies, binding affinities, and quality measures (interface RMSD) for encounter complexes. Compared to a state-of-the-art scoring function for model ranking, EGCN has significantly improved ranking for a CAPRI test set involving homology docking; and is comparable for Score_set, a CAPRI benchmark set generated by diverse community-wide docking protocols not known to training data. For Score_set quality assessment, EGCN shows about 27% improvement to our previous efforts. Directly learning from 3D structure data in graph representation, EGCN represents the first successful development of graph convolutional networks for protein docking. |
1005.0747 | Verena Wolf | Thomas A. Henzinger, Maria Mateescu, Linar Mikeev, Verena Wolf | Hybrid Numerical Solution of the Chemical Master Equation | 10 pages | null | null | null | q-bio.QM cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a numerical approximation technique for the analysis of
continuous-time Markov chains that describe networks of biochemical reactions
and play an important role in the stochastic modeling of biological systems.
Our approach is based on the construction of a stochastic hybrid model in which
certain discrete random variables of the original Markov chain are approximated
by continuous deterministic variables. We compute the solution of the
stochastic hybrid model using a numerical algorithm that discretizes time and
in each step performs a mutual update of the transient probability distribution
of the discrete stochastic variables and the values of the continuous
deterministic variables. We implemented the algorithm and we demonstrate its
usefulness and efficiency on several case studies from systems biology.
| [
{
"created": "Wed, 5 May 2010 13:19:13 GMT",
"version": "v1"
}
] | 2010-05-06 | [
[
"Henzinger",
"Thomas A.",
""
],
[
"Mateescu",
"Maria",
""
],
[
"Mikeev",
"Linar",
""
],
[
"Wolf",
"Verena",
""
]
] | We present a numerical approximation technique for the analysis of continuous-time Markov chains that describe networks of biochemical reactions and play an important role in the stochastic modeling of biological systems. Our approach is based on the construction of a stochastic hybrid model in which certain discrete random variables of the original Markov chain are approximated by continuous deterministic variables. We compute the solution of the stochastic hybrid model using a numerical algorithm that discretizes time and in each step performs a mutual update of the transient probability distribution of the discrete stochastic variables and the values of the continuous deterministic variables. We implemented the algorithm and we demonstrate its usefulness and efficiency on several case studies from systems biology. |
1610.01849 | Henning Dickten | Klaus Lehnertz and Henning Dickten | Assessing directionality and strength of coupling through symbolic
analysis: an application to epilepsy patients | null | Phil. Trans. R. Soc. A 373, 2034 (2015) | 10.1098/rsta.2014.0094 | null | q-bio.NC physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inferring strength and direction of interactions from electroencephalographic
(EEG) recordings is of crucial importance to improve our understanding of
dynamical interdependencies underlying various physiologic and pathophysiologic
conditions in the human epileptic brain. We here use approaches from symbolic
analysis to investigate---in a time-resolved manner---weighted and directed,
short- to long-ranged interactions between various brain regions constituting
the epileptic network. Our observations point to complex spatial-temporal
interdependencies underlying the epileptic process and their role in the
generation of epileptic seizures, despite the massive reduction of the complex
information content of multi-day, multi-channel EEG recordings through
symbolisation. We discuss limitations and potential future improvements of this
approach.
| [
{
"created": "Thu, 6 Oct 2016 12:57:49 GMT",
"version": "v1"
}
] | 2016-10-07 | [
[
"Lehnertz",
"Klaus",
""
],
[
"Dickten",
"Henning",
""
]
] | Inferring strength and direction of interactions from electroencephalographic (EEG) recordings is of crucial importance to improve our understanding of dynamical interdependencies underlying various physiologic and pathophysiologic conditions in the human epileptic brain. We here use approaches from symbolic analysis to investigate---in a time-resolved manner---weighted and directed, short- to long-ranged interactions between various brain regions constituting the epileptic network. Our observations point to complex spatial-temporal interdependencies underlying the epileptic process and their role in the generation of epileptic seizures, despite the massive reduction of the complex information content of multi-day, multi-channel EEG recordings through symbolisation. We discuss limitations and potential future improvements of this approach. |
1004.0873 | Viet Chi Tran | Vincent Bansaye (CMAP), Viet Chi Tran (CMAP, LPP) | Branching Feller diffusion for cell division with parasite infection | null | ALEA : Latin American Journal of Probability and Mathematical
Statistics 8 (2011) 95-127 | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe the evolution of the quantity of parasites in a population of
cells which divide in continuous-time. The quantity of parasites in a cell
follows a Feller diffusion, which is split randomly between the two daughter
cells when a division occurs. The cell division rate may depend on the quantity
of parasites inside the cell and we are interested in the cases of constant or
monotone division rate. We first determine the asymptotic behavior of the
quantity of parasites in a cell line, which follows a Feller diffusion with
multiplicative jumps. We then consider the evolution of the infection of the
cell population and give criteria to determine whether the proportion of
infected cells goes to zero (recovery) or if a positive proportion of cells
becomes largely infected (proliferation of parasites inside the cells).
| [
{
"created": "Fri, 2 Apr 2010 10:48:19 GMT",
"version": "v1"
},
{
"created": "Sun, 30 May 2010 23:01:03 GMT",
"version": "v2"
},
{
"created": "Thu, 3 Jun 2010 08:10:27 GMT",
"version": "v3"
}
] | 2011-01-18 | [
[
"Bansaye",
"Vincent",
"",
"CMAP"
],
[
"Tran",
"Viet Chi",
"",
"CMAP, LPP"
]
] | We describe the evolution of the quantity of parasites in a population of cells which divide in continuous-time. The quantity of parasites in a cell follows a Feller diffusion, which is split randomly between the two daughter cells when a division occurs. The cell division rate may depend on the quantity of parasites inside the cell and we are interested in the cases of constant or monotone division rate. We first determine the asymptotic behavior of the quantity of parasites in a cell line, which follows a Feller diffusion with multiplicative jumps. We then consider the evolution of the infection of the cell population and give criteria to determine whether the proportion of infected cells goes to zero (recovery) or if a positive proportion of cells becomes largely infected (proliferation of parasites inside the cells). |
2402.18348 | Izaskun Mallona | Mark D. Robinson, Peiying Cai, Martin Emons, Reto Gerber, Pierre-Luc
Germain, Samuel Gunz, Siyuan Luo, Giulia Moro, Emanuel Sonder, Anthony
Sonrel, Jiayi Wang, David Wissel, Izaskun Mallona | Ten simple rules for collaborating with wet lab researchers for
computational researchers | 8 pages, 1 figure | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-sa/4.0/ | Computational biologists are frequently engaged in collaborative data
analysis with wet lab researchers. These interdisciplinary projects, as
necessary as they are to the scientific endeavour, can be surprisingly
challenging due to cultural differences in operations and values. In this Ten
Simple Rules guide we aim to help dry lab researchers identify sources of
friction; and provide actionable tools to facilitate respectful, open,
transparent and rewarding collaborations.
| [
{
"created": "Tue, 27 Feb 2024 15:02:14 GMT",
"version": "v1"
}
] | 2024-02-29 | [
[
"Robinson",
"Mark D.",
""
],
[
"Cai",
"Peiying",
""
],
[
"Emons",
"Martin",
""
],
[
"Gerber",
"Reto",
""
],
[
"Germain",
"Pierre-Luc",
""
],
[
"Gunz",
"Samuel",
""
],
[
"Luo",
"Siyuan",
""
],
[
"Mor... | Computational biologists are frequently engaged in collaborative data analysis with wet lab researchers. These interdisciplinary projects, as necessary as they are to the scientific endeavour, can be surprisingly challenging due to cultural differences in operations and values. In this Ten Simple Rules guide we aim to help dry lab researchers identify sources of friction; and provide actionable tools to facilitate respectful, open, transparent and rewarding collaborations. |
1610.06945 | Yi-Xiang Wang | Yi Xiang J Wang, Lihong Zhang, Lin Zhao, Jian He, Xian-Jun Zeng, Heng
Liu, Yun-jun Yang, Shang-Wei Ding, Zhong-Fei Xu, Yong-Min He, Lin Yang, Lan
Sun, Ke-jie Mu, Bai-Song Wang, Xiao-Hong Xu, Zhong-You Ji, Jian-hua Liu,
Jin-Zhou Fang, Rui Hou, Feng Fan, Guang Ming Peng, Sheng-Hong Ju | Decreased aneurysmal subarachnoid hemorrhage incidence rate in elderly
population than in middle aged population: a retrospective analysis of 8,144
cases in Mainland China | Total 16 pages, 3 figures | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Purpose: Rupture of an intracranial aneurysm is the most common cause of
subarachnoid haemorrhage (SAH), which is a life-threatening acute
cerebrovascular event that typically affects working-age people. This study
aims to compare the aneurysmal SAH incidence rate in the elderly population
with that in the middle-aged population in China. Materials and methods:
Aneurysmal SAH cases were collected retrospectively from the archives of 21
hospitals in Mainland China. All cases were collected consecutively, working
backward from September 2016 for a period of up to 8 years. SAH was initially
diagnosed by brain computed tomography; CT angiography (CTA) or digital
subtraction angiography (DSA) followed, and SAH was confirmed to be due to a
cerebral aneurysm. When multiple bleeds occurred in a case, the age at the
first SAH was used in this study. The total incidence from all hospitals at
each age was summed for females and males, then adjusted by the total
population number at each age for females and males. The total population data
were from the 2010 population census of the People's Republic of China.
Results: In total there were 8,144 cases, with 4,861 females and 3,283 males.
Our analysis shows that for both females and males the relative aneurysmal SAH
rate started to decrease after around 65 years of age. For males, the relative
aneurysmal SAH rate may have started to decrease after around 55 years of age.
Conclusion: In contrast to previous reports, our data demonstrated a decreased
aneurysmal subarachnoid hemorrhage incidence rate in the elderly population
compared with the middle-aged population. Our data therefore support the
hypothesis that aneurysms do not grow progressively once they form but
probably either rupture or stabilize, and that very elderly patients are at a
reduced risk of rupture compared with patients who are younger with the
same-sized aneurysms.
| [
{
"created": "Wed, 19 Oct 2016 16:21:51 GMT",
"version": "v1"
}
] | 2016-10-25 | [
[
"Wang",
"Yi Xiang J",
""
],
[
"Zhang",
"Lihong",
""
],
[
"Zhao",
"Lin",
""
],
[
"He",
"Jian",
""
],
[
"Zeng",
"Xian-Jun",
""
],
[
"Liu",
"Heng",
""
],
[
"Yang",
"Yun-jun",
""
],
[
"Ding",
"Shang... | Purpose: Rupture of an intracranial aneurysm is the most common cause of subarachnoid haemorrhage (SAH), which is a life-threatening acute cerebrovascular event that typically affects working-age people. This study aims to compare the aneurysmal SAH incidence rate in the elderly population with that in the middle-aged population in China. Materials and methods: Aneurysmal SAH cases were collected retrospectively from the archives of 21 hospitals in Mainland China. All cases were collected consecutively, working backward from September 2016 for a period of up to 8 years. SAH was initially diagnosed by brain computed tomography; CT angiography (CTA) or digital subtraction angiography (DSA) followed, and SAH was confirmed to be due to a cerebral aneurysm. When multiple bleeds occurred in a case, the age at the first SAH was used in this study. The total incidence from all hospitals at each age was summed for females and males, then adjusted by the total population number at each age for females and males. The total population data were from the 2010 population census of the People's Republic of China. Results: In total there were 8,144 cases, with 4,861 females and 3,283 males. Our analysis shows that for both females and males the relative aneurysmal SAH rate started to decrease after around 65 years of age. For males, the relative aneurysmal SAH rate may have started to decrease after around 55 years of age. Conclusion: In contrast to previous reports, our data demonstrated a decreased aneurysmal subarachnoid hemorrhage incidence rate in the elderly population compared with the middle-aged population. Our data therefore support the hypothesis that aneurysms do not grow progressively once they form but probably either rupture or stabilize, and that very elderly patients are at a reduced risk of rupture compared with patients who are younger with the same-sized aneurysms.
2210.09485 | Jiahui Chen | Jiahui Chen and Rui Wang and Yuta Hozumi and Gengzhuo Liu and Yuchi
Qiu and Xiaoqi Wei and Guo-Wei Wei | Emerging dominant SARS-CoV-2 variants | null | null | null | null | q-bio.PE q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Accurate and reliable forecasting of emerging dominant severe acute
respiratory syndrome coronavirus 2 (SARS-CoV-2) variants enables policymakers
and vaccine makers to get prepared for future waves of infections. The last
three waves of SARS-CoV-2 infections caused by dominant variants Omicron
(BA.1), BA.2, and BA.4/BA.5 were accurately foretold by our artificial
intelligence (AI) models built with biophysics, genotyping of viral genomes,
experimental data, algebraic topology, and deep learning. Based on newly
available experimental data, we analyzed the impacts of all possible viral
spike (S) protein receptor-binding domain (RBD) mutations on the SARS-CoV-2
infectivity. Our analysis sheds light on viral evolutionary mechanisms, i.e.,
natural selection through infectivity strengthening and antibody resistance. We
forecast that BA.2.10.4, BA.2.75, BQ.1.1, and particularly, BA.2.75+R346T, have
high potential to become new dominant variants to drive the next surge.
| [
{
"created": "Tue, 18 Oct 2022 00:09:42 GMT",
"version": "v1"
}
] | 2022-10-19 | [
[
"Chen",
"Jiahui",
""
],
[
"Wang",
"Rui",
""
],
[
"Hozumi",
"Yuta",
""
],
[
"Liu",
"Gengzhuo",
""
],
[
"Qiu",
"Yuchi",
""
],
[
"Wei",
"Xiaoqi",
""
],
[
"Wei",
"Guo-Wei",
""
]
] | Accurate and reliable forecasting of emerging dominant severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) variants enables policymakers and vaccine makers to get prepared for future waves of infections. The last three waves of SARS-CoV-2 infections caused by dominant variants Omicron (BA.1), BA.2, and BA.4/BA.5 were accurately foretold by our artificial intelligence (AI) models built with biophysics, genotyping of viral genomes, experimental data, algebraic topology, and deep learning. Based on newly available experimental data, we analyzed the impacts of all possible viral spike (S) protein receptor-binding domain (RBD) mutations on the SARS-CoV-2 infectivity. Our analysis sheds light on viral evolutionary mechanisms, i.e., natural selection through infectivity strengthening and antibody resistance. We forecast that BA.2.10.4, BA.2.75, BQ.1.1, and particularly, BA.2.75+R346T, have high potential to become new dominant variants to drive the next surge. |
2103.11809 | Guillaume Achaz | Guillaume Achaz, Julien Dutheil | Correlated evolution: models and methods | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this article, we review the different models and methods to understand and
estimate correlated evolution within the same genome, individual or species. We
describe correlated evolution among traits, among genetic components and
finally between traits and genetics.
| [
{
"created": "Mon, 22 Mar 2021 13:11:12 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Achaz",
"Guillaume",
""
],
[
"Dutheil",
"Julien",
""
]
] | In this article, we review the different models and methods to understand and estimate correlated evolution within the same genome, individual or species. We describe correlated evolution among traits, among genetic components and finally between traits and genetics. |
1906.12004 | Asma Azizi | Xisohui Guo, Jun Chen, Asma Azizi, Jennifer Fewell, Yun Kang | Dynamics of Social Interactions and Agent Spreading in Social Insects
Colonies: Effects of Environmental Events and Spatial Heterogeneity | null | null | 10.1016/j.jtbi.2020.110191 | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The relationship between division of labor and individuals' spatial behavior
in social insect colonies provides a useful context to study how social
interactions influence the spreading of an agent (which could be information
or a virus) across distributed agent systems. In social insect colonies,
spatial heterogeneity associated with variations of individual task roles
affects social contacts, and thus the way in which an agent moves through
social contact networks. We used an Agent-Based Model (ABM) to mimic three
realistic scenarios of agent spreading in social insect colonies. Our model
suggests that individuals within a specific task group interact more, with the
consequence that an agent could potentially spread rapidly within that group,
while an agent spreads more slowly
between task groups. Our simulations show a strong linear relationship between
the degree of spatial heterogeneity and social contact rates, and that the
spreading dynamics of agents follow a modified nonlinear logistic growth model
with varied transmission rates for different scenarios. Our work provides
important insights on the dual-functionality of physical contacts. This
dual-functionality is often driven via variations of individual spatial
behavior, and can have both inhibiting and facilitating effects on agent
transmission rates depending on environment. The results from our proposed
model not only provide important insights on mechanisms that generate spatial
heterogeneity, but also deepen our understanding of how social insect colonies
balance the benefit and cost of physical contacts on the agents' transmission
under varied environmental conditions.
| [
{
"created": "Fri, 28 Jun 2019 00:36:18 GMT",
"version": "v1"
}
] | 2020-03-31 | [
[
"Guo",
"Xisohui",
""
],
[
"Chen",
"Jun",
""
],
[
"Azizi",
"Asma",
""
],
[
"Fewell",
"Jennifer",
""
],
[
"Kang",
"Yun",
""
]
] | The relationship between division of labor and individuals' spatial behavior in social insect colonies provides a useful context to study how social interactions influence the spreading of an agent (which could be information or a virus) across distributed agent systems. In social insect colonies, spatial heterogeneity associated with variations of individual task roles affects social contacts, and thus the way in which an agent moves through social contact networks. We used an Agent-Based Model (ABM) to mimic three realistic scenarios of agent spreading in social insect colonies. Our model suggests that individuals within a specific task group interact more, with the consequence that an agent could potentially spread rapidly within that group, while an agent spreads more slowly between task groups. Our simulations show a strong linear relationship between the degree of spatial heterogeneity and social contact rates, and that the spreading dynamics of agents follow a modified nonlinear logistic growth model with varied transmission rates for different scenarios. Our work provides important insights on the dual-functionality of physical contacts. This dual-functionality is often driven via variations of individual spatial behavior, and can have both inhibiting and facilitating effects on agent transmission rates depending on environment. The results from our proposed model not only provide important insights on mechanisms that generate spatial heterogeneity, but also deepen our understanding of how social insect colonies balance the benefit and cost of physical contacts on the agents' transmission under varied environmental conditions.
q-bio/0411004 | Graziano Vernizzi | G. Vernizzi, H. Orland, A. Zee | Enumeration of RNA structures by Matrix Models | RevTeX, 4 pages, 2 figures | null | 10.1103/PhysRevLett.94.168103 | SPhT-T04/135 | q-bio.BM cond-mat.soft | null | We enumerate the number of RNA contact structures according to their genus,
i.e. the topological character of their pseudoknots. By using a recently
proposed matrix model formulation for the RNA folding problem, we obtain exact
results for the simple case of an RNA molecule with an infinitely flexible
backbone, in which any arbitrary pair of bases is allowed. We analyze the
distribution of the genus of pseudoknots as a function of the total number of
nucleotides along the phosphate-sugar backbone.
| [
{
"created": "Sun, 31 Oct 2004 17:58:33 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Vernizzi",
"G.",
""
],
[
"Orland",
"H.",
""
],
[
"Zee",
"A.",
""
]
] | We enumerate the number of RNA contact structures according to their genus, i.e. the topological character of their pseudoknots. By using a recently proposed matrix model formulation for the RNA folding problem, we obtain exact results for the simple case of an RNA molecule with an infinitely flexible backbone, in which any arbitrary pair of bases is allowed. We analyze the distribution of the genus of pseudoknots as a function of the total number of nucleotides along the phosphate-sugar backbone. |
2310.15211 | Patrick Lawrence | Shunian Xiang, Patrick J. Lawrence, Bo Peng, ChienWei Chiang, Dokyoon
Kim, Li Shen, and Xia Ning | Modeling Path Importance for Effective Alzheimer's Disease Drug
Repurposing | 16 pages, 3 figures, 2 tables, 1 supplementary figure, 5
supplementary tables, Preprint of an article accepted for publication in
Pacific Symposium on Biocomputing \copyright 2023 World Scientific Publishing
Co., Singapore, http://psb.stanford.edu/ | null | null | null | q-bio.QM cs.AI cs.LG q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | Recently, drug repurposing has emerged as an effective and resource-efficient
paradigm for AD drug discovery. Among various methods for drug repurposing,
network-based methods have shown promising results as they are capable of
leveraging complex networks that integrate multiple interaction types, such as
protein-protein interactions, to more effectively identify candidate drugs.
However, existing approaches typically assume paths of the same length in the
network have equal importance in identifying the therapeutic effect of drugs.
Other domains have found that same length paths do not necessarily have the
same importance. Thus, relying on this assumption may be deleterious to drug
repurposing attempts. In this work, we propose MPI (Modeling Path Importance),
a novel network-based method for AD drug repurposing. MPI is unique in that it
prioritizes important paths via learned node embeddings, which can effectively
capture a network's rich structural information. Thus, leveraging learned
embeddings allows MPI to effectively differentiate the importance among paths.
We evaluate MPI against a commonly used baseline method that identifies anti-AD
drug candidates primarily based on the shortest paths between drugs and AD in
the network. We observe that among the top-50 ranked drugs, MPI prioritizes
20.0% more drugs with anti-AD evidence compared to the baseline. Finally, Cox
proportional-hazard models produced from insurance claims data aid us in
identifying the use of etodolac, nicotine, and BBB-crossing ACE-INHs as having
a reduced risk of AD, suggesting such drugs may be viable candidates for
repurposing and should be explored further in future studies.
| [
{
"created": "Mon, 23 Oct 2023 17:24:11 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Oct 2023 16:29:44 GMT",
"version": "v2"
}
] | 2023-10-30 | [
[
"Xiang",
"Shunian",
""
],
[
"Lawrence",
"Patrick J.",
""
],
[
"Peng",
"Bo",
""
],
[
"Chiang",
"ChienWei",
""
],
[
"Kim",
"Dokyoon",
""
],
[
"Shen",
"Li",
""
],
[
"Ning",
"Xia",
""
]
] | Recently, drug repurposing has emerged as an effective and resource-efficient paradigm for AD drug discovery. Among various methods for drug repurposing, network-based methods have shown promising results as they are capable of leveraging complex networks that integrate multiple interaction types, such as protein-protein interactions, to more effectively identify candidate drugs. However, existing approaches typically assume paths of the same length in the network have equal importance in identifying the therapeutic effect of drugs. Other domains have found that same length paths do not necessarily have the same importance. Thus, relying on this assumption may be deleterious to drug repurposing attempts. In this work, we propose MPI (Modeling Path Importance), a novel network-based method for AD drug repurposing. MPI is unique in that it prioritizes important paths via learned node embeddings, which can effectively capture a network's rich structural information. Thus, leveraging learned embeddings allows MPI to effectively differentiate the importance among paths. We evaluate MPI against a commonly used baseline method that identifies anti-AD drug candidates primarily based on the shortest paths between drugs and AD in the network. We observe that among the top-50 ranked drugs, MPI prioritizes 20.0% more drugs with anti-AD evidence compared to the baseline. Finally, Cox proportional-hazard models produced from insurance claims data aid us in identifying the use of etodolac, nicotine, and BBB-crossing ACE-INHs as having a reduced risk of AD, suggesting such drugs may be viable candidates for repurposing and should be explored further in future studies. |
1804.08206 | Evan N. Feinberg | Amir Barati Farimani, Evan N. Feinberg, Vijay S. Pande | Binding Pathway of Opiates to $\mu$ Opioid Receptors Revealed by
Unsupervised Machine Learning | 25 pages, 8 figures | null | 10.1016/j.bpj.2017.11.390 | null | q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many important analgesics relieve pain by binding to the $\mu$-Opioid
Receptor ($\mu$OR), which makes the $\mu$OR among the most clinically relevant
proteins of the G Protein Coupled Receptor (GPCR) family. Despite previous
studies on the activation pathways of the GPCRs, the mechanism of opiate
binding and the selectivity of $\mu$OR are largely unknown. We performed
extensive molecular dynamics (MD) simulation and analysis to find the selective
allosteric binding sites of the $\mu$OR and the path opiates take to bind to
the orthosteric site. In this study, we predicted that the allosteric site is
responsible for the attraction and selection of opiates. Using Markov state
models and machine learning, we traced the pathway of opiates in binding to the
orthosteric site, the main binding pocket. Our results have important
implications in designing novel analgesics.
| [
{
"created": "Mon, 23 Apr 2018 00:58:58 GMT",
"version": "v1"
}
] | 2018-05-02 | [
[
"Farimani",
"Amir Barati",
""
],
[
"Feinberg",
"Evan N.",
""
],
[
"Pande",
"Vijay S.",
""
]
] | Many important analgesics relieve pain by binding to the $\mu$-Opioid Receptor ($\mu$OR), which makes the $\mu$OR among the most clinically relevant proteins of the G Protein Coupled Receptor (GPCR) family. Despite previous studies on the activation pathways of the GPCRs, the mechanism of opiate binding and the selectivity of $\mu$OR are largely unknown. We performed extensive molecular dynamics (MD) simulation and analysis to find the selective allosteric binding sites of the $\mu$OR and the path opiates take to bind to the orthosteric site. In this study, we predicted that the allosteric site is responsible for the attraction and selection of opiates. Using Markov state models and machine learning, we traced the pathway of opiates in binding to the orthosteric site, the main binding pocket. Our results have important implications in designing novel analgesics. |
1805.01530 | Emiliano Di Cicco | Emiliano Di Cicco, Hugh W Ferguson, Karia H Kaukinen, Angela D
Schulze, Shaorong Li, Amy Tabata, Oliver P Gunther, Gideon Mordecai, Curtis A
Suttle, and Kristina M Miller | The same strain of Piscine orthoreovirus (PRV-1) is involved with the
development of different, but related, diseases in Atlantic and Pacific
Salmon in British Columbia | This is a journal authorized pre-release of an un-copyedited version
of an article accepted for publication in FACETS on 23 April 2018 (DOI
10.1139/facets-2018-0008). The final published version is expected to be
published online in May/June 2018 | null | 10.1139/facets-2018-0008 | null | q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | Piscine orthoreovirus Strain PRV-1 is the causative agent of heart and
skeletal muscle inflammation (HSMI) in Atlantic salmon (Salmo salar). Given its
high prevalence in net pen salmon, debate has arisen on whether PRV poses a
risk to migratory salmon, especially in British Columbia (BC) where
commercially important wild Pacific salmon are in decline. Various strains of
PRV have been associated with diseases in Pacific salmon, including
erythrocytic inclusion body syndrome (EIBS), HSMI-like disease, and
jaundice/anemia in Japan, Norway, Chile and Canada. We examine the
developmental pathway of HSMI and jaundice/anemia associated with PRV-1 in
farmed Atlantic and Chinook (Oncorhynchus tshawytscha) salmon in BC,
respectively. In situ hybridization localized PRV-1 within developing lesions
in both diseases. The two diseases showed dissimilar pathological pathways,
with inflammatory lesions in heart and skeletal muscle in Atlantic salmon, and
degenerative-necrotic lesions in kidney and liver in Chinook salmon, plausibly
explained by differences in PRV load tolerance in red blood cells. Viral genome
sequencing revealed no consistent differences in PRV-1 variants intimately
involved in the development of both diseases, suggesting that migratory Chinook
salmon may be at more than a minimal risk of disease from exposure to the high
levels of PRV occurring on salmon farms.
| [
{
"created": "Thu, 3 May 2018 20:52:05 GMT",
"version": "v1"
}
] | 2018-05-07 | [
[
"Di Cicco",
"Emiliano",
""
],
[
"Ferguson",
"Hugh W",
""
],
[
"Kaukinen",
"Karia H",
""
],
[
"Schulze",
"Angela D",
""
],
[
"Li",
"Shaorong",
""
],
[
"Tabata",
"Amy",
""
],
[
"Gunther",
"Oliver P",
""
],
... | Piscine orthoreovirus Strain PRV-1 is the causative agent of heart and skeletal muscle inflammation (HSMI) in Atlantic salmon (Salmo salar). Given its high prevalence in net pen salmon, debate has arisen on whether PRV poses a risk to migratory salmon, especially in British Columbia (BC) where commercially important wild Pacific salmon are in decline. Various strains of PRV have been associated with diseases in Pacific salmon, including erythrocytic inclusion body syndrome (EIBS), HSMI-like disease, and jaundice/anemia in Japan, Norway, Chile and Canada. We examine the developmental pathway of HSMI and jaundice/anemia associated with PRV-1 in farmed Atlantic and Chinook (Oncorhynchus tshawytscha) salmon in BC, respectively. In situ hybridization localized PRV-1 within developing lesions in both diseases. The two diseases showed dissimilar pathological pathways, with inflammatory lesions in heart and skeletal muscle in Atlantic salmon, and degenerative-necrotic lesions in kidney and liver in Chinook salmon, plausibly explained by differences in PRV load tolerance in red blood cells. Viral genome sequencing revealed no consistent differences in PRV-1 variants intimately involved in the development of both diseases, suggesting that migratory Chinook salmon may be at more than a minimal risk of disease from exposure to the high levels of PRV occurring on salmon farms. |
1811.05456 | Robert Nowak M. | Wiktor Ku\'smirek and Wiktor Franus and Robert Nowak | Linking de novo assembly results with long DNA reads by dnaasm-link
application | 16 pages, 5 figures | null | 10.1155/2019/7847064 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Currently, third-generation sequencing techniques, which allow obtaining much
longer DNA reads than next-generation sequencing technologies, are becoming
increasingly popular. There are many possibilities for combining data
from next-generation and third-generation sequencing.
Herein, we present a new application called dnaasm-link for linking contigs,
a result of \textit{de novo} assembly of second-generation sequencing data,
with long DNA reads. Our tool includes an integrated module to fill gaps with a
suitable fragment of appropriate long DNA read, which improves the consistency
of the resulting DNA sequences. This feature is very important, in particular
for complex DNA regions, as presented in the paper. Finally, our implementation
outperforms other state-of-the-art tools in terms of speed and memory
requirements, which may enable the use of the presented application for
organisms with a large genome, which is not possible with existing
applications. The presented application has many advantages: (i) significant
memory optimization and reduced computation time; (ii) filling of gaps with an
appropriate fragment of a specified long DNA read; (iii) a reduced number of
spanned and unspanned gaps in existing genome drafts.
The application is freely available to all users under GNU Library or Lesser
General Public License version 3.0 (LGPLv3). The demo application, docker image
and source code are available at http://dnaasm.sourceforge.net.
| [
{
"created": "Tue, 13 Nov 2018 18:48:07 GMT",
"version": "v1"
}
] | 2019-05-23 | [
[
"Kuśmirek",
"Wiktor",
""
],
[
"Franus",
"Wiktor",
""
],
[
"Nowak",
"Robert",
""
]
] | Currently, third-generation sequencing techniques, which allow obtaining much longer DNA reads than next-generation sequencing technologies, are becoming increasingly popular. There are many possibilities for combining data from next-generation and third-generation sequencing. Herein, we present a new application called dnaasm-link for linking contigs, a result of \textit{de novo} assembly of second-generation sequencing data, with long DNA reads. Our tool includes an integrated module to fill gaps with a suitable fragment of appropriate long DNA read, which improves the consistency of the resulting DNA sequences. This feature is very important, in particular for complex DNA regions, as presented in the paper. Finally, our implementation outperforms other state-of-the-art tools in terms of speed and memory requirements, which may enable the use of the presented application for organisms with a large genome, which is not possible with existing applications. The presented application has many advantages: (i) significant memory optimization and reduced computation time; (ii) filling of gaps with an appropriate fragment of a specified long DNA read; (iii) a reduced number of spanned and unspanned gaps in existing genome drafts. The application is freely available to all users under GNU Library or Lesser General Public License version 3.0 (LGPLv3). The demo application, docker image and source code are available at http://dnaasm.sourceforge.net.
1601.03074 | Giuseppe Pontrelli | Giuseppe Pontrelli, Marco Lauricella, Jose' A. Ferreira, Goncalo Pena | Iontophoretic transdermal drug delivery: a multi-layered approach | In Mathematical Medicine and Biology, online, 2016 | null | 10.1093/imammb/dqw017 | null | q-bio.TO math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a multi-layer mathematical model to describe the transdermal
drug release from an iontophoretic system. The Nernst-Planck equation
describes the basic convection-diffusion process, with the electric potential
obtained from the Laplace equation. These equations are complemented with
suitable interface and boundary conditions in a multi-domain setting. The
stability of the mathematical
problem is discussed in different scenarios and a finite-difference method is
used to solve the coupled system. Numerical experiments are included to
illustrate the drug dynamics under different conditions.
| [
{
"created": "Wed, 6 Jan 2016 18:20:54 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Nov 2016 12:52:21 GMT",
"version": "v2"
}
] | 2017-05-31 | [
[
"Pontrelli",
"Giuseppe",
""
],
[
"Lauricella",
"Marco",
""
],
[
"Ferreira",
"Jose' A.",
""
],
[
"Pena",
"Goncalo",
""
]
] | We present a multi-layer mathematical model to describe the transdermal drug release from an iontophoretic system. The Nernst-Planck equation describes the basic convection-diffusion process, with the electric potential obtained from the Laplace equation. These equations are complemented with suitable interface and boundary conditions in a multi-domain setting. The stability of the mathematical problem is discussed in different scenarios and a finite-difference method is used to solve the coupled system. Numerical experiments are included to illustrate the drug dynamics under different conditions.
1602.06630 | Haiping Huang | Haiping Huang and Taro Toyoizumi | Clustering of neural codewords revealed by a first-order phase
transition | 14 pages, 5 figures in main text plus Supplemental Material (7 pages) | Phys. Rev. E 93, 062416 (2016) | 10.1103/PhysRevE.93.062416 | null | q-bio.NC cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A network of neurons in the central nervous system collectively represents
information by its spiking activity states. Typically observed states, i.e.,
codewords, occupy only a limited portion of the state space due to constraints
imposed by network interactions. Geometrical organization of codewords in the
state space, critical for neural information processing, is poorly understood
due to its high dimensionality. Here, we explore the organization of neural
codewords using retinal data by computing the entropy of codewords as a
function of Hamming distance from a particular reference codeword.
Specifically, we report that the retinal codewords in the state space are
divided into multiple distinct clusters separated by entropy-gaps, and that
this structure is shared with well-known associative memory networks in a
recallable phase. Our analysis also elucidates a special nature of the
all-silent state. The all-silent state is surrounded by the densest cluster of
codewords and located within a reachable distance from most codewords. This
codeword-space structure quantitatively predicts typical deviation of a
state-trajectory from its initial state. Altogether, our findings reveal a
non-trivial heterogeneous structure of the codeword-space that shapes
information representation in a biological network.
| [
{
"created": "Mon, 22 Feb 2016 02:26:00 GMT",
"version": "v1"
}
] | 2016-06-30 | [
[
"Huang",
"Haiping",
""
],
[
"Toyoizumi",
"Taro",
""
]
] | A network of neurons in the central nervous system collectively represents information by its spiking activity states. Typically observed states, i.e., codewords, occupy only a limited portion of the state space due to constraints imposed by network interactions. Geometrical organization of codewords in the state space, critical for neural information processing, is poorly understood due to its high dimensionality. Here, we explore the organization of neural codewords using retinal data by computing the entropy of codewords as a function of Hamming distance from a particular reference codeword. Specifically, we report that the retinal codewords in the state space are divided into multiple distinct clusters separated by entropy-gaps, and that this structure is shared with well-known associative memory networks in a recallable phase. Our analysis also elucidates a special nature of the all-silent state. The all-silent state is surrounded by the densest cluster of codewords and located within a reachable distance from most codewords. This codeword-space structure quantitatively predicts typical deviation of a state-trajectory from its initial state. Altogether, our findings reveal a non-trivial heterogeneous structure of the codeword-space that shapes information representation in a biological network. |
1708.01746 | Lev Utkin | Lev V. Utkin and Irina L. Utkina | A simple genome-wide association study algorithm | null | null | null | null | q-bio.QM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A computationally simple genome-wide association study (GWAS) algorithm for
estimating the main and epistatic effects of markers or single nucleotide
polymorphisms (SNPs) is proposed. It is based on the intuitive assumption that
changes of alleles corresponding to important SNPs in a pair of individuals
lead to a large difference in the phenotype values of these individuals. The
algorithm is based on considering pairs of individuals instead of SNPs or pairs
of SNPs. The main advantage of the algorithm is that it weakly depends on the
number of SNPs in a genotype matrix. It mainly depends on the number of
individuals, which is typically very small in comparison with the number of
SNPs. Numerical experiments with real data sets illustrate the proposed
algorithm.
| [
{
"created": "Sat, 5 Aug 2017 10:47:21 GMT",
"version": "v1"
}
] | 2017-08-08 | [
[
"Utkin",
"Lev V.",
""
],
[
"Utkina",
"Irina L.",
""
]
] | A computationally simple genome-wide association study (GWAS) algorithm for estimating the main and epistatic effects of markers or single nucleotide polymorphisms (SNPs) is proposed. It is based on the intuitive assumption that changes of alleles corresponding to important SNPs in a pair of individuals lead to a large difference in the phenotype values of these individuals. The algorithm is based on considering pairs of individuals instead of SNPs or pairs of SNPs. The main advantage of the algorithm is that it weakly depends on the number of SNPs in a genotype matrix. It mainly depends on the number of individuals, which is typically very small in comparison with the number of SNPs. Numerical experiments with real data sets illustrate the proposed algorithm.
2204.09119 | Yueming Li | Yueming Li (1), Ying Jiang (2), Lu Lan (3), Xiaowei Ge (3), Ran Cheng
(4), Yuewei Zhan (5), Guo Chen (3), Linli Shi (4), Runyu Wang (3), Nan Zheng
(6), Chen Yang (3,4), Ji-Xin Cheng (3,5) ((1) Department of Mechanical
Engineering, Boston University, USA, (2) Graduate Program for Neuroscience,
Boston University, USA, (3) Department of Electrical and Computer
Engineering, Boston University, USA, (4) Department of Chemistry, Boston
University, USA, (5) Department of Biomedical Engineering, Boston University,
USA, (6) Division of Materials Science and Engineering, Boston University,
USA) | Optically-generated focused ultrasound for noninvasive brain stimulation
with ultrahigh precision | 36 pages, 5 main figures, 13 supplementary figures | Light Sci Appl 11, 321 (2022) | 10.1038/s41377-022-01004-2 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High precision neuromodulation is a powerful tool to decipher neurocircuits
and treat neurological diseases. Current non-invasive neuromodulation methods
offer limited precision at the millimeter level. Here, we report
optically-generated focused ultrasound (OFUS) for non-invasive brain
stimulation with ultrahigh precision. OFUS is generated by a soft optoacoustic
pad (SOAP) fabricated through embedding candle soot nanoparticles in a curved
polydimethylsiloxane film. SOAP generates a transcranial ultrasound focus at 15
MHz with an ultrahigh lateral resolution of 83 um, which is two orders of
magnitude smaller than that of conventional transcranial-focused ultrasound
(tFUS). Here, we show effective OFUS neurostimulation in vitro with a single
ultrasound cycle. We demonstrate submillimeter transcranial stimulation of the
mouse motor cortex in vivo. An acoustic energy of 0.6 mJ/cm^2, four orders of
magnitude less than that of tFUS, is sufficient for successful OFUS
neurostimulation. OFUS offers new capabilities for neuroscience studies and
disease treatments by delivering a focus with ultrahigh precision
non-invasively.
| [
{
"created": "Tue, 19 Apr 2022 20:33:47 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Nov 2022 20:29:38 GMT",
"version": "v2"
}
] | 2022-11-07 | [
[
"Li",
"Yueming",
""
],
[
"Jiang",
"Ying",
""
],
[
"Lan",
"Lu",
""
],
[
"Ge",
"Xiaowei",
""
],
[
"Cheng",
"Ran",
""
],
[
"Zhan",
"Yuewei",
""
],
[
"Chen",
"Guo",
""
],
[
"Shi",
"Linli",
""
... | High precision neuromodulation is a powerful tool to decipher neurocircuits and treat neurological diseases. Current non-invasive neuromodulation methods offer limited precision at the millimeter level. Here, we report optically-generated focused ultrasound (OFUS) for non-invasive brain stimulation with ultrahigh precision. OFUS is generated by a soft optoacoustic pad (SOAP) fabricated through embedding candle soot nanoparticles in a curved polydimethylsiloxane film. SOAP generates a transcranial ultrasound focus at 15 MHz with an ultrahigh lateral resolution of 83 um, which is two orders of magnitude smaller than that of conventional transcranial-focused ultrasound (tFUS). Here, we show effective OFUS neurostimulation in vitro with a single ultrasound cycle. We demonstrate submillimeter transcranial stimulation of the mouse motor cortex in vivo. An acoustic energy of 0.6 mJ/cm^2, four orders of magnitude less than that of tFUS, is sufficient for successful OFUS neurostimulation. OFUS offers new capabilities for neuroscience studies and disease treatments by delivering a focus with ultrahigh precision non-invasively. |
2107.14751 | Maria-Veronica Ciocanel | Maria-Veronica Ciocanel, Aravind Chandrasekaran, Carli Mager, Qin Ni,
Garegin Papoian, Adriana Dawes | Actin reorganization throughout the cell cycle mediated by motor
proteins | 24 pages, 11 figures | null | 10.1371/journal.pcbi.1010026 | null | q-bio.QM q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cortical actin networks are highly dynamic and play critical roles in shaping
the mechanical properties of cells. The actin cytoskeleton undergoes
significant reorganization over the course of the cell cycle, when cortical
actin transitions between open patched meshworks, homogeneous distributions,
and aligned bundles. Several types of myosin motor proteins, characterized by
different kinetic parameters, have been involved in this reorganization of
actin filaments. Given the limitations in studying the interactions of actin
with myosin in vivo, we propose stochastic agent-based model simulations and
develop a set of data analysis measures to assess how myosin motor proteins
mediate various actin organizations. In particular, we identify individual
motor parameters, such as motor binding rate and step size, that generate actin
networks with different levels of contractility and different patterns of
myosin motor localization. In simulations where two motor populations with
distinct kinetic parameters interact with the same actin network, we find that
motors may act in a complementary way, by tuning the actin network
organization, or in an antagonistic way, where one motor emerges as dominant.
This modeling and data analysis framework also uncovers parameter regimes where
spatial segregation between motor populations is achieved. By allowing for
changes in kinetic rates during the actin-myosin dynamic simulations, our work
suggests that certain actin-myosin organizations may require additional
regulation beyond mediation by motor proteins in order to reconfigure the
cytoskeleton network on experimentally-observed timescales.
| [
{
"created": "Fri, 30 Jul 2021 16:48:44 GMT",
"version": "v1"
}
] | 2022-07-22 | [
[
"Ciocanel",
"Maria-Veronica",
""
],
[
"Chandrasekaran",
"Aravind",
""
],
[
"Mager",
"Carli",
""
],
[
"Ni",
"Qin",
""
],
[
"Papoian",
"Garegin",
""
],
[
"Dawes",
"Adriana",
""
]
] | Cortical actin networks are highly dynamic and play critical roles in shaping the mechanical properties of cells. The actin cytoskeleton undergoes significant reorganization over the course of the cell cycle, when cortical actin transitions between open patched meshworks, homogeneous distributions, and aligned bundles. Several types of myosin motor proteins, characterized by different kinetic parameters, have been involved in this reorganization of actin filaments. Given the limitations in studying the interactions of actin with myosin in vivo, we propose stochastic agent-based model simulations and develop a set of data analysis measures to assess how myosin motor proteins mediate various actin organizations. In particular, we identify individual motor parameters, such as motor binding rate and step size, that generate actin networks with different levels of contractility and different patterns of myosin motor localization. In simulations where two motor populations with distinct kinetic parameters interact with the same actin network, we find that motors may act in a complementary way, by tuning the actin network organization, or in an antagonistic way, where one motor emerges as dominant. This modeling and data analysis framework also uncovers parameter regimes where spatial segregation between motor populations is achieved. By allowing for changes in kinetic rates during the actin-myosin dynamic simulations, our work suggests that certain actin-myosin organizations may require additional regulation beyond mediation by motor proteins in order to reconfigure the cytoskeleton network on experimentally-observed timescales. |
2111.09122 | Markus Pfeil | Markus Pfeil, Thomas Slawig | Shortening the runtime using larger time steps for the simulation of
marine ecosystem models | null | null | null | null | q-bio.PE physics.ao-ph | http://creativecommons.org/licenses/by/4.0/ | The reduction of computational costs for marine ecosystem models is important
for the investigation and detection of the relevant biogeochemical processes
because such models are computationally expensive. In order to lower these
computational costs by means of larger time steps we investigated the accuracy
of steady annual cycles (i.e., an annual periodic solution) calculated with
different time steps. We compared the accuracy for a hierarchy of
biogeochemical models showing an increasing complexity and computed the steady
annual cycles with offline simulations that are based on the transport matrix
approach. For each of these biogeochemical models, we obtained practically the
same solution even with larger time steps. This indicates that larger time
steps shortened the runtime with an acceptable loss of accuracy.
| [
{
"created": "Wed, 17 Nov 2021 13:58:31 GMT",
"version": "v1"
}
] | 2021-11-18 | [
[
"Pfeil",
"Markus",
""
],
[
"Slawig",
"Thomas",
""
]
] | The reduction of computational costs for marine ecosystem models is important for the investigation and detection of the relevant biogeochemical processes because such models are computationally expensive. In order to lower these computational costs by means of larger time steps we investigated the accuracy of steady annual cycles (i.e., an annual periodic solution) calculated with different time steps. We compared the accuracy for a hierarchy of biogeochemical models showing an increasing complexity and computed the steady annual cycles with offline simulations that are based on the transport matrix approach. For each of these biogeochemical models, we obtained practically the same solution even with larger time steps. This indicates that larger time steps shortened the runtime with an acceptable loss of accuracy.
2010.00387 | Paul Scherer | Paul Scherer, Maja Tr\c{e}bacz, Nikola Simidjievski, Zohreh Shams,
Helena Andres Terre, Pietro Li\`o, Mateja Jamnik | Incorporating network based protein complex discovery into automated
model construction | 7 Pages, 2 Figures | null | null | null | q-bio.MN cs.LG cs.SI stat.ML | http://creativecommons.org/licenses/by/4.0/ | We propose a method for gene expression based analysis of cancer phenotypes
incorporating network biology knowledge through unsupervised construction of
computational graphs. The structural construction of the computational graphs
is driven by the use of topological clustering algorithms on protein-protein
networks which incorporate inductive biases stemming from network biology
research in protein complex discovery. This structurally constrains the
hypothesis space over the possible computational graph factorisation whose
parameters can then be learned through supervised or unsupervised task
settings. The sparse construction of the computational graph enables the
differential protein complex activity analysis whilst also interpreting the
individual contributions of genes/proteins involved in each individual protein
complex. In our experiments analysing a variety of cancer phenotypes, we show
that the proposed methods outperform SVM, Fully-Connected MLP, and
Randomly-Connected MLPs in all tasks. Our work introduces a scalable method for
incorporating large interaction networks as prior knowledge to drive the
construction of powerful computational models amenable to introspective study.
| [
{
"created": "Tue, 29 Sep 2020 18:46:33 GMT",
"version": "v1"
}
] | 2020-10-02 | [
[
"Scherer",
"Paul",
""
],
[
"Trȩbacz",
"Maja",
""
],
[
"Simidjievski",
"Nikola",
""
],
[
"Shams",
"Zohreh",
""
],
[
"Terre",
"Helena Andres",
""
],
[
"Liò",
"Pietro",
""
],
[
"Jamnik",
"Mateja",
""
]
] | We propose a method for gene expression based analysis of cancer phenotypes incorporating network biology knowledge through unsupervised construction of computational graphs. The structural construction of the computational graphs is driven by the use of topological clustering algorithms on protein-protein networks which incorporate inductive biases stemming from network biology research in protein complex discovery. This structurally constrains the hypothesis space over the possible computational graph factorisation whose parameters can then be learned through supervised or unsupervised task settings. The sparse construction of the computational graph enables the differential protein complex activity analysis whilst also interpreting the individual contributions of genes/proteins involved in each individual protein complex. In our experiments analysing a variety of cancer phenotypes, we show that the proposed methods outperform SVM, Fully-Connected MLP, and Randomly-Connected MLPs in all tasks. Our work introduces a scalable method for incorporating large interaction networks as prior knowledge to drive the construction of powerful computational models amenable to introspective study. |
2109.12511 | Naama Brenner | Aseel Shomar, Omri Barak, Naama Brenner | Cancer Progression as a Learning Process | null | null | null | null | q-bio.MN nlin.AO physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Drug resistance and metastasis - the major complications in cancer - both
entail adaptation of cancer cells to stress, whether a drug or a lethal new
environment. Intriguingly, these adaptive processes share similar features that
cannot be explained by a pure Darwinian scheme, including dormancy, increased
heterogeneity, and stress-induced plasticity. Here, we propose that learning
theory offers a framework to explain these features and may shed light on these
two intricate processes. In this framework, learning is performed at the single
cell level, by stress-driven exploratory trial-and-error. Such a process is not
contingent on pre-existing pathways but on a random search for a state that
diminishes the stress. We review underlying mechanisms that may support this
search, and show by using a learning model that such exploratory adaptation is
feasible in a high-dimensional system such as the cell. At the population level, we
view the tissue as a network of exploring agents that communicate and restrain
cancer formation in health. In this view, disease results from the breakdown of
homeostasis between cellular exploratory drive and tissue homeostasis.
| [
{
"created": "Sun, 26 Sep 2021 07:01:18 GMT",
"version": "v1"
}
] | 2021-09-28 | [
[
"Shomar",
"Aseel",
""
],
[
"Barak",
"Omri",
""
],
[
"Brenner",
"Naama",
""
]
] | Drug resistance and metastasis - the major complications in cancer - both entail adaptation of cancer cells to stress, whether a drug or a lethal new environment. Intriguingly, these adaptive processes share similar features that cannot be explained by a pure Darwinian scheme, including dormancy, increased heterogeneity, and stress-induced plasticity. Here, we propose that learning theory offers a framework to explain these features and may shed light on these two intricate processes. In this framework, learning is performed at the single cell level, by stress-driven exploratory trial-and-error. Such a process is not contingent on pre-existing pathways but on a random search for a state that diminishes the stress. We review underlying mechanisms that may support this search, and show by using a learning model that such exploratory adaptation is feasible in a high-dimensional system such as the cell. At the population level, we view the tissue as a network of exploring agents that communicate and restrain cancer formation in health. In this view, disease results from the breakdown of homeostasis between cellular exploratory drive and tissue homeostasis.
2405.02038 | Arthur Fyon | Arthur Fyon, Alessio Franci, Pierre Sacr\'e, Guillaume Drion | Dimensionality reduction of neuronal degeneracy reveals two interfering
physiological mechanisms | null | null | null | null | q-bio.NC math-ph math.MP q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | Neuronal systems maintain stable functions despite large variability in their
physiological components. Ion channel expression, in particular, is highly
variable in neurons exhibiting similar electrophysiological phenotypes, which
poses questions regarding how specific ion channel subsets reliably shape
neuron intrinsic properties. Here, we use detailed conductance-based modeling
to explore the origin of stable neuronal function from variable channel
composition. Using dimensionality reduction, we uncover two principal
dimensions in the channel conductance space that capture most of the variance
of the observed variability. Those two dimensions correspond to two
physiologically relevant sources of variability that can be explained by
feedback mechanisms underlying regulation of neuronal activity, providing
quantitative insights into how channel composition links to neuronal
electrophysiological activity. These insights allowed us to understand and
design a model-independent, reliable neuromodulation rule for variable neuronal
populations.
| [
{
"created": "Fri, 3 May 2024 12:17:11 GMT",
"version": "v1"
}
] | 2024-05-06 | [
[
"Fyon",
"Arthur",
""
],
[
"Franci",
"Alessio",
""
],
[
"Sacré",
"Pierre",
""
],
[
"Drion",
"Guillaume",
""
]
] | Neuronal systems maintain stable functions despite large variability in their physiological components. Ion channel expression, in particular, is highly variable in neurons exhibiting similar electrophysiological phenotypes, which poses questions regarding how specific ion channel subsets reliably shape neuron intrinsic properties. Here, we use detailed conductance-based modeling to explore the origin of stable neuronal function from variable channel composition. Using dimensionality reduction, we uncover two principal dimensions in the channel conductance space that capture most of the variance of the observed variability. Those two dimensions correspond to two physiologically relevant sources of variability that can be explained by feedback mechanisms underlying regulation of neuronal activity, providing quantitative insights into how channel composition links to neuronal electrophysiological activity. These insights allowed us to understand and design a model-independent, reliable neuromodulation rule for variable neuronal populations. |
2301.10568 | Valeriia Demareva | Grigoriy Radchenko, Valeriia Demareva, Kirill Gromov, Irina Zayceva,
Artem Rulev, Marina Zhukova, and Andrey Demarev | Neural Mechanisms of Temporal and Rhythmic Structure Processing in
Non-Musicians | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Music is increasingly being used as a therapeutic tool in the field of
rehabilitation medicine and psychophysiology. One of the key components of
music is its temporal organization. The characteristics of neurocognitive
processes during perception of musical meter in different tempo variations
have been studied using the event-related potentials technique.
The study involved 20 volunteers (6 men, the median age of the participants was
23 years). The participants were asked to listen to 4 experimental series that
differed in tempo (fast vs. slow) and meter (duple vs. triple). Each series
consisted of 625 audio stimuli, 85% of which were organized with a standard
metric structure (standard stimulus) while 15% included unexpected accents
(deviant stimulus). The results revealed that the type of metric structure
influences the detection of the change in stimuli. The analysis showed that the
N200 wave occurred significantly faster for stimuli with duple meter and fast
tempo and was the slowest for stimuli with triple meter and fast pace.
| [
{
"created": "Wed, 25 Jan 2023 13:17:41 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Feb 2023 07:02:18 GMT",
"version": "v2"
},
{
"created": "Fri, 17 Mar 2023 09:14:40 GMT",
"version": "v3"
}
] | 2023-03-20 | [
[
"Radchenko",
"Grigoriy",
""
],
[
"Demareva",
"Valeriia",
""
],
[
"Gromov",
"Kirill",
""
],
[
"Zayceva",
"Irina",
""
],
[
"Rulev",
"Artem",
""
],
[
"Zhukova",
"Marina",
""
],
[
"Demarev",
"Andrey",
""
]
] | Music is increasingly being used as a therapeutic tool in the field of rehabilitation medicine and psychophysiology. One of the key components of music is its temporal organization. The characteristics of neurocognitive processes during perception of musical meter in different tempo variations have been studied using the event-related potentials technique. The study involved 20 volunteers (6 men, the median age of the participants was 23 years). The participants were asked to listen to 4 experimental series that differed in tempo (fast vs. slow) and meter (duple vs. triple). Each series consisted of 625 audio stimuli, 85% of which were organized with a standard metric structure (standard stimulus) while 15% included unexpected accents (deviant stimulus). The results revealed that the type of metric structure influences the detection of the change in stimuli. The analysis showed that the N200 wave occurred significantly faster for stimuli with duple meter and fast tempo and was the slowest for stimuli with triple meter and fast pace.
1110.3041 | Jaewook Joo | Matthew Bailey and Jaewook Joo | Identification of network motifs capable of frequency-tunable and robust
oscillation | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Oscillation has an important role in bio-dynamical systems such as circadian
rhythms and eukaryotic cell cycle. John Tyson et al. in Nature Review Mol Cell
Biol 2008 examined a limited number of network topologies consisting of three
nodes and four or fewer edges and identified the network design principles of
biochemical oscillations. Tsai et al. in Science 2008 studied three different
network motifs, namely a negative feedback loop, coupled negative feedback
loops, and coupled positive and negative feedback loops, and found that the
interconnected positive and negative feedback loops are capable of generating
frequency-tunable oscillations. We enumerate 249 topologically unique network
architectures consisting of three nodes and at least three cyclic inhibitory
edges, and identify network architectural commonalities among three functional
groups: (1) most frequency-tunable yet less robust oscillators, (2) least
frequency-tunable and least robust oscillators, and (3) less frequency-tunable
yet most robust oscillators. We find that frequency-tunable networks cannot
simultaneously achieve high robustness, indicating a tradeoff between frequency
tunability and robustness.
| [
{
"created": "Thu, 13 Oct 2011 19:59:14 GMT",
"version": "v1"
}
] | 2011-10-14 | [
[
"Bailey",
"Matthew",
""
],
[
"Joo",
"Jaewook",
""
]
] | Oscillation has an important role in bio-dynamical systems such as circadian rhythms and eukaryotic cell cycle. John Tyson et al. in Nature Review Mol Cell Biol 2008 examined a limited number of network topologies consisting of three nodes and four or fewer edges and identified the network design principles of biochemical oscillations. Tsai et al. in Science 2008 studied three different network motifs, namely a negative feedback loop, coupled negative feedback loops, and coupled positive and negative feedback loops, and found that the interconnected positive and negative feedback loops are capable of generating frequency-tunable oscillations. We enumerate 249 topologically unique network architectures consisting of three nodes and at least three cyclic inhibitory edges, and identify network architectural commonalities among three functional groups: (1) most frequency-tunable yet less robust oscillators, (2) least frequency-tunable and least robust oscillators, and (3) less frequency-tunable yet most robust oscillators. We find that frequency-tunable networks cannot simultaneously achieve high robustness, indicating a tradeoff between frequency tunability and robustness.
2309.07194 | Huw Llewelyn Dr | Huw Llewelyn | Clinical dichotomania: A major cause of over-diagnosis and
over-treatment? | null | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Introduction: There have been many warnings that inappropriate
dichotomisation of results into positive or negative, high, or normal etc.,
during medical research could be very damaging. The aim of this paper is to
argue that this is the main cause of over-diagnosis and over-treatment.
Methods: Illustrative data were taken from a randomised controlled trial (RCT)
that compared the frequency of nephropathy within 2 years in those on treatment
with an angiotensin receptor blocker and a control, in patients for whom the
numerical value of the albumin excretion rate (AER) was available before they
were randomised. Results: When the RCT results were divided
into AER ranges, a negligible proportion developed nephropathy within 2 years
and benefited from treatment in the range 20 to 40mcg/min in which 36% of
currently treated patients fall (and are thus over-diagnosed and overtreated).
Above an AER of 40mcg/min, there was a gradual increase in proportions with
nephropathy in each range, with fewer developing nephropathy in each range on
irbesartan 150mg daily than on control and fewer still developing nephropathy
on 300mg daily. Interpretation: When logistic regression functions were fitted
to the data and calibrated, curves were created that allowed outcome
probabilities and absolute risk reductions to be estimated for use in shared
decision making (illustrated by application to an example patient). This could
avoid much overdiagnosis and overtreatment. Conclusion: Careful attention to
disease severity by interpreting each numerical diagnostic result provides
better application of the principles of diagnosis and treatment decisions that
can prevent over-diagnosis and over-treatment.
| [
{
"created": "Wed, 13 Sep 2023 13:44:59 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Oct 2023 20:47:13 GMT",
"version": "v2"
}
] | 2023-10-06 | [
[
"Llewelyn",
"Huw",
""
]
] | Introduction: There have been many warnings that inappropriate dichotomisation of results into positive or negative, high, or normal etc., during medical research could be very damaging. The aim of this paper is to argue that this is the main cause of over-diagnosis and over-treatment. Methods: Illustrative data were taken from a randomised controlled trial (RCT) that compared the frequency of nephropathy within 2 years in those on treatment with an angiotensin receptor blocker and a control, in patients for whom the numerical value of the albumin excretion rate (AER) was available before they were randomised. Results: When the RCT results were divided into AER ranges, a negligible proportion developed nephropathy within 2 years and benefited from treatment in the range 20 to 40mcg/min in which 36% of currently treated patients fall (and are thus over-diagnosed and overtreated). Above an AER of 40mcg/min, there was a gradual increase in proportions with nephropathy in each range, with fewer developing nephropathy in each range on irbesartan 150mg daily than on control and fewer still developing nephropathy on 300mg daily. Interpretation: When logistic regression functions were fitted to the data and calibrated, curves were created that allowed outcome probabilities and absolute risk reductions to be estimated for use in shared decision making (illustrated by application to an example patient). This could avoid much overdiagnosis and overtreatment. Conclusion: Careful attention to disease severity by interpreting each numerical diagnostic result provides better application of the principles of diagnosis and treatment decisions that can prevent over-diagnosis and over-treatment.
2008.03238 | Maxwell J. D. Ramstead | Maxwell J. D. Ramstead, Casper Hesp, Alec Tschantz, Ryan Smith, Axel
Constant, Karl Friston | Neural and phenotypic representation under the free-energy principle | 30 pages; 6 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to leverage the free-energy principle and its
corollary process theory, active inference, to develop a generic, generalizable
model of the representational capacities of living creatures; that is, a theory
of phenotypic representation. Given their ubiquity, we are concerned with
distributed forms of representation (e.g., population codes), whereby patterns
of ensemble activity in living tissue come to represent the causes of sensory
input or data. The active inference framework rests on the Markov blanket
formalism, which allows us to partition systems of interest, such as biological
systems, into internal states, external states, and the blanket (active and
sensory) states that render internal and external states conditionally
independent of each other. In this framework, the representational capacity of
living creatures emerges as a consequence of their Markovian structure and
nonequilibrium dynamics, which together entail a dual-aspect information
geometry. This entails a modest representational capacity: internal states have
an intrinsic information geometry that describes their trajectory over time in
state space, as well as an extrinsic information geometry that allows internal
states to encode (the parameters of) probabilistic beliefs about (fictive)
external states. Building on this, we describe here how, in an automatic and
emergent manner, information about stimuli can come to be encoded by groups of
neurons bound by a Markov blanket; what is known as the neuronal packet
hypothesis. As a concrete demonstration of this type of emergent
representation, we present numerical simulations showing that self-organizing
ensembles of active inference agents sharing the right kind of probabilistic
generative model are able to encode recoverable information about a stimulus
array.
| [
{
"created": "Fri, 7 Aug 2020 15:55:42 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Dec 2020 11:53:00 GMT",
"version": "v2"
}
] | 2020-12-02 | [
[
"Ramstead",
"Maxwell J. D.",
""
],
[
"Hesp",
"Casper",
""
],
[
"Tschantz",
"Alec",
""
],
[
"Smith",
"Ryan",
""
],
[
"Constant",
"Axel",
""
],
[
"Friston",
"Karl",
""
]
] | The aim of this paper is to leverage the free-energy principle and its corollary process theory, active inference, to develop a generic, generalizable model of the representational capacities of living creatures; that is, a theory of phenotypic representation. Given their ubiquity, we are concerned with distributed forms of representation (e.g., population codes), whereby patterns of ensemble activity in living tissue come to represent the causes of sensory input or data. The active inference framework rests on the Markov blanket formalism, which allows us to partition systems of interest, such as biological systems, into internal states, external states, and the blanket (active and sensory) states that render internal and external states conditionally independent of each other. In this framework, the representational capacity of living creatures emerges as a consequence of their Markovian structure and nonequilibrium dynamics, which together entail a dual-aspect information geometry. This entails a modest representational capacity: internal states have an intrinsic information geometry that describes their trajectory over time in state space, as well as an extrinsic information geometry that allows internal states to encode (the parameters of) probabilistic beliefs about (fictive) external states. Building on this, we describe here how, in an automatic and emergent manner, information about stimuli can come to be encoded by groups of neurons bound by a Markov blanket; what is known as the neuronal packet hypothesis. As a concrete demonstration of this type of emergent representation, we present numerical simulations showing that self-organizing ensembles of active inference agents sharing the right kind of probabilistic generative model are able to encode recoverable information about a stimulus array. |
q-bio/0311029 | Hiroaki Takagi | Hiroaki Takagi, Kunihiko Kaneko | Dynamical Systems Basis of Metamorphosis: Diversity and Plasticity of
Cellular States in Reaction Diffusion Network | 26 pages, 15 figures, submitted to Jour. Theor. Biol | null | null | null | q-bio.CB q-bio.TO | null | Dynamics maintaining diversity of cell types in a multi-cellular system are
studied in relation to the plasticity of cellular states. By adopting a
simple theoretical framework for intra-cellular chemical reaction dynamics
that accounts for the division and death of cells, the developmental process from a
single cell is studied. The cell differentiation process is found to occur through
instability in transient dynamics and cell-cell interactions. In the long-time
behavior, extinction of multiple cells is repeated, which leads to itinerancy
over successive quasi-stable multi-cellular states consisting of different
types of cells. By defining the plasticity of a cellular state, it is shown
that the plasticity of cells decreases before the large extinction, from which
diversity and plasticity are recovered. After this switching, decrease of
plasticity again occurs, leading to the next extinction of multiple cells. This
cycle of diversification and extinction is repeated. The relevance of our results
to development and evolution is briefly discussed.
| [
{
"created": "Fri, 21 Nov 2003 06:49:58 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Takagi",
"Hiroaki",
""
],
[
"Kaneko",
"Kunihiko",
""
]
] | Dynamics maintaining diversity of cell types in a multi-cellular system are studied in relationship with the plasticity of cellular states. By adopting a simple theoretical framework for intra-cellular chemical reaction dynamics with considering the division and death of cells, developmental process from a single cell is studied. Cell differentiation process is found to occur through instability in transient dynamics and cell-cell interaction. In a long time behavior, extinction of multiple cells is repeated, which leads to itinerancy over successive quasi-stable multi-cellular states consisting of different types of cells. By defining the plasticity of a cellular state, it is shown that the plasticity of cells decreases before the large extinction, from which diversity and plasticity are recovered. After this switching, decrease of plasticity again occurs, leading to the next extinction of multiple cells. This cycle of diversification and extinction is repeated. Relevance of our results to the development and evolution is briefly discussed. |
2204.02760 | Xuanyu Zhu | Xuanyu Zhu, Yang Gao, Feng Liu, Stuart Crozier, Hongfu Sun | BFRnet: A deep learning-based MR background field removal method for QSM
of the brain containing significant pathological susceptibility sources | 23 pages, 8 figures, 2 tables | null | null | null | q-bio.QM cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Introduction: Background field removal (BFR) is a critical step required for
successful quantitative susceptibility mapping (QSM). However, eliminating the
background field in brains containing significant susceptibility sources, such
as intracranial hemorrhages, is challenging due to the relatively large scale
of the field induced by these pathological susceptibility sources. Method: This
study proposes a new deep learning-based method, BFRnet, to remove background
field in healthy and hemorrhagic subjects. The network is built with the
dual-frequency octave convolutions on the U-net architecture, trained with
synthetic field maps containing significant susceptibility sources. The BFRnet
method is compared with three conventional BFR methods and one previous deep
learning method using simulated and in vivo brains from 4 healthy and 2
hemorrhagic subjects. Robustness against acquisition field-of-view (FOV)
orientation and brain masking are also investigated. Results: For both
simulation and in vivo experiments, BFRnet led to the best visually appealing
results in the local field and QSM results with the minimum contrast loss and
the most accurate hemorrhage susceptibility measurements among all five
methods. In addition, BFRnet produced the most consistent local field and
susceptibility maps between different sizes of brain masks, while conventional
methods depend drastically on precise brain extraction and further brain edge
erosions. It is also observed that BFRnet performed the best among all BFR
methods for acquisition FOVs oblique to the main magnetic field. Conclusion:
The proposed BFRnet improved the accuracy of local field reconstruction in the
hemorrhagic subjects compared with conventional BFR algorithms. The BFRnet
method was effective for acquisitions at tilted orientations and retained whole
brains without edge erosion as often required by traditional BFR methods.
| [
{
"created": "Wed, 6 Apr 2022 12:05:56 GMT",
"version": "v1"
}
] | 2022-04-07 | [
[
"Zhu",
"Xuanyu",
""
],
[
"Gao",
"Yang",
""
],
[
"Liu",
"Feng",
""
],
[
"Crozier",
"Stuart",
""
],
[
"Sun",
"Hongfu",
""
]
] | Introduction: Background field removal (BFR) is a critical step required for successful quantitative susceptibility mapping (QSM). However, eliminating the background field in brains containing significant susceptibility sources, such as intracranial hemorrhages, is challenging due to the relatively large scale of the field induced by these pathological susceptibility sources. Method: This study proposes a new deep learning-based method, BFRnet, to remove background field in healthy and hemorrhagic subjects. The network is built with the dual-frequency octave convolutions on the U-net architecture, trained with synthetic field maps containing significant susceptibility sources. The BFRnet method is compared with three conventional BFR methods and one previous deep learning method using simulated and in vivo brains from 4 healthy and 2 hemorrhagic subjects. Robustness against acquisition field-of-view (FOV) orientation and brain masking are also investigated. Results: For both simulation and in vivo experiments, BFRnet led to the best visually appealing results in the local field and QSM results with the minimum contrast loss and the most accurate hemorrhage susceptibility measurements among all five methods. In addition, BFRnet produced the most consistent local field and susceptibility maps between different sizes of brain masks, while conventional methods depend drastically on precise brain extraction and further brain edge erosions. It is also observed that BFRnet performed the best among all BFR methods for acquisition FOVs oblique to the main magnetic field. Conclusion: The proposed BFRnet improved the accuracy of local field reconstruction in the hemorrhagic subjects compared with conventional BFR algorithms. The BFRnet method was effective for acquisitions of titled orientations and retained whole brains without edge erosion as often required by traditional BFR methods. |
2010.02391 | Serghei Mangul | Dhrithi Deshpande, Karishma Chhugani, Yutong Chang, Aaron Karlsberg,
Caitlin Loeffler, Jinyang Zhang, Agata Muszynska, Jeremy Rotman, Laura Tao,
Brunilda Balliu, Elizabeth Tseng, Eleazar Eskin, Fangqing Zhao, Pejman
Mohammadi, Pawel P Labaj, Serghei Mangul | RNA-seq data science: From raw data to effective interpretation | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RNA-sequencing (RNA-seq) has become an exemplar technology in modern biology
and clinical applications over the past decade. It has gained immense
popularity in recent years, driven by the continuous efforts of the
bioinformatics community to develop accurate and scalable computational tools.
RNA-seq is a method of analyzing the RNA content of a sample using modern
sequencing platforms. It generates enormous amounts of transcriptomic data in
the form of nucleotide sequences, known as reads. RNA-seq analysis enables the
probing of genes and corresponding transcripts which is essential for answering
important biological questions, such as detecting novel exons, transcripts,
gene expressions, and studying alternative splicing structure. However,
obtaining meaningful biological signals from raw data using computational
methods is challenging due to the limitations of modern sequencing
technologies. The need to address these technological challenges has pushed
the rapid development of many novel computational tools which have evolved and
diversified in accordance with technological advancements, leading to the
current myriad of RNA-seq tools. Our review provides a systematic
overview of RNA-seq technology and 235 available RNA-seq tools across various
domains published from 2008 to 2020, discussing the interdisciplinary nature of
bioinformatics involved in RNA sequencing, analysis, and software development.
| [
{
"created": "Mon, 5 Oct 2020 23:17:28 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Oct 2020 23:20:13 GMT",
"version": "v2"
},
{
"created": "Tue, 16 Feb 2021 18:41:09 GMT",
"version": "v3"
}
] | 2021-02-17 | [
[
"Deshpande",
"Dhrithi",
""
],
[
"Chhugani",
"Karishma",
""
],
[
"Chang",
"Yutong",
""
],
[
"Karlsberg",
"Aaron",
""
],
[
"Loeffler",
"Caitlin",
""
],
[
"Zhang",
"Jinyang",
""
],
[
"Muszynska",
"Agata",
""
... | RNA-sequencing (RNA-seq) has become an exemplar technology in modern biology and clinical applications over the past decade. It has gained immense popularity in the recent years driven by continuous efforts of the bioinformatics community to develop accurate and scalable computational tools. RNA-seq is a method of analyzing the RNA content of a sample using the modern sequencing platforms. It generates enormous amounts of transcriptomic data in the form of nucleotide sequences, known as reads. RNA-seq analysis enables the probing of genes and corresponding transcripts which is essential for answering important biological questions, such as detecting novel exons, transcripts, gene expressions, and studying alternative splicing structure. However, obtaining meaningful biological signals from raw data using computational methods is challenging due to the limitations of modern sequencing technologies. The need to leverage these technological challenges have pushed the rapid development of many novel computational tools which have evolved and diversified in accordance with technological advancements, leading to the current myriad population of RNA-seq tools. Our review provides a systemic overview of RNA-seq technology and 235 available RNA-seq tools across various domains published from 2008 to 2020, discussing the interdisciplinary nature of bioinformatics involved in RNA sequencing, analysis, and software development. |
2212.05320 | Pilar Cossio Dr. | Wai Shing Tang, David Silva-S\'anchez, Julian Giraldo-Barreto, Bob
Carpenter, Sonya Hanson, Alex H. Barnett, Erik H. Thiede, Pilar Cossio | Ensemble reweighting using Cryo-EM particles | null | null | null | null | q-bio.BM physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Cryo-electron microscopy (cryo-EM) has recently become a premier method for
obtaining high-resolution structures of biological macromolecules. However, it
is limited to biomolecular samples with low conformational heterogeneity, where
all the conformations can be well-sampled at many projection angles. While
cryo-EM technically provides single-molecule data for heterogeneous molecules,
most existing reconstruction tools cannot extract the full distribution of
possible molecular configurations. To overcome these limitations, we build on a
prior Bayesian approach and develop an ensemble refinement framework that
estimates the ensemble density from a set of cryo-EM particles by reweighting a
prior ensemble of conformations, e.g., from molecular dynamics simulations or
structure prediction tools. Our work is a general approach to recovering the
equilibrium probability density of the biomolecule directly in conformational
space from single-molecule data. To validate the framework, we study the
extraction of state populations and free energies for a simple toy model and
from synthetic cryo-EM images of a simulated protein that explores multiple
folded and unfolded conformations.
| [
{
"created": "Sat, 10 Dec 2022 15:01:15 GMT",
"version": "v1"
}
] | 2022-12-13 | [
[
"Tang",
"Wai Shing",
""
],
[
"Silva-Sánchez",
"David",
""
],
[
"Giraldo-Barreto",
"Julian",
""
],
[
"Carpenter",
"Bob",
""
],
[
"Hanson",
"Sonya",
""
],
[
"Barnett",
"Alex H.",
""
],
[
"Thiede",
"Erik H.",
... | Cryo-electron microscopy (cryo-EM) has recently become a premier method for obtaining high-resolution structures of biological macromolecules. However, it is limited to biomolecular samples with low conformational heterogeneity, where all the conformations can be well-sampled at many projection angles. While cryo-EM technically provides single-molecule data for heterogeneous molecules, most existing reconstruction tools cannot extract the full distribution of possible molecular configurations. To overcome these limitations, we build on a prior Bayesian approach and develop an ensemble refinement framework that estimates the ensemble density from a set of cryo-EM particles by reweighting a prior ensemble of conformations, e.g., from molecular dynamics simulations or structure prediction tools. Our work is a general approach to recovering the equilibrium probability density of the biomolecule directly in conformational space from single-molecule data. To validate the framework, we study the extraction of state populations and free energies for a simple toy model and from synthetic cryo-EM images of a simulated protein that explores multiple folded and unfolded conformations. |
0810.4724 | David Basanta | David Basanta, Haralambos Hatzikirou, Andreas Deutsch | Studying the emergence of invasiveness in tumours using game theory | null | null | 10.1140/epjb/e2008-00249-y | null | q-bio.TO q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tumour cells have to acquire a number of capabilities if a neoplasm is to
become a cancer. One of these key capabilities is increased motility which is
needed for invasion of other tissues and metastasis. This paper presents a
qualitative mathematical model based on game theory and computer simulations
using cellular automata. With this model we study the circumstances under which
mutations that confer increased motility to cells can spread through a tumour
made of rapidly proliferating cells. The analysis suggests therapies that could
help prevent the progression towards malignancy and invasiveness of benign
tumours.
| [
{
"created": "Sun, 26 Oct 2008 22:01:49 GMT",
"version": "v1"
}
] | 2009-09-29 | [
[
"Basanta",
"David",
""
],
[
"Hatzikirou",
"Haralambos",
""
],
[
"Deutsch",
"Andreas",
""
]
] | Tumour cells have to acquire a number of capabilities if a neoplasm is to become a cancer. One of these key capabilities is increased motility which is needed for invasion of other tissues and metastasis. This paper presents a qualitative mathematical model based on game theory and computer simulations using cellular automata. With this model we study the circumstances under which mutations that confer increased motility to cells can spread through a tumour made of rapidly proliferating cells. The analysis suggests therapies that could help prevent the progression towards malignancy and invasiveness of benign tumours. |
2202.13980 | Xiao Zhou | Xiao Zhou, Xiaohu Zhang, Paolo Santi, and Carlo Ratti | Evaluation of non-pharmaceutical interventions and optimal strategies
for containing the COVID-19 pandemic | 16 pages, 7 figures | null | null | null | q-bio.PE cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given that multiple new COVID-19 variants are continuously emerging,
non-pharmaceutical interventions remain the primary control strategies to curb
the further spread of the coronavirus. However, implementing strict interventions
over extended periods of time inevitably hurts the economy. With the aim of
solving this multi-objective decision-making problem, we investigate the
underlying associations between policies, mobility patterns, and virus
transmission. We further evaluate the relative performance of existing COVID-19
control measures and explore potential optimal strategies that can strike the
right balance between public health and socio-economic recovery for individual
states in the US. The results highlight the power of state of emergency
declaration and wearing face masks and emphasize the necessity of pursuing
tailor-made strategies for different states and phases of epidemiological
transmission. Our framework enables policymakers to create more refined designs
of COVID-19 strategies and can be extended to inform policy makers of any
country about best practices in pandemic response.
| [
{
"created": "Mon, 28 Feb 2022 17:33:25 GMT",
"version": "v1"
}
] | 2022-03-01 | [
[
"Zhou",
"Xiao",
""
],
[
"Zhang",
"Xiaohu",
""
],
[
"Santi",
"Paolo",
""
],
[
"Ratti",
"Carlo",
""
]
] | Given multiple new COVID-19 variants are continuously emerging, non-pharmaceutical interventions are still primary control strategies to curb the further spread of coronavirus. However, implementing strict interventions over extended periods of time is inevitably hurting the economy. With an aim to solve this multi-objective decision-making problem, we investigate the underlying associations between policies, mobility patterns, and virus transmission. We further evaluate the relative performance of existing COVID-19 control measures and explore potential optimal strategies that can strike the right balance between public health and socio-economic recovery for individual states in the US. The results highlight the power of state of emergency declaration and wearing face masks and emphasize the necessity of pursuing tailor-made strategies for different states and phases of epidemiological transmission. Our framework enables policymakers to create more refined designs of COVID-19 strategies and can be extended to inform policy makers of any country about best practices in pandemic response. |
1804.00970 | Wolfgang Fuhl | Wolfgang Fuhl, Thiago Santini, Thomas Kuebler, Nora Castner, Wolfgang
Rosenstiel, Enkelejda Kasneci | Eye movement simulation and detector creation to reduce laborious
parameter adjustments | null | null | null | null | q-bio.NC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Eye movements hold information about human perception, intention and
cognitive state. Various algorithms have been proposed to identify and
distinguish eye movements, particularly fixations, saccades, and smooth
pursuits. A major drawback of existing algorithms is that they rely on accurate
and constant sampling rates, impeding straightforward adaptation to new
movements such as microsaccades. We propose a novel eye movement simulator
that i) probabilistically simulates saccade movements as gamma distributions
considering different peak velocities and ii) models smooth pursuit onsets with
the sigmoid function. This simulator is combined with a machine learning
approach to create detectors for general and specific velocity profiles.
Additionally, our approach is capable of using any sampling rate, even with
fluctuations. The machine learning approach consists of different binary
patterns combined using conditional distributions. The simulation is evaluated
against publicly available real data using a squared error, and the detectors
are evaluated against state-of-the-art algorithms.
| [
{
"created": "Wed, 28 Mar 2018 06:48:37 GMT",
"version": "v1"
}
] | 2018-04-04 | [
[
"Fuhl",
"Wolfgang",
""
],
[
"Santini",
"Thiago",
""
],
[
"Kuebler",
"Thomas",
""
],
[
"Castner",
"Nora",
""
],
[
"Rosenstiel",
"Wolfgang",
""
],
[
"Kasneci",
"Enkelejda",
""
]
] | Eye movements hold information about human perception, intention and cognitive state. Various algorithms have been proposed to identify and distinguish eye movements, particularly fixations, saccades, and smooth pursuits. A major drawback of existing algorithms is that they rely on accurate and constant sampling rates, impeding straightforward adaptation to new movements such as micro saccades. We propose a novel eye movement simulator that i) probabilistically simulates saccade movements as gamma distributions considering different peak velocities and ii) models smooth pursuit onsets with the sigmoid function. This simulator is combined with a machine learning approach to create detectors for general and specific velocity profiles. Additionally, our approach is capable of using any sampling rate, even with fluctuations. The machine learning approach consists of different binary patterns combined using conditional distributions. The simulation is evaluated against publicly available real data using a squared error, and the detectors are evaluated against state-of-the-art algorithms. |
1809.07247 | Phil Nelson | Philip C. Nelson | Time to Stop Telling Biophysics Students that Light is Primarily a Wave | null | Biophys. J. 114: 761--765 (2018) | 10.1016/j.bpj.2017.12.036 | null | q-bio.OT physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Standard pedagogy introduces optics as though it were a consequence of
Maxwell's equations, and only grudgingly admits, usually in a rushed aside,
that light has a particulate character that can somehow be reconciled with the
wave picture. Recent revolutionary advances in optical imaging, however, make
this approach more and more unhelpful: How are we to describe two-photon
imaging, FRET, localization microscopy, and a host of related techniques to
students who think of light primarily as a wave? I was surprised to find that
everything I wanted my biophysics students to know about light, including image
formation, x-ray diffraction, and even Bessel beams, could be expressed as well
(or better) from the quantum viewpoint pioneered by Richard Feynman. Even my
undergraduate students grasp this viewpoint as well as (or better than) the
traditional one, and by mid-semester they are already well positioned to
integrate the latest advances into their understanding. Moreover, I have found
that this approach clarifies my own understanding of new techniques.
| [
{
"created": "Sat, 15 Sep 2018 00:48:37 GMT",
"version": "v1"
}
] | 2018-09-20 | [
[
"Nelson",
"Philip C.",
""
]
] | Standard pedagogy introduces optics as though it were a consequence of Maxwell's equations, and only grudgingly admits, usually in a rushed aside, that light has a particulate character that can somehow be reconciled with the wave picture. Recent revolutionary advances in optical imaging, however, make this approach more and more unhelpful: How are we to describe two-photon imaging, FRET, localization microscopy, and a host of related techniques to students who think of light primarily as a wave? I was surprised to find that everything I wanted my biophysics students to know about light, including image formation, x-ray diffraction, and even Bessel beams, could be expressed as well (or better) from the quantum viewpoint pioneered by Richard Feynman. Even my undergraduate students grasp this viewpoint as well as (or better than) the traditional one, and by mid-semester they are already well positioned to integrate the latest advances into their understanding. Moreover, I have found that this approach clarifies my own understanding of new techniques. |
q-bio/0610054 | Martin Howard | Richard P. Sear (University of Surrey) and Martin Howard (Imperial
College London) | Modeling Dual Pathways for the Metazoan Spindle Assembly Checkpoint | 9 pages, 2 figures | Proc. Natl. Acad. Sci. 103 16758-16763 (2006) | 10.1073/pnas.0603174103 | null | q-bio.SC cond-mat.stat-mech | null | Using computational modelling, we investigate mechanisms of signal
transduction focusing on the spindle assembly checkpoint where a single
unattached kinetochore is able to signal to prevent cell cycle progression.
This inhibitory signal switches off rapidly once spindle microtubules have
attached to all kinetochores. This requirement tightly constrains the possible
mechanisms. Here we investigate two possible mechanisms for spindle checkpoint
operation in metazoan cells, both supported by recent experiments. The first
involves the free diffusion and sequestration of cell-cycle regulators. This
mechanism is severely constrained both by experimental fluorescence recovery
data and also by the large volumes involved in open mitosis in metazoan cells.
Using a simple mathematical analysis and computer simulation, we find that this
mechanism can generate the inhibition found in experiment but likely requires a
two stage signal amplification cascade. The second mechanism involves spatial
gradients of a short-lived inhibitory signal that propagates first by diffusion
but then primarily via active transport along spindle microtubules. We propose
that both mechanisms may be operative in the metazoan spindle assembly
checkpoint, with either able to trigger anaphase onset even without support
from the other pathway.
| [
{
"created": "Sat, 28 Oct 2006 22:37:59 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Sear",
"Richard P.",
"",
"University of Surrey"
],
[
"Howard",
"Martin",
"",
"Imperial\n College London"
]
] | Using computational modelling, we investigate mechanisms of signal transduction focusing on the spindle assembly checkpoint where a single unattached kinetochore is able to signal to prevent cell cycle progression. This inhibitory signal switches off rapidly once spindle microtubules have attached to all kinetochores. This requirement tightly constrains the possible mechanisms. Here we investigate two possible mechanisms for spindle checkpoint operation in metazoan cells, both supported by recent experiments. The first involves the free diffusion and sequestration of cell-cycle regulators. This mechanism is severely constrained both by experimental fluorescence recovery data and also by the large volumes involved in open mitosis in metazoan cells. Using a simple mathematical analysis and computer simulation, we find that this mechanism can generate the inhibition found in experiment but likely requires a two stage signal amplification cascade. The second mechanism involves spatial gradients of a short-lived inhibitory signal that propagates first by diffusion but then primarily via active transport along spindle microtubules. We propose that both mechanisms may be operative in the metazoan spindle assembly checkpoint, with either able to trigger anaphase onset even without support from the other pathway. |
2304.10905 | Antonia Mey | Nicholas T. Runcie, Antonia S. J. S. Mey | SILVR: Guided Diffusion for Molecule Generation | 20 pages, 11 figures | null | 10.1021/acs.jcim.3c00667 | null | q-bio.BM stat.ML | http://creativecommons.org/licenses/by/4.0/ | Computationally generating novel synthetically accessible compounds with high
affinity and low toxicity is a great challenge in drug design. Machine-learning
models beyond conventional pharmacophoric methods have shown promise in
generating novel small molecule compounds, but require significant tuning for a
specific protein target. Here, we introduce a method called selective iterative
latent variable refinement (SILVR) for conditioning an existing diffusion-based
equivariant generative model without retraining. The model allows the
generation of new molecules that fit into a binding site of a protein based on
fragment hits. We use the SARS-CoV-2 Main protease fragments from Diamond
X-Chem that form part of the COVID Moonshot project as a reference dataset for
conditioning the molecule generation. The SILVR rate controls the extent of
conditioning and we show that moderate SILVR rates make it possible to generate
new molecules of similar shape to the original fragments, meaning that the new
molecules fit the binding site without knowledge of the protein. We can also
merge up to 3 fragments into a new molecule without affecting the quality of
molecules generated by the underlying generative model. Our method is
generalizable to any protein target with known fragments and any
diffusion-based model for molecule generation.
| [
{
"created": "Fri, 21 Apr 2023 11:47:38 GMT",
"version": "v1"
}
] | 2023-10-30 | [
[
"Runcie",
"Nicholas T.",
""
],
[
"Mey",
"Antonia S. J. S.",
""
]
] | Computationally generating novel synthetically accessible compounds with high affinity and low toxicity is a great challenge in drug design. Machine-learning models beyond conventional pharmacophoric methods have shown promise in generating novel small molecule compounds, but require significant tuning for a specific protein target. Here, we introduce a method called selective iterative latent variable refinement (SILVR) for conditioning an existing diffusion-based equivariant generative model without retraining. The model allows the generation of new molecules that fit into a binding site of a protein based on fragment hits. We use the SARS-CoV-2 Main protease fragments from Diamond X-Chem that form part of the COVID Moonshot project as a reference dataset for conditioning the molecule generation. The SILVR rate controls the extent of conditioning and we show that moderate SILVR rates make it possible to generate new molecules of similar shape to the original fragments, meaning that the new molecules fit the binding site without knowledge of the protein. We can also merge up to 3 fragments into a new molecule without affecting the quality of molecules generated by the underlying generative model. Our method is generalizable to any protein target with known fragments and any diffusion-based model for molecule generation. |
1301.4144 | Dimitris Vavoulis | Dimitrios V. Vavoulis and Julian Gough | Non-parametric Bayesian modelling of digital gene expression data | null | J Comput Sci Syst Biol 7:001-009 (2013) | 10.4172/jcsb.1000131 | null | q-bio.QM q-bio.GN stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Next-generation sequencing technologies provide a revolutionary tool for
generating gene expression data. Starting with a fixed RNA sample, they
construct a library of millions of differentially abundant short sequence tags
or "reads", which constitute a fundamentally discrete measure of the level of
gene expression. A common limitation in experiments using these technologies is
the low number or even absence of biological replicates, which complicates the
statistical analysis of digital gene expression data. Analysis of this type of
data has often been based on modified tests originally devised for analysing
microarrays; both these and even de novo methods for the analysis of RNA-seq
data are plagued by the common problem of low replication. We propose a novel,
non-parametric Bayesian approach for the analysis of digital gene expression
data. We begin with a hierarchical model for modelling over-dispersed count
data and a blocked Gibbs sampling algorithm for inferring the posterior
distribution of model parameters conditional on these counts. The algorithm
compensates for the problem of low numbers of biological replicates by
clustering together genes with tag counts that are likely sampled from a common
distribution and using this augmented sample for estimating the parameters of
this distribution. The number of clusters is not decided a priori, but it is
inferred along with the remaining model parameters. We demonstrate the ability
of this approach to model biological data with high fidelity by applying the
algorithm on a public dataset obtained from cancerous and non-cancerous neural
tissues.
| [
{
"created": "Thu, 17 Jan 2013 16:08:00 GMT",
"version": "v1"
}
] | 2014-05-13 | [
[
"Vavoulis",
"Dimitrios V.",
""
],
[
"Gough",
"Julian",
""
]
] | Next-generation sequencing technologies provide a revolutionary tool for generating gene expression data. Starting with a fixed RNA sample, they construct a library of millions of differentially abundant short sequence tags or "reads", which constitute a fundamentally discrete measure of the level of gene expression. A common limitation in experiments using these technologies is the low number or even absence of biological replicates, which complicates the statistical analysis of digital gene expression data. Analysis of this type of data has often been based on modified tests originally devised for analysing microarrays; both these and even de novo methods for the analysis of RNA-seq data are plagued by the common problem of low replication. We propose a novel, non-parametric Bayesian approach for the analysis of digital gene expression data. We begin with a hierarchical model for modelling over-dispersed count data and a blocked Gibbs sampling algorithm for inferring the posterior distribution of model parameters conditional on these counts. The algorithm compensates for the problem of low numbers of biological replicates by clustering together genes with tag counts that are likely sampled from a common distribution and using this augmented sample for estimating the parameters of this distribution. The number of clusters is not decided a priori, but it is inferred along with the remaining model parameters. We demonstrate the ability of this approach to model biological data with high fidelity by applying the algorithm on a public dataset obtained from cancerous and non-cancerous neural tissues. |
2011.03946 | Barbara Valle | Barbara Valle, Roberto Ambrosini, Marco Caccianiga, Mauro Gobbi | Ecology of the cold-adapted species Nebria germari (Coleoptera:
Carabidae): the role of supraglacial stony debris as refugium during the
current interglacial period | 20 pages, 4 tables, 3 figures. This is the pre-print version in the
non-final form, accepted for publication in Acta Zoologica Hungarica after
the editing changes are made | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In the current scenario of climate change, cold-adapted insects are among the
most threatened organisms in high-altitude habitats of the Alps. Upslope shifts
and changes in phenology are two of the most investigated responses to climate
change, but there is an increasing interest in evaluating the presence of
high-altitude landforms acting as refugia. Nebria germari Heer, 1837
(Coleoptera: Carabidae) is a hygrophilic and cold-adapted species that still
exhibits large populations on supraglacial debris of the Eastern Alps. This
work aims at describing the ecology and phenology of the populations living on
supraglacial debris. To this end, we analysed the populations from three
Dolomitic glaciers whose surfaces are partially covered by stony debris. We
found that supraglacial debris is characterised by more stable, colder and
wetter conditions than the surrounding debris slopes and by a shorter snow-free
period. The populations found on supraglacial debris were spring breeders,
differently from those documented in the 1980s on Dolomitic high alpine
grasslands, which were reported as autumn breeders. Currently Nebria germari
seems therefore to find a suitable habitat on supraglacial debris, where
micrometeorological conditions are appropriate for its life-cycle and
competition and predation are reduced.
| [
{
"created": "Sun, 8 Nov 2020 10:28:48 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Nov 2020 08:10:09 GMT",
"version": "v2"
}
] | 2020-11-23 | [
[
"Valle",
"Barbara",
""
],
[
"Ambrosini",
"Roberto",
""
],
[
"Caccianiga",
"Marco",
""
],
[
"Gobbi",
"Mauro",
""
]
] | In the current scenario of climate change, cold-adapted insects are among the most threatened organisms in high-altitude habitats of the Alps. Upslope shifts and changes in phenology are two of the most investigated responses to climate change, but there is an increasing interest in evaluating the presence of high-altitude landforms acting as refugia. Nebria germari Heer, 1837 (Coleoptera: Carabidae) is a hygrophilic and cold-adapted species that still exhibits large populations on supraglacial debris of the Eastern Alps. This work aims at describing the ecology and phenology of the populations living on supraglacial debris. To this end, we analysed the populations from three Dolomitic glaciers whose surfaces are partially covered by stony debris. We found that supraglacial debris is characterised by more stable colder and wetter conditions than the surrounding debris slopes and by a shorter snow-free period. The populations found on supraglacial debris were spring breeders, differently from those documented in the 1980s on Dolomitic high alpine grasslands, which were reported as autumn breeders. Currently Nebria germari seems therefore to find a suitable habitat on supraglacial debris, where micrometeorological conditions are appropriate for its life-cycle and competition and predation are reduced. |
2204.09486 | Zhuangwei Shi | Zhuangwei Shi, Bo Li | Graph neural networks and attention-based CNN-LSTM for protein
classification | Briefings of project outcomes about deep learning on protein
classification | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper focuses on three critical problems in protein classification.
Firstly, Carbohydrate-active enzyme (CAZyme) classification can help people to
understand the properties of enzymes. However, one CAZyme may belong to several
classes. This leads to Multi-label CAZyme classification. Secondly, to capture
information from the secondary structure of protein, protein classification is
modeled as a graph classification problem. Thirdly, compound-protein interactions
prediction employs graph learning for compound with sequential embedding for
protein. This can be seen as a classification task for compound-protein pairs.
This paper proposes three models for protein classification. Firstly, this
paper proposes a Multi-label CAZyme classification model using CNN-LSTM with
Attention mechanism. Secondly, this paper proposes a variational graph
autoencoder based subspace learning model for protein graph classification.
Thirdly, this paper proposes graph isomorphism networks (GIN) and
Attention-based CNN-LSTM for compound-protein interactions prediction, as well
as comparing GIN with graph convolution networks (GCN) and graph attention
networks (GAT) in this task. The proposed models are effective for protein
classification. Source code and data are available at
https://github.com/zshicode/GNN-AttCL-protein. Besides, this repository
collects and collates the benchmark datasets with respect to the above problems,
including CAZyme classification, enzyme protein graph classification,
compound-protein interactions prediction, drug-target affinities prediction and
drug-drug interactions prediction. Hence, evaluation on these benchmark
datasets becomes more convenient.
| [
{
"created": "Wed, 20 Apr 2022 14:34:29 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Feb 2023 05:40:04 GMT",
"version": "v2"
}
] | 2023-02-23 | [
[
"Shi",
"Zhuangwei",
""
],
[
"Li",
"Bo",
""
]
] | This paper focuses on three critical problems in protein classification. Firstly, Carbohydrate-active enzyme (CAZyme) classification can help people to understand the properties of enzymes. However, one CAZyme may belong to several classes. This leads to Multi-label CAZyme classification. Secondly, to capture information from the secondary structure of protein, protein classification is modeled as a graph classification problem. Thirdly, compound-protein interactions prediction employs graph learning for compound with sequential embedding for protein. This can be seen as a classification task for compound-protein pairs. This paper proposes three models for protein classification. Firstly, this paper proposes a Multi-label CAZyme classification model using CNN-LSTM with Attention mechanism. Secondly, this paper proposes a variational graph autoencoder based subspace learning model for protein graph classification. Thirdly, this paper proposes graph isomorphism networks (GIN) and Attention-based CNN-LSTM for compound-protein interactions prediction, as well as comparing GIN with graph convolution networks (GCN) and graph attention networks (GAT) in this task. The proposed models are effective for protein classification. Source code and data are available at https://github.com/zshicode/GNN-AttCL-protein. Besides, this repository collects and collates the benchmark datasets with respect to the above problems, including CAZyme classification, enzyme protein graph classification, compound-protein interactions prediction, drug-target affinities prediction and drug-drug interactions prediction. Hence, evaluation on these benchmark datasets becomes more convenient.
1705.09509 | Boris Barbour | Boris Barbour | Inverse sensitivity of plasmonic nanosensors at the single-molecule
limit | null | null | null | null | q-bio.QM physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent work using plasmonic nanosensors in a clinically relevant detection
assay reports extreme sensitivity based upon a mechanism termed 'inverse
sensitivity', whereby reduction of substrate concentration increases reaction
rate, even at the single-molecule limit. This near-homoeopathic mechanism
contradicts the law of mass action. The assay involves deposition of silver
atoms upon gold nanostars, changing their absorption spectrum. Multiple
additional aspects of the assay appear to be incompatible with settled chemical
knowledge, in particular the detection of tiny numbers of silver atoms on a
background of the classic 'silver mirror reaction'. Finally, it is estimated
here that the reported spectral changes require some 2.5E11 times more silver
atoms than are likely to be produced. It is suggested that alternative
explanations must be sought for the original observations.
| [
{
"created": "Fri, 26 May 2017 10:06:10 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Jun 2017 19:59:07 GMT",
"version": "v2"
}
] | 2017-06-05 | [
[
"Barbour",
"Boris",
""
]
] | Recent work using plasmonic nanosensors in a clinically relevant detection assay reports extreme sensitivity based upon a mechanism termed 'inverse sensitivity', whereby reduction of substrate concentration increases reaction rate, even at the single-molecule limit. This near-homoeopathic mechanism contradicts the law of mass action. The assay involves deposition of silver atoms upon gold nanostars, changing their absorption spectrum. Multiple additional aspects of the assay appear to be incompatible with settled chemical knowledge, in particular the detection of tiny numbers of silver atoms on a background of the classic 'silver mirror reaction'. Finally, it is estimated here that the reported spectral changes require some 2.5E11 times more silver atoms than are likely to be produced. It is suggested that alternative explanations must be sought for the original observations. |
1806.08840 | Hamid Eghbal-zadeh | Hamid Eghbal-zadeh, Lukas Fischer, Niko Popitsch, Florian Kromp,
Sabine Taschner-Mandl, Khaled Koutini, Teresa Gerber, Eva Bozsaky, Peter F.
Ambros, Inge M. Ambros, Gerhard Widmer and Bernhard A. Moser | Deep SNP: An End-to-end Deep Neural Network with Attention-based
Localization for Break-point Detection in SNP Array Genomic data | Accepted at the Joint ICML and IJCAI 2018 Workshop on Computational
Biology | null | null | null | q-bio.GN cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diagnosis and risk stratification of cancer and many other diseases require
the detection of genomic breakpoints as a prerequisite of calling copy number
alterations (CNA). This, however, is still challenging and requires
time-consuming manual curation. As deep-learning methods outperformed classical
state-of-the-art algorithms in various domains and have also been successfully
applied to life science problems including medicine and biology, we here
propose Deep SNP, a novel Deep Neural Network to learn from genomic data.
Specifically, we used a manually curated dataset from 12 genomic single
nucleotide polymorphism array (SNPa) profiles as truth-set and aimed at
predicting the presence or absence of genomic breakpoints, an indicator of
structural chromosomal variations, in windows of 40,000 probes. We compare our
results with well-known neural network models as well as with Rawcopy, though
this tool is designed to predict breakpoints and, in addition, genomic segments
with high sensitivity. We show that Deep SNP is capable of successfully predicting
the presence or absence of a breakpoint in large genomic windows and
outperforms state-of-the-art neural network models. Qualitative examples
suggest that integration of a localization unit may enable breakpoint detection
and prediction of genomic segments, even if the breakpoint coordinates were not
provided for network training. These results warrant further evaluation of
DeepSNP for breakpoint localization and subsequent calling of genomic segments.
| [
{
"created": "Fri, 22 Jun 2018 20:15:13 GMT",
"version": "v1"
}
] | 2018-06-26 | [
[
"Eghbal-zadeh",
"Hamid",
""
],
[
"Fischer",
"Lukas",
""
],
[
"Popitsch",
"Niko",
""
],
[
"Kromp",
"Florian",
""
],
[
"Taschner-Mandl",
"Sabine",
""
],
[
"Koutini",
"Khaled",
""
],
[
"Gerber",
"Teresa",
""
... | Diagnosis and risk stratification of cancer and many other diseases require the detection of genomic breakpoints as a prerequisite of calling copy number alterations (CNA). This, however, is still challenging and requires time-consuming manual curation. As deep-learning methods outperformed classical state-of-the-art algorithms in various domains and have also been successfully applied to life science problems including medicine and biology, we here propose Deep SNP, a novel Deep Neural Network to learn from genomic data. Specifically, we used a manually curated dataset from 12 genomic single nucleotide polymorphism array (SNPa) profiles as truth-set and aimed at predicting the presence or absence of genomic breakpoints, an indicator of structural chromosomal variations, in windows of 40,000 probes. We compare our results with well-known neural network models as well as Rawcopy though this tool is designed to predict breakpoints and in addition genomic segments with high sensitivity. We show, that Deep SNP is capable of successfully predicting the presence or absence of a breakpoint in large genomic windows and outperforms state-of-the-art neural network models. Qualitative examples suggest that integration of a localization unit may enable breakpoint detection and prediction of genomic segments, even if the breakpoint coordinates were not provided for network training. These results warrant further evaluation of DeepSNP for breakpoint localization and subsequent calling of genomic segments. |
1607.01384 | Renzhi Cao | Renzhi Cao, Zhaolong Zhong, Jianlin Cheng | SMISS: A protein function prediction server by integrating multiple
sources | 13 pages, 7 figures | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | SMISS is a novel web server for protein function prediction. Three different
predictors can be selected for different uses. It integrates different sources
to improve the protein function prediction accuracy, including the query
protein sequence, protein-protein interaction network, gene-gene interaction
network, and the rules mined from protein function associations. SMISS
automatically switches to ab initio protein function prediction based on the
query sequence when there are no homologs in the database. It takes FASTA-format
sequences as input, and several sequences can be submitted together without
significantly slowing computation. PHP and Perl are the two primary
programming languages used in the server. The CodeIgniter MVC PHP web framework
and Bootstrap front-end framework are used for building the server. It can be
used on different platforms in a standard web browser, such as Windows, Mac OS X,
Linux, and iOS. No plugins are needed for our website. Availability:
http://tulip.rnet.missouri.edu/profunc/.
| [
{
"created": "Tue, 22 Mar 2016 00:50:31 GMT",
"version": "v1"
}
] | 2016-07-06 | [
[
"Cao",
"Renzhi",
""
],
[
"Zhong",
"Zhaolong",
""
],
[
"Cheng",
"Jianlin",
""
]
] | SMISS is a novel web server for protein function prediction. Three different predictors can be selected for different uses. It integrates different sources to improve the protein function prediction accuracy, including the query protein sequence, protein-protein interaction network, gene-gene interaction network, and the rules mined from protein function associations. SMISS automatically switches to ab initio protein function prediction based on the query sequence when there are no homologs in the database. It takes FASTA-format sequences as input, and several sequences can be submitted together without significantly slowing computation. PHP and Perl are the two primary programming languages used in the server. The CodeIgniter MVC PHP web framework and Bootstrap front-end framework are used for building the server. It can be used on different platforms in a standard web browser, such as Windows, Mac OS X, Linux, and iOS. No plugins are needed for our website. Availability: http://tulip.rnet.missouri.edu/profunc/.
2010.08117 | Y. Charles Li | Z. C. Feng and Y. Charles Li | Enrichment paradox and applications | null | null | null | null | q-bio.PE nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduced a more general predator-prey model to analyze the paradox of
enrichment. We hope the results obtained for the model can guide us in
identifying the paradox of enrichment in real field settings.
| [
{
"created": "Fri, 16 Oct 2020 02:41:45 GMT",
"version": "v1"
}
] | 2020-10-19 | [
[
"Feng",
"Z. C.",
""
],
[
"Li",
"Y. Charles",
""
]
] | We introduced a more general predator-prey model to analyze the paradox of enrichment. We hope the results obtained for the model can guide us in identifying the paradox of enrichment in real field settings.
1306.1938 | Joachim Krug | Benjamin Schmiegelt, Joachim Krug | Evolutionary accessibility of modular fitness landscapes | 26 pages, 12 figures; final version with some typos corrected | Journal of Statistical Physics 154:334-355 (2014) | 10.1007/s10955-013-0868-8 | null | q-bio.PE cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A fitness landscape is a mapping from the space of genetic sequences, which
is modeled here as a binary hypercube of dimension $L$, to the real numbers. We
consider random models of fitness landscapes, where fitness values are assigned
according to some probabilistic rule, and study the statistical properties of
pathways to the global fitness maximum along which fitness increases
monotonically. Such paths are important for evolution because they are the only
ones that are accessible to an adapting population when mutations occur at a
low rate. The focus of this work is on the block model introduced by A.S.
Perelson and C.A. Macken [Proc. Natl. Acad. Sci. USA 92:9657 (1995)] where the
genome is decomposed into disjoint sets of loci (`modules') that contribute
independently to fitness, and fitness values within blocks are assigned at
random. We show that the number of accessible paths can be written as a product
of the path numbers within the blocks, which provides a detailed analytic
description of the path statistics. The block model can be viewed as a special
case of Kauffman's NK-model, and we compare the analytic results to simulations
of the NK-model with different genetic architectures. We find that the mean
number of accessible paths in the different versions of the model are quite
similar, but the distribution of the path number is qualitatively different in
the block model due to its multiplicative structure. A similar statement
applies to the number of local fitness maxima in the NK-models, which has been
studied extensively in previous works. The overall evolutionary accessibility
of the landscape, as quantified by the probability to find at least one
accessible path to the global maximum, is dramatically lowered by the modular
structure.
| [
{
"created": "Sat, 8 Jun 2013 16:02:58 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Oct 2013 10:14:46 GMT",
"version": "v2"
},
{
"created": "Tue, 4 Feb 2014 12:45:06 GMT",
"version": "v3"
}
] | 2015-06-16 | [
[
"Schmiegelt",
"Benjamin",
""
],
[
"Krug",
"Joachim",
""
]
] | A fitness landscape is a mapping from the space of genetic sequences, which is modeled here as a binary hypercube of dimension $L$, to the real numbers. We consider random models of fitness landscapes, where fitness values are assigned according to some probabilistic rule, and study the statistical properties of pathways to the global fitness maximum along which fitness increases monotonically. Such paths are important for evolution because they are the only ones that are accessible to an adapting population when mutations occur at a low rate. The focus of this work is on the block model introduced by A.S. Perelson and C.A. Macken [Proc. Natl. Acad. Sci. USA 92:9657 (1995)] where the genome is decomposed into disjoint sets of loci (`modules') that contribute independently to fitness, and fitness values within blocks are assigned at random. We show that the number of accessible paths can be written as a product of the path numbers within the blocks, which provides a detailed analytic description of the path statistics. The block model can be viewed as a special case of Kauffman's NK-model, and we compare the analytic results to simulations of the NK-model with different genetic architectures. We find that the mean number of accessible paths in the different versions of the model are quite similar, but the distribution of the path number is qualitatively different in the block model due to its multiplicative structure. A similar statement applies to the number of local fitness maxima in the NK-models, which has been studied extensively in previous works. The overall evolutionary accessibility of the landscape, as quantified by the probability to find at least one accessible path to the global maximum, is dramatically lowered by the modular structure. |
2406.10921 | Anthony Mays | Chris Mays, Marcos Amores, Anthony Mays | Improved absolute abundance estimates from spatial count data with
simulation and microfossil case studies | 103 pages, of which 67 pages constitutes the main text, and the
remaining 36 pages are Supporting Information. 9 figures. The code for the
related simulations can be found via a link in the manuscript | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many fundamental parameters of biological systems -- eg. productivity,
population sizes and biomass -- are most effectively expressed in absolute
terms. In contrast to proportional data (eg. percentages), absolute values
provide standardised metrics on the functioning of biological entities (eg.
organisms, species, ecosystems). These are particularly valuable when
comparing assemblages across time and space. Since it is almost always
impractical to count entire populations, estimates of population abundances
require a sampling method that is both accurate and precise. Such absolute
abundance estimates typically entail more "sampling effort" (data collection
time) than proportional data. Here we refined a method of absolute abundance
estimates -- the "exotic marker technique" -- by producing a variant that is
more efficient without losing accuracy. This new method, the "field-of-view
subsampling method" (FOVS method) is based on area subsampling, from which
large samples can be quickly extrapolated.
Two case studies of the exotic marker technique were employed: 1, computer
simulations; and 2, an observational "real world" data set of terrestrial
organic microfossils from the Permian- and Triassic-aged rock strata of
southeastern Australia, spiked with marker grains of known quantity and
variance. We compared the FOVS method against the traditional "linear method"
using three metrics: 1, concentration (specimens/gram of sediment); 2,
precision and 3, data collection effort. In almost all cases, the FOVS method
delivers higher precision than the linear method, with equivalent effort, and
our computer simulations suggest that the FOVS method more accurately estimates
the true error for large target-to-marker ratios. Since we predict that these
conditions are typically common, we recommend the new FOVS method in almost
every "real world" case.
| [
{
"created": "Sun, 16 Jun 2024 12:54:41 GMT",
"version": "v1"
}
] | 2024-06-18 | [
[
"Mays",
"Chris",
""
],
[
"Amores",
"Marcos",
""
],
[
"Mays",
"Anthony",
""
]
] | Many fundamental parameters of biological systems -- eg. productivity, population sizes and biomass -- are most effectively expressed in absolute terms. In contrast to proportional data (eg. percentages), absolute values provide standardised metrics on the functioning of biological entities (eg. organisms, species, ecosystems). These are particularly valuable when comparing assemblages across time and space. Since it is almost always impractical to count entire populations, estimates of population abundances require a sampling method that is both accurate and precise. Such absolute abundance estimates typically entail more "sampling effort" (data collection time) than proportional data. Here we refined a method of absolute abundance estimates -- the "exotic marker technique" -- by producing a variant that is more efficient without losing accuracy. This new method, the "field-of-view subsampling method" (FOVS method) is based on area subsampling, from which large samples can be quickly extrapolated. Two case studies of the exotic marker technique were employed: 1, computer simulations; and 2, an observational "real world" data set of terrestrial organic microfossils from the Permian- and Triassic-aged rock strata of southeastern Australia, spiked with marker grains of known quantity and variance. We compared the FOVS method against the traditional "linear method" using three metrics: 1, concentration (specimens/gram of sediment); 2, precision and 3, data collection effort. In almost all cases, the FOVS method delivers higher precision than the linear method, with equivalent effort, and our computer simulations suggest that the FOVS method more accurately estimates the true error for large target-to-marker ratios. Since we predict that these conditions are typically common, we recommend the new FOVS method in almost every "real world" case.
0811.3645 | Bryan Daniels | Bryan C. Daniels, Scott Forth, Maxim Y. Sheinin, Michelle D. Wang, and
James P. Sethna | Discontinuities at the DNA supercoiling transition | 11 pages, 5 figures; revised version, with added supplemental
material | null | 10.1103/PhysRevE.80.040901 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While slowly turning the ends of a single molecule of DNA at constant applied
force, a discontinuity was recently observed at the supercoiling transition,
when a small plectoneme is suddenly formed. This can be understood as an abrupt
transition into a state in which stretched and plectonemic DNA coexist. We
argue that there should be discontinuities in both the extension and the torque
at the transition, and provide experimental evidence for both. To predict the
sizes of these discontinuities and how they change with the overall length of
DNA, we organize a theory for the coexisting plectonemic state in terms of four
length-independent parameters. We also test plectoneme theories, including our
own elastic rod simulation, finding discrepancies with experiment that can be
understood in terms of the four coexisting state parameters.
| [
{
"created": "Fri, 21 Nov 2008 22:37:08 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Jul 2009 16:24:48 GMT",
"version": "v2"
}
] | 2011-03-15 | [
[
"Daniels",
"Bryan C.",
""
],
[
"Forth",
"Scott",
""
],
[
"Sheinin",
"Maxim Y.",
""
],
[
"Wang",
"Michelle D.",
""
],
[
"Sethna",
"James P.",
""
]
] | While slowly turning the ends of a single molecule of DNA at constant applied force, a discontinuity was recently observed at the supercoiling transition, when a small plectoneme is suddenly formed. This can be understood as an abrupt transition into a state in which stretched and plectonemic DNA coexist. We argue that there should be discontinuities in both the extension and the torque at the transition, and provide experimental evidence for both. To predict the sizes of these discontinuities and how they change with the overall length of DNA, we organize a theory for the coexisting plectonemic state in terms of four length-independent parameters. We also test plectoneme theories, including our own elastic rod simulation, finding discrepancies with experiment that can be understood in terms of the four coexisting state parameters. |
1702.01265 | H\'el\`ene Bouvrais | H\'el\`ene Bouvrais, Laurent Chesneau, Sylvain Pastezeur, Marie
Delattre, Jacques P\'ecr\'eaux | LET-99-dependent spatial restriction of active force generators makes
spindle's position robust | null | null | null | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During the asymmetric division of the Caenorhabditis elegans nematode zygote,
the polarity cues distribution and daughter cell fates depend on the correct
positioning of the mitotic spindle, which results from both centering and
cortical pulling forces. Revealed by anaphase spindle rocking, these pulling
forces are regulated by the force generator dynamics, which are in turn
a consequence of mitotic progression. We found a novel, additional regulation of
these forces by the spindle position. It controls astral microtubule
availability at the cortex, on which the active force generators can pull.
Importantly, this positional control relies on the polarity dependent LET-99
cortical band, which restricts or concentrates generators to a posterior
crescent. We ascribed this control to the microtubule dynamics at the cortex.
Indeed, in mapping the cortical contacts, we found a correlation between the
centrosome-cortex distance and the microtubule contact density. In turn, it
modulates pulling force generator activity. We modelled this control,
predicting and experimentally validating that the posterior crescent extent
controlled where the anaphase oscillations started, in addition to mitotic
progression. Finally, we propose that spatially restricting force generators to
a posterior crescent sets the spindle's final position, reflecting polarity
through the LET-99 dependent restriction of force generators to a posterior
crescent. This regulation superimposes that of force generator processivity.
This novel control confers a low dependence on the exact numbers or dynamics of
microtubules and active force generators, provided that they exceed the threshold
needed for posterior displacement. Interestingly, this robustness originates in
cell mechanics rather than biochemical networks.
| [
{
"created": "Sat, 4 Feb 2017 11:06:25 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Apr 2017 16:02:11 GMT",
"version": "v2"
},
{
"created": "Wed, 24 May 2017 13:50:09 GMT",
"version": "v3"
},
{
"created": "Thu, 5 Apr 2018 14:02:06 GMT",
"version": "v4"
}
] | 2018-04-06 | [
[
"Bouvrais",
"Hélène",
""
],
[
"Chesneau",
"Laurent",
""
],
[
"Pastezeur",
"Sylvain",
""
],
[
"Delattre",
"Marie",
""
],
[
"Pécréaux",
"Jacques",
""
]
] | During the asymmetric division of the Caenorhabditis elegans nematode zygote, the polarity cues distribution and daughter cell fates depend on the correct positioning of the mitotic spindle, which results from both centering and cortical pulling forces. Revealed by anaphase spindle rocking, these pulling forces are regulated by the force generator dynamics, which are in turn consequent of mitotic progression. We found a novel, additional, regulation of these forces by the spindle position. It controls astral microtubule availability at the cortex, on which the active force generators can pull. Importantly, this positional control relies on the polarity dependent LET-99 cortical band, which restricts or concentrates generators to a posterior crescent. We ascribed this control to the microtubule dynamics at the cortex. Indeed, in mapping the cortical contacts, we found a correlation between the centrosome-cortex distance and the microtubule contact density. In turn, it modulates pulling force generator activity. We modelled this control, predicting and experimentally validating that the posterior crescent extent controlled where the anaphase oscillations started, in addition to mitotic progression. Finally, we propose that spatially restricting force generator to a posterior crescent sets the spindle's final position, reflecting polarity through the LET-99 dependent restriction of force generators to a posterior crescent. This regulation superimposes that of force generator processivity. This novel control confers a low dependence on microtubule and active force generator exact numbers or dynamics, provided that they exceed the threshold needed for posterior displacement. Interestingly, this robustness originates in cell mechanics rather than biochemical networks. |
2402.14641 | Justin Wood | Justin N. Wood, Tomer D. Ullman, Brian W. Wood, Elizabeth S. Spelke,
Samantha M. W. Wood | Object permanence in newborn chicks is robust against opposing evidence | 10 pages, 4 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Newborn animals have advanced perceptual skills at birth, but the nature of
this initial knowledge is unknown. Is initial knowledge flexible, continuously
adapting to the statistics of experience? Or can initial knowledge be rigid and
robust to change, even in the face of opposing evidence? We address this
question through controlled-rearing experiments on newborn chicks. First, we
reared chicks in an impoverished virtual world, where objects never occluded
one another, and found that chicks still succeed on object permanence tasks.
Second, we reared chicks in a virtual world in which objects teleported from
one location to another while out of view: an unnatural event that violates the
continuity of object motion. Despite seeing thousands of these violations of
object permanence, and not a single non-violation, the chicks behaved as if
object permanence were true, exhibiting the same behavior as chicks reared with
natural object permanence events. We conclude that object permanence develops
prenatally and is robust to change from opposing evidence.
| [
{
"created": "Thu, 22 Feb 2024 15:39:29 GMT",
"version": "v1"
}
] | 2024-02-23 | [
[
"Wood",
"Justin N.",
""
],
[
"Ullman",
"Tomer D.",
""
],
[
"Wood",
"Brian W.",
""
],
[
"Spelke",
"Elizabeth S.",
""
],
[
"Wood",
"Samantha M. W.",
""
]
] | Newborn animals have advanced perceptual skills at birth, but the nature of this initial knowledge is unknown. Is initial knowledge flexible, continuously adapting to the statistics of experience? Or can initial knowledge be rigid and robust to change, even in the face of opposing evidence? We address this question through controlled-rearing experiments on newborn chicks. First, we reared chicks in an impoverished virtual world, where objects never occluded one another, and found that chicks still succeed on object permanence tasks. Second, we reared chicks in a virtual world in which objects teleported from one location to another while out of view: an unnatural event that violates the continuity of object motion. Despite seeing thousands of these violations of object permanence, and not a single non-violation, the chicks behaved as if object permanence were true, exhibiting the same behavior as chicks reared with natural object permanence events. We conclude that object permanence develops prenatally and is robust to change from opposing evidence. |
1403.5148 | H Frost | H. Robert Frost, Zhigang Li and Jason H. Moore | Principal component gene set enrichment (PCGSE) | null | BioData Mining 2015, 8:25 | 10.1186/s13040-015-0059-z | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Although principal component analysis (PCA) is widely used for
the dimensional reduction of biomedical data, interpretation of PCA results
remains daunting. Most existing methods attempt to explain each principal
component (PC) in terms of a small number of variables by generating
approximate PCs with few non-zero loadings. Although useful when just a few
variables dominate the population PCs, these methods are often inadequate for
characterizing the PCs of high-dimensional genomic data. For genomic data,
reproducible and biologically meaningful PC interpretation requires methods
based on the combined signal of functionally related sets of genes. While gene
set testing methods have been widely used in supervised settings to quantify
the association of groups of genes with clinical outcomes, these methods have
seen only limited application for testing the enrichment of gene sets relative
to sample PCs. Results: We describe a novel approach, principal component gene
set enrichment (PCGSE), for computing the statistical association between gene
sets and the PCs of genomic data. The PCGSE method performs a two-stage
competitive gene set test using the correlation between each gene and each PC
as the gene-level test statistic with flexible choice of both the gene set test
statistic and the method used to compute the null distribution of the gene set
statistic. Using simulated data with simulated gene sets and real gene
expression data with curated gene sets, we demonstrate that biologically
meaningful and computationally efficient results can be obtained from a simple
parametric version of the PCGSE method that performs a correlation-adjusted
two-sample t-test between the gene-level test statistics for gene set members
and genes not in the set. Availability:
http://cran.r-project.org/web/packages/PCGSE/index.html Contact:
rob.frost@dartmouth.edu or jason.h.moore@dartmouth.edu
| [
{
"created": "Thu, 20 Mar 2014 14:37:22 GMT",
"version": "v1"
}
] | 2015-08-24 | [
[
"Frost",
"H. Robert",
""
],
[
"Li",
"Zhigang",
""
],
[
"Moore",
"Jason H.",
""
]
] | Motivation: Although principal component analysis (PCA) is widely used for the dimensional reduction of biomedical data, interpretation of PCA results remains daunting. Most existing methods attempt to explain each principal component (PC) in terms of a small number of variables by generating approximate PCs with few non-zero loadings. Although useful when just a few variables dominate the population PCs, these methods are often inadequate for characterizing the PCs of high-dimensional genomic data. For genomic data, reproducible and biologically meaningful PC interpretation requires methods based on the combined signal of functionally related sets of genes. While gene set testing methods have been widely used in supervised settings to quantify the association of groups of genes with clinical outcomes, these methods have seen only limited application for testing the enrichment of gene sets relative to sample PCs. Results: We describe a novel approach, principal component gene set enrichment (PCGSE), for computing the statistical association between gene sets and the PCs of genomic data. The PCGSE method performs a two-stage competitive gene set test using the correlation between each gene and each PC as the gene-level test statistic with flexible choice of both the gene set test statistic and the method used to compute the null distribution of the gene set statistic. Using simulated data with simulated gene sets and real gene expression data with curated gene sets, we demonstrate that biologically meaningful and computationally efficient results can be obtained from a simple parametric version of the PCGSE method that performs a correlation-adjusted two-sample t-test between the gene-level test statistics for gene set members and genes not in the set. Availability: http://cran.r-project.org/web/packages/PCGSE/index.html Contact: rob.frost@dartmouth.edu or jason.h.moore@dartmouth.edu |
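The two-stage competitive test described above can be sketched in a few lines. This is a simplified illustration only: it uses a plain two-sample t-test between gene-level correlations, omitting the paper's correlation adjustment, and the toy matrix and parameter names are invented; the R package linked in the abstract is the authoritative implementation.

```python
import numpy as np
from scipy import stats

def pcgse_t_test(X, set_mask, pc_index=0):
    """Two-stage gene set test sketch: correlate each gene (column of X)
    with a principal component, then compare member vs non-member
    gene-level statistics with a two-sample t-test.
    X: (samples x genes) matrix; set_mask: boolean array over genes."""
    # Stage 1: gene-level statistic = correlation of each gene with the PC.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCs via SVD
    pc_scores = U[:, pc_index] * s[pc_index]
    gene_stats = np.array([np.corrcoef(Xc[:, j], pc_scores)[0, 1]
                           for j in range(X.shape[1])])
    # Stage 2: competitive test -- gene set members vs. non-members.
    t, p = stats.ttest_ind(gene_stats[set_mask], gene_stats[~set_mask])
    return t, p

# Toy example: genes 0-9 jointly drive the first PC.
rng = np.random.default_rng(0)
signal = rng.normal(size=(50, 1))
X = rng.normal(size=(50, 100))
X[:, :10] += 3 * signal          # correlated member genes
mask = np.zeros(100, dtype=bool)
mask[:10] = True
t, p = pcgse_t_test(X, mask)
```

On this toy matrix the ten member genes load strongly on the first PC, so the test statistic is large in magnitude and the p-value small (the sign of t depends on the arbitrary sign of the PC).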
2309.10385 | Kumar Neelabh | Kumar Neelabh, Vishnu Sreekumar | From sound to meaning in the auditory cortex: A neuronal representation
and classification analysis | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The neural mechanisms underlying the comprehension of meaningful sounds are
yet to be fully understood. While previous research has shown that the auditory
cortex can classify auditory stimuli into distinct semantic categories, the
specific contributions of the primary (A1) and the secondary auditory cortex
(A2) to this process are not well understood. We used songbirds as a model
species, and analyzed their neural responses as they listened to their entire
vocal repertoire (\(\sim \)10 types of vocalizations). We first demonstrate
that the distances between the call types in the neural representation spaces
of A1 and A2 are correlated with their respective distances in the acoustic
feature space. Then, we show that while the neural activity in both A1 and A2
is equally informative of the acoustic category of the vocalizations, A2 is
significantly more informative of the semantic category of those vocalizations.
Additionally, we show that the semantic categories are more separated in A2.
These findings suggest that as the incoming signal moves downstream within the
auditory cortex, its acoustic information is preserved, whereas its semantic
information is enhanced.
| [
{
"created": "Tue, 19 Sep 2023 07:36:53 GMT",
"version": "v1"
}
] | 2023-09-20 | [
[
"Neelabh",
"Kumar",
""
],
[
"Sreekumar",
"Vishnu",
""
]
] | The neural mechanisms underlying the comprehension of meaningful sounds are yet to be fully understood. While previous research has shown that the auditory cortex can classify auditory stimuli into distinct semantic categories, the specific contributions of the primary (A1) and the secondary auditory cortex (A2) to this process are not well understood. We used songbirds as a model species, and analyzed their neural responses as they listened to their entire vocal repertoire (\(\sim \)10 types of vocalizations). We first demonstrate that the distances between the call types in the neural representation spaces of A1 and A2 are correlated with their respective distances in the acoustic feature space. Then, we show that while the neural activity in both A1 and A2 is equally informative of the acoustic category of the vocalizations, A2 is significantly more informative of the semantic category of those vocalizations. Additionally, we show that the semantic categories are more separated in A2. These findings suggest that as the incoming signal moves downstream within the auditory cortex, its acoustic information is preserved, whereas its semantic information is enhanced. |
1411.5340 | Michelle Greene | Michelle R. Greene, Christopher Baldassano, Andre Esteva, Diane M.
Beck and Li Fei-Fei | Affordances Provide a Fundamental Categorization Principle for Visual
Scenes | null | null | null | null | q-bio.NC cs.CV cs.HC | http://creativecommons.org/licenses/by-nc-sa/3.0/ | How do we know that a kitchen is a kitchen by looking? Relatively little is
known about how we conceptualize and categorize different visual environments.
Traditional models of visual perception posit that scene categorization is
achieved through the recognition of a scene's objects, yet these models cannot
account for the mounting evidence that human observers are relatively
insensitive to the local details in an image. Psychologists have long theorized
that the affordances, or actionable possibilities of a stimulus are pivotal to
its perception. To what extent are scene categories created from similar
affordances? Using a large-scale experiment with hundreds of scene categories,
we show that the activities afforded by a visual scene provide a fundamental
categorization principle. Affordance-based similarity explained the majority of
the structure in the human scene categorization patterns, outperforming
alternative similarities based on objects or visual features. When all models
were combined, affordances provided the majority of the predictive power in the
combined model, and nearly half of the total explained variance was captured
only by affordances. These results challenge many existing models of high-level
visual perception, and provide immediately testable hypotheses for the
functional organization of the human perceptual system.
| [
{
"created": "Wed, 19 Nov 2014 19:58:59 GMT",
"version": "v1"
}
] | 2014-11-20 | [
[
"Greene",
"Michelle R.",
""
],
[
"Baldassano",
"Christopher",
""
],
[
"Esteva",
"Andre",
""
],
[
"Beck",
"Diane M.",
""
],
[
"Fei-Fei",
"Li",
""
]
] | How do we know that a kitchen is a kitchen by looking? Relatively little is known about how we conceptualize and categorize different visual environments. Traditional models of visual perception posit that scene categorization is achieved through the recognition of a scene's objects, yet these models cannot account for the mounting evidence that human observers are relatively insensitive to the local details in an image. Psychologists have long theorized that the affordances, or actionable possibilities of a stimulus are pivotal to its perception. To what extent are scene categories created from similar affordances? Using a large-scale experiment with hundreds of scene categories, we show that the activities afforded by a visual scene provide a fundamental categorization principle. Affordance-based similarity explained the majority of the structure in the human scene categorization patterns, outperforming alternative similarities based on objects or visual features. When all models were combined, affordances provided the majority of the predictive power in the combined model, and nearly half of the total explained variance was captured only by affordances. These results challenge many existing models of high-level visual perception, and provide immediately testable hypotheses for the functional organization of the human perceptual system.
1811.11414 | Mohd Suhail Rizvi | Mohd Suhail Rizvi | Speed-detachment tradeoff and its effect on track bound transport of
single motor protein | null | null | null | null | q-bio.SC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The transport of cargoes in biological cells is primarily driven by motor
proteins on filamentous protein tracks. The stochastic nature of the
motion of a motor protein often leads to its spontaneous detachment from the
track. Using the available experimental data, we demonstrate a tradeoff between
the speed of the motor and its rate of spontaneous detachment from the track.
Further, it is also shown that this speed-detachment relation follows a power
law where its exponent dictates the nature of the motor protein processivity.
We utilize this information to study the motion of a motor protein on a track using
a random-walk model. We obtain the average distance travelled in fixed duration
and average time required for covering a given distance by the motor protein.
These analyses reveal a non-monotonic dependence of the motor protein's
transport on its speed and, therefore, optimal motor speeds can be identified for the
time and distance controlled conditions.
| [
{
"created": "Wed, 28 Nov 2018 07:14:44 GMT",
"version": "v1"
}
] | 2018-11-29 | [
[
"Rizvi",
"Mohd Suhail",
""
]
] | The transport of cargoes in biological cells is primarily driven by motor proteins on filamentous protein tracks. The stochastic nature of the motion of a motor protein often leads to its spontaneous detachment from the track. Using the available experimental data, we demonstrate a tradeoff between the speed of the motor and its rate of spontaneous detachment from the track. Further, it is also shown that this speed-detachment relation follows a power law where its exponent dictates the nature of the motor protein processivity. We utilize this information to study the motion of a motor protein on a track using a random-walk model. We obtain the average distance travelled in fixed duration and average time required for covering a given distance by the motor protein. These analyses reveal a non-monotonic dependence of the motor protein's transport on its speed and, therefore, optimal motor speeds can be identified for the time and distance controlled conditions.
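The tradeoff and its non-monotonic consequence can be illustrated with a minimal Monte Carlo sketch. The power-law parameters `A` and `alpha`, the chosen speeds, and the no-reattachment assumption are illustrative stand-ins, not values from the paper.

```python
import random

def mean_distance_fixed_time(v, T, A=0.01, alpha=2.0, n_runs=2000, seed=7):
    """Monte Carlo estimate of the average distance a motor covers in a
    fixed observation time T, given a power-law speed-detachment tradeoff:
    detachment rate eps(v) = A * v**alpha. A detached motor is assumed
    to stay off the track (no reattachment)."""
    rng = random.Random(seed)
    eps = A * v ** alpha
    total = 0.0
    for _ in range(n_runs):
        lifetime = rng.expovariate(eps)   # time until spontaneous detachment
        total += v * min(lifetime, T)     # walk at speed v until detach or T
    return total / n_runs

# Scan speeds: a slow motor covers little ground, a fast one detaches
# quickly, so an optimal intermediate speed emerges (non-monotonicity).
speeds = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
distances = [mean_distance_fixed_time(v, T=10.0) for v in speeds]
```

With these toy parameters the mean distance rises and then falls as speed increases, peaking at an intermediate speed, mirroring the optimal speeds the abstract describes for the time-controlled condition.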
2104.08961 | Fangfang Xia | Fangfang Xia, Jonathan Allen, Prasanna Balaprakash, Thomas Brettin,
Cristina Garcia-Cardona, Austin Clyde, Judith Cohn, James Doroshow, Xiaotian
Duan, Veronika Dubinkina, Yvonne Evrard, Ya Ju Fan, Jason Gans, Stewart He,
Pinyi Lu, Sergei Maslov, Alexander Partin, Maulik Shukla, Eric Stahlberg,
Justin M. Wozniak, Hyunseung Yoo, George Zaki, Yitan Zhu, Rick Stevens | A cross-study analysis of drug response prediction in cancer cell lines | Accepted by Briefings in Bioinformatics | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To enable personalized cancer treatment, machine learning models have been
developed to predict drug response as a function of tumor and drug features.
However, most algorithm development efforts have relied on cross validation
within a single study to assess model accuracy. While an essential first step,
cross validation within a biological data set typically provides an overly
optimistic estimate of the prediction performance on independent test sets. To
provide a more rigorous assessment of model generalizability between different
studies, we use machine learning to analyze five publicly available cell
line-based data sets: NCI60, CTRP, GDSC, CCLE and gCSI. Based on observed
experimental variability across studies, we explore estimates of prediction
upper bounds. We report performance results of a variety of machine learning
models, with a multitasking deep neural network achieving the best cross-study
generalizability. By multiple measures, models trained on CTRP yield the most
accurate predictions on the remaining testing data, and gCSI is the most
predictable among the cell line data sets included in this study. With these
experiments and further simulations on partial data, two lessons emerge: (1)
differences in viability assays can limit model generalizability across
studies, and (2) drug diversity, more than tumor diversity, is crucial for
raising model generalizability in preclinical screening.
| [
{
"created": "Sun, 18 Apr 2021 21:40:51 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Aug 2021 19:31:52 GMT",
"version": "v2"
}
] | 2021-08-17 | [
[
"Xia",
"Fangfang",
""
],
[
"Allen",
"Jonathan",
""
],
[
"Balaprakash",
"Prasanna",
""
],
[
"Brettin",
"Thomas",
""
],
[
"Garcia-Cardona",
"Cristina",
""
],
[
"Clyde",
"Austin",
""
],
[
"Cohn",
"Judith",
""
... | To enable personalized cancer treatment, machine learning models have been developed to predict drug response as a function of tumor and drug features. However, most algorithm development efforts have relied on cross validation within a single study to assess model accuracy. While an essential first step, cross validation within a biological data set typically provides an overly optimistic estimate of the prediction performance on independent test sets. To provide a more rigorous assessment of model generalizability between different studies, we use machine learning to analyze five publicly available cell line-based data sets: NCI60, CTRP, GDSC, CCLE and gCSI. Based on observed experimental variability across studies, we explore estimates of prediction upper bounds. We report performance results of a variety of machine learning models, with a multitasking deep neural network achieving the best cross-study generalizability. By multiple measures, models trained on CTRP yield the most accurate predictions on the remaining testing data, and gCSI is the most predictable among the cell line data sets included in this study. With these experiments and further simulations on partial data, two lessons emerge: (1) differences in viability assays can limit model generalizability across studies, and (2) drug diversity, more than tumor diversity, is crucial for raising model generalizability in preclinical screening. |
2110.01096 | Higor S. Monteiro | Higor S. Monteiro and Ian Leifer and Saulo D. S. Reis and Jos\'e S.
Andrade, Jr. and Hernan A. Makse | Fast algorithm to identify cluster synchrony through fibration
symmetries in large information-processing networks | 13 pages, 7 figures | null | 10.1063/5.0066741 | null | q-bio.MN cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies revealed an important interplay between the detailed structure
of fibration symmetric circuits and the functionality of biological and
non-biological networks within which they have been identified. The presence of
these circuits in complex networks is directly related to the phenomenon of
cluster synchronization, which produces patterns of synchronized groups of
nodes. Here we present a fast, memory-efficient algorithm to identify
fibration symmetries over information-processing networks. This algorithm is
especially suitable for large and sparse networks since it runs in
$O(M\log N)$ time and requires $O(M+N)$ memory, where $N$
and $M$ are the number of nodes and edges in the network, respectively. We
propose a modification on the so-called refinement paradigm to identify
circuits symmetrical to information flow (i.e., fibers) by finding the coarsest
refinement partition over the network. Finally, we show that the presented
algorithm provides an optimal procedure for identifying fibers, improving on
the current approaches used in the literature.
| [
{
"created": "Sun, 3 Oct 2021 20:24:52 GMT",
"version": "v1"
},
{
"created": "Sun, 10 Oct 2021 21:00:43 GMT",
"version": "v2"
}
] | 2022-04-06 | [
[
"Monteiro",
"Higor S.",
""
],
[
"Leifer",
"Ian",
""
],
[
"Reis",
"Saulo D. S.",
""
],
[
"Andrade,",
"José S.",
"Jr."
],
[
"Makse",
"Hernan A.",
""
]
] | Recent studies revealed an important interplay between the detailed structure of fibration symmetric circuits and the functionality of biological and non-biological networks within which they have been identified. The presence of these circuits in complex networks is directly related to the phenomenon of cluster synchronization, which produces patterns of synchronized groups of nodes. Here we present a fast, memory-efficient algorithm to identify fibration symmetries over information-processing networks. This algorithm is especially suitable for large and sparse networks since it runs in $O(M\log N)$ time and requires $O(M+N)$ memory, where $N$ and $M$ are the number of nodes and edges in the network, respectively. We propose a modification on the so-called refinement paradigm to identify circuits symmetrical to information flow (i.e., fibers) by finding the coarsest refinement partition over the network. Finally, we show that the presented algorithm provides an optimal procedure for identifying fibers, improving on the current approaches used in the literature.
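A naive version of the refinement paradigm mentioned above can be sketched as follows. This is the quadratic-time textbook iteration, not the $O(M\log N)$ partition-refinement algorithm the paper proposes, and the toy network is invented for illustration.

```python
from collections import defaultdict

def fiber_partition(nodes, edges):
    """Sketch of the refinement paradigm: repeatedly split classes until
    every two nodes in a class receive the same multiset of class labels
    over their in-neighbours. The stable (coarsest) partition groups nodes
    that are synchronous under information flow (the 'fibers').
    nodes: iterable of hashable ids; edges: list of (src, dst) pairs."""
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
    color = {n: 0 for n in nodes}            # start from a single class
    while True:
        # Signature = current colour plus sorted in-neighbour colours.
        sig = {n: (color[n], tuple(sorted(color[u] for u in preds[n])))
               for n in nodes}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_color = {n: relabel[sig[n]] for n in nodes}
        if new_color == color:               # fixed point: coarsest refinement
            return new_color
        color = new_color

# Toy circuit: a drives b and c identically, so b and c form one fiber.
nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("a", "c"), ("b", "d")]
fibers = fiber_partition(nodes, edges)
```

Nodes `b` and `c` receive identical input trees from `a` and end up in the same class, while `a` (no inputs) and `d` (fed by `b`) each get their own.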
q-bio/0510025 | Fernand Hayot | F. Hayot and C. Jayaprakash | NF-kappa B oscillations and cell-to-cell variability | 14 pages, 9 figures, 1 table | null | null | null | q-bio.QM q-bio.CB | null | Oscillations of the transcription factor NF-kappa B have been observed
experimentally both for cell populations by Hoffmann et al. (2002) and for
single cells by Nelson et al. (2004). The latter experiments show significant
cell-to-cell variability. We use the (stochastic) Gillespie algorithm to study
the set of reactions proposed by Hoffmann et al. and variants thereof. The
amounts of cellular NF-kappa B and activated kinase IKK are treated as external
parameters. We show that intrinsic fluctuations are small in a model with
strong transcription, while they are large for weak transcription. Extrinsic
noise can be significant: fluctuations in NF-kappa B affect mainly the
amplitude of oscillations, whereas fluctuations in activated IKK affect both
their amplitude and period. The latter results are in qualitative agreement
with the observations of Nelson et al.
| [
{
"created": "Thu, 13 Oct 2005 16:53:32 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Hayot",
"F.",
""
],
[
"Jayaprakash",
"C.",
""
]
] | Oscillations of the transcription factor NF-kappa B have been observed experimentally both for cell populations by Hoffmann et al. (2002) and for single cells by Nelson et al. (2004). The latter experiments show significant cell-to-cell variability. We use the (stochastic) Gillespie algorithm to study the set of reactions proposed by Hoffmann et al. and variants thereof. The amounts of cellular NF-kappa B and activated kinase IKK are treated as external parameters. We show that intrinsic fluctuations are small in a model with strong transcription, while they are large for weak transcription. Extrinsic noise can be significant: fluctuations in NF-kappa B affect mainly the amplitude of oscillations, whereas fluctuations in activated IKK affect both their amplitude and period. The latter results are in qualitative agreement with the observations of Nelson et al. |
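The (stochastic) Gillespie algorithm used above is easy to sketch on a minimal birth-death system; the NF-kappa B reaction network itself is much larger, and the rate constants here are illustrative only.

```python
import random

def gillespie_birth_death(k_prod, k_deg, x0, t_max, seed=1):
    """Minimal Gillespie SSA for a birth-death process
    (0 -> X at rate k_prod; X -> 0 at rate k_deg * x),
    the same stochastic scheme used to simulate reaction networks."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_max:
        a1 = k_prod            # propensity of production
        a2 = k_deg * x         # propensity of degradation
        a0 = a1 + a2
        if a0 == 0:
            break
        # Time to the next reaction is exponential with rate a0.
        t += rng.expovariate(a0)
        # Choose which reaction fires, proportional to its propensity.
        if rng.random() * a0 < a1:
            x += 1
        else:
            x -= 1
        trajectory.append((t, x))
    return trajectory

traj = gillespie_birth_death(k_prod=10.0, k_deg=0.1, x0=0, t_max=200.0)
final_x = traj[-1][1]
```

At stationarity the copy number fluctuates around k_prod/k_deg = 100 with Poisson-like spread, so individual runs show the intrinsic noise the abstract discusses.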
2309.04881 | Chen Huang | Chen Huang, Judy S. Kim, Angus I. Kirkland | Cryo-Electron Ptychography: Applications and Potential in Biological
Characterisation | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | There is a clear need for developments in characterisation techniques that
provide detailed information about structure-function relationships in biology.
Using electron microscopy to achieve high resolution while maintaining a broad
field of view remains a challenge, particularly for radiation sensitive
specimens where the signal-to-noise ratio required to maintain structural
integrity is limited by low electron fluence. In this review, we explore the
potential of cryogenic electron ptychography as an alternative method for
characterisation of biological systems under low fluence conditions. This
method, with increased information content from multiple sampled regions of
interest, potentially allows 3D reconstruction with far fewer particles than
required in conventional cryo-electron microscopy. This is important for
achieving higher resolution for systems where distributions of homogeneous
single particles are difficult to obtain. We discuss the progress, limitations
and potential areas for future development of this approach for both single
particle analysis and in applications to heterogeneous large objects.
| [
{
"created": "Sat, 9 Sep 2023 21:57:49 GMT",
"version": "v1"
}
] | 2023-09-12 | [
[
"Huang",
"Chen",
""
],
[
"Kim",
"Judy S.",
""
],
[
"Kirkland",
"Angus I.",
""
]
] | There is a clear need for developments in characterisation techniques that provide detailed information about structure-function relationships in biology. Using electron microscopy to achieve high resolution while maintaining a broad field of view remains a challenge, particularly for radiation sensitive specimens where the signal-to-noise ratio required to maintain structural integrity is limited by low electron fluence. In this review, we explore the potential of cryogenic electron ptychography as an alternative method for characterisation of biological systems under low fluence conditions. This method, with increased information content from multiple sampled regions of interest, potentially allows 3D reconstruction with far fewer particles than required in conventional cryo-electron microscopy. This is important for achieving higher resolution for systems where distributions of homogeneous single particles are difficult to obtain. We discuss the progress, limitations and potential areas for future development of this approach for both single particle analysis and in applications to heterogeneous large objects.
1101.4898 | Eivind T{\o}stesen | Geir K. Sandve, Sveinung Gundersen, Halfdan Rydbeck, Ingrid K. Glad,
Lars Holden, Marit Holden, Knut Liest{\o}l, Trevor Clancy, Egil Ferkingstad,
Morten Johansen, Vegard Nygaard, Eivind T{\o}stesen, Arnoldo Frigessi and
Eivind Hovig | The Genomic HyperBrowser: inferential genomics at the sequence level | null | Genome Biology 2010, 11:R121 | 10.1186/gb-2010-11-12-r121 | null | q-bio.GN | http://creativecommons.org/licenses/by/3.0/ | The immense increase in the generation of genomic scale data poses an unmet
analytical challenge, due to a lack of established methodology with the
required flexibility and power. We propose a first principled approach to
statistical analysis of sequence-level genomic information. We provide a
growing collection of generic biological investigations that query pairwise
relations between tracks, represented as mathematical objects, along the
genome. The Genomic HyperBrowser implements the approach and is available at
http://hyperbrowser.uio.no.
| [
{
"created": "Tue, 25 Jan 2011 18:57:24 GMT",
"version": "v1"
}
] | 2011-01-26 | [
[
"Sandve",
"Geir K.",
""
],
[
"Gundersen",
"Sveinung",
""
],
[
"Rydbeck",
"Halfdan",
""
],
[
"Glad",
"Ingrid K.",
""
],
[
"Holden",
"Lars",
""
],
[
"Holden",
"Marit",
""
],
[
"Liestøl",
"Knut",
""
],
[
... | The immense increase in the generation of genomic scale data poses an unmet analytical challenge, due to a lack of established methodology with the required flexibility and power. We propose a first principled approach to statistical analysis of sequence-level genomic information. We provide a growing collection of generic biological investigations that query pairwise relations between tracks, represented as mathematical objects, along the genome. The Genomic HyperBrowser implements the approach and is available at http://hyperbrowser.uio.no. |
2302.02650 | Jeyashree Krishnan | Jeyashree Krishnan, Zeyu Lian, Pieter E. Oomen, Xiulan He, Soodabeh
Majdi, Andreas Schuppert, Andrew Ewing | Tree-Based Learning on Amperometric Time Series Data Demonstrates High
Accuracy for Classification | 56 pages, 11 figures | null | null | null | q-bio.SC cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Elucidating exocytosis processes provides insights into cellular
neurotransmission mechanisms, and may have potential in neurodegenerative
disease research. Amperometry is an established electrochemical method for the
detection of neurotransmitters released from and stored inside cells. An
important aspect of the amperometry method is the sub-millisecond temporal
resolution of the current recordings which leads to several hundreds of
gigabytes of high-quality data. In this study, we present a universal method
for the classification of diverse amperometric datasets using
data-driven approaches in computational science. We demonstrate a very high
prediction accuracy (greater than or equal to 95%). This includes an end-to-end
systematic machine learning workflow for amperometric time series datasets
consisting of pre-processing; feature extraction; model identification;
training and testing; followed by feature importance evaluation - all
implemented. We tested the method on heterogeneous amperometric time series
datasets generated using different experimental approaches, chemical
stimulations, electrode types, and varying recording times. We identified a
certain overarching set of common features across these datasets which enables
accurate predictions. Further, we showed that information relevant for the
classification of amperometric traces are neither in the spiky segments alone,
nor can it be retrieved from just the temporal structure of spikes. In fact,
the transients between spikes and the trace baselines carry essential
information for a successful classification, thereby strongly demonstrating
that an effective feature representation of amperometric time series requires
the full time series. To our knowledge, this is one of the first studies that
propose a scheme for machine learning, and in particular, supervised learning
on full amperometry time series data.
| [
{
"created": "Mon, 6 Feb 2023 09:44:53 GMT",
"version": "v1"
}
] | 2023-02-07 | [
[
"Krishnan",
"Jeyashree",
""
],
[
"Lian",
"Zeyu",
""
],
[
"Oomen",
"Pieter E.",
""
],
[
"He",
"Xiulan",
""
],
[
"Majdi",
"Soodabeh",
""
],
[
"Schuppert",
"Andreas",
""
],
[
"Ewing",
"Andrew",
""
]
] | Elucidating exocytosis processes provides insights into cellular neurotransmission mechanisms, and may have potential in neurodegenerative disease research. Amperometry is an established electrochemical method for the detection of neurotransmitters released from and stored inside cells. An important aspect of the amperometry method is the sub-millisecond temporal resolution of the current recordings which leads to several hundreds of gigabytes of high-quality data. In this study, we present a universal method for the classification of diverse amperometric datasets using data-driven approaches in computational science. We demonstrate a very high prediction accuracy (greater than or equal to 95%). This includes an end-to-end systematic machine learning workflow for amperometric time series datasets consisting of pre-processing; feature extraction; model identification; training and testing; followed by feature importance evaluation - all implemented. We tested the method on heterogeneous amperometric time series datasets generated using different experimental approaches, chemical stimulations, electrode types, and varying recording times. We identified a certain overarching set of common features across these datasets which enables accurate predictions. Further, we showed that information relevant for the classification of amperometric traces is neither in the spiky segments alone, nor can it be retrieved from just the temporal structure of spikes. In fact, the transients between spikes and the trace baselines carry essential information for a successful classification, thereby strongly demonstrating that an effective feature representation of amperometric time series requires the full time series. To our knowledge, this is one of the first studies that propose a scheme for machine learning, and in particular, supervised learning on full amperometry time series data.
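The idea that whole-trace features outside the spikes can drive classification can be illustrated with a tiny tree-based sketch. Everything here is invented for illustration: a single hand-rolled decision stump stands in for the study's tree ensemble, the summary features are far simpler than the paper's feature set, and the synthetic traces merely mimic the spikes-plus-baseline structure of amperometric data.

```python
import numpy as np

def summary_features(trace):
    """Whole-trace summary features (mean, sd, max, skewness-like stat);
    illustrative only -- the study's actual feature set is richer."""
    m, s = trace.mean(), trace.std()
    return [m, s, trace.max(), ((trace - m) ** 3).mean() / (s ** 3 + 1e-12)]

def best_stump(X, y):
    """Exhaustive one-split decision stump, the simplest tree-based
    classifier: scan every feature and threshold for the best accuracy."""
    best = (0, 0.0, 0.5)                    # (feature, threshold, accuracy)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            pred = (X[:, j] > thr).astype(int)
            acc = max((pred == y).mean(), (pred != y).mean())
            if acc > best[2]:
                best = (j, thr, acc)
    return best

# Synthetic two-class traces: both classes share similar spikes, but class 1
# adds a slow baseline drift, so the discriminative signal lies outside the
# spiky segments -- echoing the finding quoted above.
rng = np.random.default_rng(3)
X, y = [], []
for label in (0, 1):
    for _ in range(60):
        trace = rng.normal(0.0, 0.1, 500)
        for pos in rng.integers(0, 495, size=5):
            trace[pos:pos + 5] += 2.0       # identical spikes in both classes
        if label == 1:
            trace += 0.5 * np.linspace(0, 1, 500)   # baseline drift
        X.append(summary_features(trace))
        y.append(label)
X, y = np.array(X), np.array(y)
feat, thr, acc = best_stump(X, y)
```

The stump separates the classes from a single whole-trace feature even though the spikes are identical; note `acc` here is resubstitution (training) accuracy, which suffices for the illustration.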
1906.09835 | Yulia Sandamirskaya | Hajar Asgari, BabakMazloom-Nezhad Maybodi, Raphaela Kreiser, and Yulia
Sandamirskaya | Digital Multiplier-less Event-Driven Spiking Neural Network Architecture
for Learning a Context-Dependent Task | null | null | null | null | q-bio.NC cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuromorphic engineers aim to develop event-based spiking neural networks
(SNNs) in hardware. These SNNs more closely resemble the dynamics of biological
neurons than today's artificial neural networks and achieve higher efficiency thanks to
the event-based, asynchronous nature of processing. Learning in SNNs is more
challenging, however. Since conventional supervised learning methods cannot be
ported to SNNs due to the non-differentiable event-based nature of their
activation, learning in SNNs is currently an active research topic.
Reinforcement learning (RL) is a particularly promising method for neuromorphic
implementation, especially in the field of autonomous agents' control, and is
the focus of this work. In particular, in this paper we propose a new digital
multiplier-less hardware implementation of an SNN. We show how this network can
learn stimulus-response associations in a context-dependent task through an RL
mechanism. The task is inspired by biological experiments used to study RL in
animals. The architecture is described using the standard digital design flow
and uses power- and space-efficient cores. We implement the behavioral
experiments using a robot to show that learning in hardware also works in a
closed sensorimotor loop.
| [
{
"created": "Mon, 24 Jun 2019 10:17:16 GMT",
"version": "v1"
}
] | 2019-06-25 | [
[
"Asgari",
"Hajar",
""
],
[
"Maybodi",
"BabakMazloom-Nezhad",
""
],
[
"Kreiser",
"Raphaela",
""
],
[
"Sandamirskaya",
"Yulia",
""
]
] | Neuromorphic engineers aim to develop event-based spiking neural networks (SNNs) in hardware. These SNNs more closely resemble the dynamics of biological neurons than today's artificial neural networks and achieve higher efficiency thanks to the event-based, asynchronous nature of processing. Learning in SNNs is more challenging, however. Since conventional supervised learning methods cannot be ported to SNNs due to the non-differentiable event-based nature of their activation, learning in SNNs is currently an active research topic. Reinforcement learning (RL) is a particularly promising method for neuromorphic implementation, especially in the field of autonomous agents' control, and is the focus of this work. In particular, in this paper we propose a new digital multiplier-less hardware implementation of an SNN. We show how this network can learn stimulus-response associations in a context-dependent task through an RL mechanism. The task is inspired by biological experiments used to study RL in animals. The architecture is described using the standard digital design flow and uses power- and space-efficient cores. We implement the behavioral experiments using a robot to show that learning in hardware also works in a closed sensorimotor loop.
1510.09037 | Krist\'of Tak\'acs | Krist\'of Tak\'acs | Multiple sequence alignment for short sequences | null | null | null | null | q-bio.QM cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple sequence alignment (MSA) has been one of the most important problems
in bioinformatics for decades, and it is still heavily examined by many
mathematicians and biologists. However, mostly because of the practical
motivation of this problem, the research on this topic is focused on aligning
long sequences. It is understandable, since the sequences that need to be
aligned (usually DNA or protein sequences) are generally quite long (e. g., at
least 30-40 characters). Nevertheless, it is a challenging question exactly
where MSA starts to become a really hard problem (since it is known that
MSA is NP-complete [2]), and the key to answering this question is to examine
short sequences. If the optimal alignment for short sequences could be
determined in polynomial time, then these results may help to develop faster or
more accurate heuristic algorithms for aligning long sequences. In this work,
it is shown that for length-1 sequences using arbitrary metric, as well as for
length-2 sequences using unit metric, the optimum of the MSA problem can be
achieved by the trivial alignment.
| [
{
"created": "Fri, 30 Oct 2015 10:16:44 GMT",
"version": "v1"
},
{
"created": "Sun, 15 Nov 2015 09:58:02 GMT",
"version": "v2"
}
] | 2015-11-17 | [
[
"Takács",
"Kristóf",
""
]
] | Multiple sequence alignment (MSA) has been one of the most important problems in bioinformatics for decades, and it is still heavily examined by many mathematicians and biologists. However, mostly because of the practical motivation of this problem, the research on this topic is focused on aligning long sequences. It is understandable, since the sequences that need to be aligned (usually DNA or protein sequences) are generally quite long (e. g., at least 30-40 characters). Nevertheless, it is a challenging question exactly where MSA starts to become a really hard problem (since it is known that MSA is NP-complete [2]), and the key to answering this question is to examine short sequences. If the optimal alignment for short sequences could be determined in polynomial time, then these results may help to develop faster or more accurate heuristic algorithms for aligning long sequences. In this work, it is shown that for length-1 sequences using arbitrary metric, as well as for length-2 sequences using unit metric, the optimum of the MSA problem can be achieved by the trivial alignment.
q-bio/0409002 | Michael C. Mackey | Michael C. Mackey, Chunhua Ou, Laurent Pujo-Menjouet and Jianhong Wu | Periodic Oscillations of Blood Cell Populations in Chronic Myelogenous
Leukemia | 23 pages | null | null | null | q-bio.TO | null | We develop some techniques to prove analytically the existence and stability
of long period oscillations of stem cell populations in the case of periodic
chronic myelogenous leukemia. Such a periodic oscillation $p_\infty $ can be
analytically constructed when the Hill coefficient involved in the nonlinear
feedback is infinite, and we show it is possible to obtain a contractive
returning map (for the semiflow defined by the modeling functional differential
equation) in a closed and convex cone containing $p_\infty $ when the Hill
coefficient is large, and the fixed point of such a contractive map gives the
long period oscillation previously observed both numerically and
experimentally.
| [
{
"created": "Wed, 1 Sep 2004 01:16:25 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Mackey",
"Michael C.",
""
],
[
"Ou",
"Chunhua",
""
],
[
"Pujo-Menjouet",
"Laurent",
""
],
[
"Wu",
"Jianhong",
""
]
] | We develop some techniques to prove analytically the existence and stability of long period oscillations of stem cell populations in the case of periodic chronic myelogenous leukemia. Such a periodic oscillation $p_\infty $ can be analytically constructed when the Hill coefficient involved in the nonlinear feedback is infinite, and we show it is possible to obtain a contractive returning map (for the semiflow defined by the modeling functional differential equation) in a closed and convex cone containing $p_\infty $ when the Hill coefficient is large, and the fixed point of such a contractive map gives the long period oscillation previously observed both numerically and experimentally.
2004.07975 | Uygar S\"umb\"ul | Stephen J. Smith, Michael Hawrylycz, Jean Rossier, Uygar S\"umb\"ul | New Light on Cortical Neuropeptides and Synaptic Network Plasticity | 22 pages, 4 figures, 1 table, to be published in Current Opinion in
Neurobiology | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Neuropeptides, members of a large and evolutionarily ancient family of
proteinaceous cell-cell signaling molecules, are widely recognized as extremely
potent regulators of brain function and behavior. At the cellular level,
neuropeptides are known to act mainly via modulation of ion channel and synapse
function, but functional impacts emerging at the level of complex cortical
synaptic networks have resisted mechanistic analysis. New findings from
single-cell RNA-seq transcriptomics now illuminate intricate patterns of
cortical neuropeptide signaling gene expression and new tools now offer
powerful molecular access to cortical neuropeptide signaling. Here we highlight
some of these new findings and tools, focusing especially on prospects for
experimental and theoretical exploration of peptidergic and synaptic network
interactions underlying cortical function and plasticity.
| [
{
"created": "Thu, 16 Apr 2020 22:01:34 GMT",
"version": "v1"
}
] | 2020-04-20 | [
[
"Smith",
"Stephen J.",
""
],
[
"Hawrylycz",
"Michael",
""
],
[
"Rossier",
"Jean",
""
],
[
"Sümbül",
"Uygar",
""
]
] | Neuropeptides, members of a large and evolutionarily ancient family of proteinaceous cell-cell signaling molecules, are widely recognized as extremely potent regulators of brain function and behavior. At the cellular level, neuropeptides are known to act mainly via modulation of ion channel and synapse function, but functional impacts emerging at the level of complex cortical synaptic networks have resisted mechanistic analysis. New findings from single-cell RNA-seq transcriptomics now illuminate intricate patterns of cortical neuropeptide signaling gene expression and new tools now offer powerful molecular access to cortical neuropeptide signaling. Here we highlight some of these new findings and tools, focusing especially on prospects for experimental and theoretical exploration of peptidergic and synaptic network interactions underlying cortical function and plasticity.
2407.05060 | Nghi Nguyen | Nghi Nguyen, Tao Hou, Enrico Amico, Jingyi Zheng, Huajun Huang, Alan
D. Kaplan, Giovanni Petri, Joaqu\'in Go\~ni, Ralph Kaufmann, Yize Zhao, Duy
Duong-Tran and Li Shen | Volume-optimal persistence homological scaffolds of hemodynamic networks
covary with MEG theta-alpha aperiodic dynamics | The code for our analyses is provided in
https://github.com/ngcaonghi/scaffold_noise | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Higher-order properties of functional magnetic resonance imaging (fMRI)
induced connectivity have been shown to unravel many exclusive topological and
dynamical insights beyond pairwise interactions. Nonetheless, whether these
fMRI-induced higher-order properties play a role in disentangling other
neuroimaging modalities' insights remains largely unexplored and poorly
understood. In this work, by analyzing fMRI data from the Human Connectome
Project Young Adult dataset using persistent homology, we discovered that the
volume-optimal persistence homological scaffolds of fMRI-based functional
connectomes exhibited conservative topological reconfigurations from the
resting state to attentional task-positive state. Specifically, while
reflecting the extent to which each cortical region contributed to functional
cycles following different cognitive demands, these reconfigurations were
constrained such that the spatial distribution of cavities in the connectome is
relatively conserved. Most importantly, such level of contributions covaried
with powers of aperiodic activities mostly within the theta-alpha (4-12 Hz)
band measured by magnetoencephalography (MEG). This comprehensive result
suggests that fMRI-induced hemodynamics and MEG theta-alpha aperiodic
activities are governed by the same functional constraints specific to each
cortical morpho-structure. Methodologically, our work paves the way toward an
innovative computing paradigm in multimodal neuroimaging topological learning.
| [
{
"created": "Sat, 6 Jul 2024 12:14:16 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Jul 2024 22:40:39 GMT",
"version": "v2"
}
] | 2024-07-25 | [
[
"Nguyen",
"Nghi",
""
],
[
"Hou",
"Tao",
""
],
[
"Amico",
"Enrico",
""
],
[
"Zheng",
"Jingyi",
""
],
[
"Huang",
"Huajun",
""
],
[
"Kaplan",
"Alan D.",
""
],
[
"Petri",
"Giovanni",
""
],
[
"Goñi",
... | Higher-order properties of functional magnetic resonance imaging (fMRI) induced connectivity have been shown to unravel many exclusive topological and dynamical insights beyond pairwise interactions. Nonetheless, whether these fMRI-induced higher-order properties play a role in disentangling other neuroimaging modalities' insights remains largely unexplored and poorly understood. In this work, by analyzing fMRI data from the Human Connectome Project Young Adult dataset using persistent homology, we discovered that the volume-optimal persistence homological scaffolds of fMRI-based functional connectomes exhibited conservative topological reconfigurations from the resting state to attentional task-positive state. Specifically, while reflecting the extent to which each cortical region contributed to functional cycles following different cognitive demands, these reconfigurations were constrained such that the spatial distribution of cavities in the connectome is relatively conserved. Most importantly, such level of contributions covaried with powers of aperiodic activities mostly within the theta-alpha (4-12 Hz) band measured by magnetoencephalography (MEG). This comprehensive result suggests that fMRI-induced hemodynamics and MEG theta-alpha aperiodic activities are governed by the same functional constraints specific to each cortical morpho-structure. Methodologically, our work paves the way toward an innovative computing paradigm in multimodal neuroimaging topological learning. |
2205.13143 | Shuhei A. Horiguchi | Shuhei A. Horiguchi, Tetsuya J. Kobayashi | Cellular gradient flow structure connects single-cell-level rules and
population-level dynamics | null | null | null | null | q-bio.PE q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In multicellular systems, the single-cell behaviors should be coordinated
consistently with the overall population dynamics and functions. However, the
interrelation between single-cell rules and the population-level goal is still
elusive. In this work, we reveal that these two levels are naturally connected
via a gradient flow structure of the heterogeneous cellular population and that
biologically prevalent single-cell rules such as unidirectional type-switching
and hierarchical order in types emerge from this structure. We also demonstrate
the gradient flow structure in a standard model of the T-cell immune response.
This theoretical framework works as a basis for understanding multicellular
dynamics and functions.
| [
{
"created": "Thu, 26 May 2022 04:11:25 GMT",
"version": "v1"
}
] | 2022-05-27 | [
[
"Horiguchi",
"Shuhei A.",
""
],
[
"Kobayashi",
"Tetsuya J.",
""
]
] | In multicellular systems, the single-cell behaviors should be coordinated consistently with the overall population dynamics and functions. However, the interrelation between single-cell rules and the population-level goal is still elusive. In this work, we reveal that these two levels are naturally connected via a gradient flow structure of the heterogeneous cellular population and that biologically prevalent single-cell rules such as unidirectional type-switching and hierarchical order in types emerge from this structure. We also demonstrate the gradient flow structure in a standard model of the T-cell immune response. This theoretical framework works as a basis for understanding multicellular dynamics and functions. |
1202.5431 | Bartlomiej Waclaw Dr | Philip Greulich, Bartlomiej Waclaw, and Rosalind J. Allen | Mutational pathway determines whether drug gradients accelerate
evolution of drug-resistant cells | 6 pages, 3 figures, final version before acceptance to Phys. Rev.
Letters. P.G and B.W contributed equally to this work | Phys. Rev. Lett. 109, 088101 (2012) | 10.1103/PhysRevLett.109.088101 | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Drug gradients are believed to play an important role in the evolution of
bacteria resistant to antibiotics and tumors resistant to anti-cancer drugs. We
use a statistical physics model to study the evolution of a population of
malignant cells exposed to drug gradients, where drug resistance emerges via a
mutational pathway involving multiple mutations. We show that a non-uniform
drug distribution has the potential to accelerate the emergence of resistance
when the mutational pathway involves a long sequence of mutants with increasing
resistance, but if the pathway is short or crosses a fitness valley, the
evolution of resistance may actually be slowed down by drug gradients. These
predictions can be verified experimentally, and may help to improve strategies
for combatting the emergence of resistance.
| [
{
"created": "Fri, 24 Feb 2012 12:20:34 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Mar 2012 22:57:34 GMT",
"version": "v2"
},
{
"created": "Mon, 26 Mar 2012 09:44:21 GMT",
"version": "v3"
},
{
"created": "Wed, 15 Aug 2012 13:13:46 GMT",
"version": "v4"
}
] | 2015-06-04 | [
[
"Greulich",
"Philip",
""
],
[
"Waclaw",
"Bartlomiej",
""
],
[
"Allen",
"Rosalind J.",
""
]
] | Drug gradients are believed to play an important role in the evolution of bacteria resistant to antibiotics and tumors resistant to anti-cancer drugs. We use a statistical physics model to study the evolution of a population of malignant cells exposed to drug gradients, where drug resistance emerges via a mutational pathway involving multiple mutations. We show that a non-uniform drug distribution has the potential to accelerate the emergence of resistance when the mutational pathway involves a long sequence of mutants with increasing resistance, but if the pathway is short or crosses a fitness valley, the evolution of resistance may actually be slowed down by drug gradients. These predictions can be verified experimentally, and may help to improve strategies for combatting the emergence of resistance. |
2009.06576 | Miguel Navascues | Miguel Navascues, Costantino Budroni and Yelena Guryanova | Disease control as an optimization problem | New material: effect of vaccination campaigns on the minimum time
under lockdown, use of optimization constraints to control the complexity of
the generated policies for disease control, methods to optimize over weekly
adaptive lockdown policies. The current pre-print is close to the published
version | PLOS ONE 16(9): e0257958 (2021) | 10.1371/journal.pone.0257958 | null | q-bio.PE cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of epidemiology, policies for disease control are often
devised through a mixture of intuition and brute-force, whereby the set of
logically conceivable policies is narrowed down to a small family described by
a few parameters, following which linearization or grid search is used to
identify the optimal policy within the set. This scheme runs the risk of
leaving out more complex (and perhaps counter-intuitive) policies for disease
control that could tackle the disease more efficiently. In this article, we use
techniques from convex optimization theory and machine learning to conduct
optimizations over disease policies described by hundreds of parameters. In
contrast to past approaches for policy optimization based on control theory,
our framework can deal with arbitrary uncertainties on the initial conditions
and model parameters controlling the spread of the disease, and stochastic
models. In addition, our methods allow for optimization over policies which
remain constant over weekly periods, specified by either continuous or discrete
(e.g.: lockdown on/off) government measures. We illustrate our approach by
minimizing the total time required to eradicate COVID-19 within the
Susceptible-Exposed-Infected-Recovered (SEIR) model proposed by Kissler
\emph{et al.} (March, 2020).
| [
{
"created": "Mon, 14 Sep 2020 17:07:41 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Sep 2020 14:58:42 GMT",
"version": "v2"
},
{
"created": "Mon, 1 Mar 2021 11:21:40 GMT",
"version": "v3"
},
{
"created": "Thu, 30 Sep 2021 09:11:02 GMT",
"version": "v4"
}
] | 2021-10-04 | [
[
"Navascues",
"Miguel",
""
],
[
"Budroni",
"Costantino",
""
],
[
"Guryanova",
"Yelena",
""
]
] | In the context of epidemiology, policies for disease control are often devised through a mixture of intuition and brute-force, whereby the set of logically conceivable policies is narrowed down to a small family described by a few parameters, following which linearization or grid search is used to identify the optimal policy within the set. This scheme runs the risk of leaving out more complex (and perhaps counter-intuitive) policies for disease control that could tackle the disease more efficiently. In this article, we use techniques from convex optimization theory and machine learning to conduct optimizations over disease policies described by hundreds of parameters. In contrast to past approaches for policy optimization based on control theory, our framework can deal with arbitrary uncertainties on the initial conditions and model parameters controlling the spread of the disease, and stochastic models. In addition, our methods allow for optimization over policies which remain constant over weekly periods, specified by either continuous or discrete (e.g.: lockdown on/off) government measures. We illustrate our approach by minimizing the total time required to eradicate COVID-19 within the Susceptible-Exposed-Infected-Recovered (SEIR) model proposed by Kissler \emph{et al.} (March, 2020). |
1303.1442 | Ralph Brinks | Ralph Brinks | Surveillance of the Incidence of Noncommunicable Diseases (NCDs) with
Prevalence Data: Theory and Application to Diabetes in Denmark | 10 pages, 3 figures | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Secular trends of the incidence of NCDs are especially important as they
indicate changes of the risk profile of a population. The article describes a
method for detecting secular trends in the incidence from a series of
prevalence data - without requiring costly follow-up studies or running a
register. After describing the theory, the method is applied to the incidence
of diabetes in Denmark.
| [
{
"created": "Wed, 6 Mar 2013 20:13:57 GMT",
"version": "v1"
}
] | 2013-03-07 | [
[
"Brinks",
"Ralph",
""
]
] | Secular trends of the incidence of NCDs are especially important as they indicate changes of the risk profile of a population. The article describes a method for detecting secular trends in the incidence from a series of prevalence data - without requiring costly follow-up studies or running a register. After describing the theory, the method is applied to the incidence of diabetes in Denmark. |
1908.10779 | Tomislav Plesa Dr | Tomislav Plesa, Guy-Bart Stan, Thomas E. Ouldridge and Wooli Bae | Robust control of biochemical reaction networks via stochastic morphing | null | null | null | null | q-bio.MN math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synthetic biology is an interdisciplinary field aiming to design biochemical
systems with desired behaviors. To this end, molecular controllers have been
developed which, when embedded into a pre-existing ambient biochemical network,
control the dynamics of the underlying target molecular species. When
integrated into smaller compartments, such as biological cells in vivo, or
vesicles in vitro, controllers have to be calibrated to factor in the intrinsic
noise. In this context, molecular controllers put forward in the literature
have focused on manipulating the mean (first moment), and reducing the variance
(second moment), of the target species. However, many critical biochemical
processes are realized via higher-order moments, particularly the number and
configuration of the modes (maxima) of the probability distributions. To bridge
the gap, a controller called stochastic morpher is put forward in this paper,
inspired by gene-regulatory networks, which, under suitable time-scale
separations, morphs the probability distribution of the target species into a
desired predefined form. The morphing can be performed at a lower resolution,
allowing one to achieve desired multi-modality/multi-stability, and at a
higher resolution, allowing one to achieve arbitrary probability distributions.
Properties of the controller, such as robust perfect adaptation and
convergence, are rigorously established, and demonstrated on various examples.
Also proposed is a blueprint for an experimental implementation of stochastic
morpher.
| [
{
"created": "Wed, 28 Aug 2019 15:29:02 GMT",
"version": "v1"
}
] | 2019-08-29 | [
[
"Plesa",
"Tomislav",
""
],
[
"Stan",
"Guy-Bart",
""
],
[
"Ouldridge",
"Thomas E.",
""
],
[
"Bae",
"Wooli",
""
]
] | Synthetic biology is an interdisciplinary field aiming to design biochemical systems with desired behaviors. To this end, molecular controllers have been developed which, when embedded into a pre-existing ambient biochemical network, control the dynamics of the underlying target molecular species. When integrated into smaller compartments, such as biological cells in vivo, or vesicles in vitro, controllers have to be calibrated to factor in the intrinsic noise. In this context, molecular controllers put forward in the literature have focused on manipulating the mean (first moment), and reducing the variance (second moment), of the target species. However, many critical biochemical processes are realized via higher-order moments, particularly the number and configuration of the modes (maxima) of the probability distributions. To bridge the gap, a controller called stochastic morpher is put forward in this paper, inspired by gene-regulatory networks, which, under suitable time-scale separations, morphs the probability distribution of the target species into a desired predefined form. The morphing can be performed at a lower resolution, allowing one to achieve desired multi-modality/multi-stability, and at a higher resolution, allowing one to achieve arbitrary probability distributions. Properties of the controller, such as robust perfect adaptation and convergence, are rigorously established, and demonstrated on various examples. Also proposed is a blueprint for an experimental implementation of stochastic morpher.
1303.7189 | Stefan Schuster | Stefan Schuster | The combinatorial multitude of fatty acids can be described by Fibonacci
numbers | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The famous series of Fibonacci numbers is defined by a recursive equation
saying that each number is the sum of its two predecessors, with the initial
condition that the first two numbers are equal to unity. Here, we show that the
number of fatty acids (straight-chain aliphatic monocarboxylic acids) with n
carbon atoms is exactly given by the Fibonacci numbers. Thus, by investing one
more carbon atom into extending a fatty acid, an organism can increase the
variability of the fatty acids approximately by the factor of the Golden
section, 1.618. As the Fibonacci series grows asymptotically exponentially, our
results are in line with combinatorial complexity found generally in biology.
We also outline potential extensions of the calculations to modified (e.g.,
hydroxylated) fatty acids. The presented enumeration method may be of interest
for lipidomics, combinatorial chemistry, synthetic biology and the theory of
evolution (including prebiotic evolution).
| [
{
"created": "Thu, 28 Mar 2013 17:35:08 GMT",
"version": "v1"
}
] | 2013-03-29 | [
[
"Schuster",
"Stefan",
""
]
] | The famous series of Fibonacci numbers is defined by a recursive equation saying that each number is the sum of its two predecessors, with the initial condition that the first two numbers are equal to unity. Here, we show that the number of fatty acids (straight-chain aliphatic monocarboxylic acids) with n carbon atoms is exactly given by the Fibonacci numbers. Thus, by investing one more carbon atom into extending a fatty acid, an organism can increase the variability of the fatty acids approximately by the factor of the Golden section, 1.618. As the Fibonacci series grows asymptotically exponentially, our results are in line with combinatorial complexity found generally in biology. We also outline potential extensions of the calculations to modified (e.g., hydroxylated) fatty acids. The presented enumeration method may be of interest for lipidomics, combinatorial chemistry, synthetic biology and the theory of evolution (including prebiotic evolution).
2309.01384 | Yuanyuan Wei | Yuanyuan Wei, Sai Mu Dalike Abaxi, Nawaz Mehmood, Luoquan Li, Fuyang
Qu, Guangyao Cheng, Dehua Hu, Yi-Ping Ho, Scott Wu Yuan, and Ho-Pui Ho | Deep Learning Approach for Large-Scale, Real-Time Quantification of
Green Fluorescent Protein-Labeled Biological Samples in Microreactors | 23 pages, 6 figures, 1 table | null | null | null | q-bio.QM cs.SY eess.IV eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Absolute quantification of biological samples entails determining expression
levels in precise numerical copies, offering enhanced accuracy and superior
performance for rare templates. However, existing methodologies suffer from
significant limitations: flow cytometers are both costly and intricate, while
fluorescence imaging relying on software tools or manual counting is
time-consuming and prone to inaccuracies. In this study, we have devised a
comprehensive deep-learning-enabled pipeline that enables the automated
segmentation and classification of GFP (green fluorescent protein)-labeled
microreactors, facilitating real-time absolute quantification. Our findings
demonstrate the efficacy of this technique in accurately predicting the sizes
and occupancy status of microreactors using standard laboratory fluorescence
microscopes, thereby providing precise measurements of template concentrations.
Notably, our approach exhibits an analysis speed of quantifying over 2,000
microreactors (across 10 images) within a remarkable 2.5 seconds, and a dynamic
range spanning from 56.52 to 1569.43 copies per micron-liter. Furthermore, our
Deep-dGFP algorithm showcases remarkable generalization capabilities, as it can
be directly applied to various GFP-labeling scenarios, including droplet-based,
microwell-based, and agarose-based biological applications. To the best of our
knowledge, this represents the first successful implementation of an all-in-one
image analysis algorithm in droplet digital PCR (polymerase chain reaction),
microwell digital PCR, droplet single-cell sequencing, agarose digital PCR, and
bacterial quantification, without necessitating any transfer learning steps,
modifications, or retraining procedures. We firmly believe that our Deep-dGFP
technique will be readily embraced by biomedical laboratories and holds
potential for further development in related clinical applications.
| [
{
"created": "Mon, 4 Sep 2023 06:22:33 GMT",
"version": "v1"
}
] | 2023-09-06 | [
[
"Wei",
"Yuanyuan",
""
],
[
"Abaxi",
"Sai Mu Dalike",
""
],
[
"Mehmood",
"Nawaz",
""
],
[
"Li",
"Luoquan",
""
],
[
"Qu",
"Fuyang",
""
],
[
"Cheng",
"Guangyao",
""
],
[
"Hu",
"Dehua",
""
],
[
"Ho",
... | Absolute quantification of biological samples entails determining expression levels in precise numerical copies, offering enhanced accuracy and superior performance for rare templates. However, existing methodologies suffer from significant limitations: flow cytometers are both costly and intricate, while fluorescence imaging relying on software tools or manual counting is time-consuming and prone to inaccuracies. In this study, we have devised a comprehensive deep-learning-enabled pipeline that enables the automated segmentation and classification of GFP (green fluorescent protein)-labeled microreactors, facilitating real-time absolute quantification. Our findings demonstrate the efficacy of this technique in accurately predicting the sizes and occupancy status of microreactors using standard laboratory fluorescence microscopes, thereby providing precise measurements of template concentrations. Notably, our approach exhibits an analysis speed of quantifying over 2,000 microreactors (across 10 images) within a remarkable 2.5 seconds, and a dynamic range spanning from 56.52 to 1569.43 copies per micron-liter. Furthermore, our Deep-dGFP algorithm showcases remarkable generalization capabilities, as it can be directly applied to various GFP-labeling scenarios, including droplet-based, microwell-based, and agarose-based biological applications. To the best of our knowledge, this represents the first successful implementation of an all-in-one image analysis algorithm in droplet digital PCR (polymerase chain reaction), microwell digital PCR, droplet single-cell sequencing, agarose digital PCR, and bacterial quantification, without necessitating any transfer learning steps, modifications, or retraining procedures. We firmly believe that our Deep-dGFP technique will be readily embraced by biomedical laboratories and holds potential for further development in related clinical applications.
2111.13412 | Mathieu Andraud | Mathieu Andraud, Pachka Hammami, Brandon H. Hayes, Jason A. Galvis,
Timoth\'ee Vergne, Gustavo Machado, Nicolas Rose | Modelling African swine fever virus spread in pigs using time-respective
network data: scientific support for decision-makers | 26 pages; 7 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | African Swine Fever (ASF) represents the main threat to swine production,
with heavy economic consequences for both farmers and the food industry. The
spread of the virus that causes ASF through Europe raises the issues of
identifying transmission routes and assessing their relative contributions in
order to provide insights to stakeholders for adapted surveillance and control
measures. A simulation model was developed to assess ASF spread over the
commercial swine network in France. The model was designed from raw movement
data and actual farm characteristics. A metapopulation approach was used, with
transmission processes at the herd level potentially leading, through a
reaction-diffusion process, to external spread to epidemiologically connected
herds. Three transmission routes were considered: local transmission (e.g.
fomites, material exchange), movement of animals from infected to susceptible
sites, and transit of trucks without physical animal exchange. Surveillance was
based on prevalence and mortality detection thresholds, which triggered control
measures based on movement ban for detected herds and epidemiologically related
herds. The time from infection to detection varied between 8 and 21 days,
depending on the detection criteria, but was also dependent on the types of
herds in which the infection was introduced. Movement restrictions effectively
reduced the transmission between herds, but local transmission was nevertheless
observed in higher proportions highlighting the need of global awareness of all
actors of the swine industry to mitigate the risk of local spread. Raw movement
data were directly used to build a dynamic network on a realistic time-scale.
This approach allows for a rapid update of input data without any
pre-treatment, which could be important in terms of reactivity, should an
introduction occur.
| [
{
"created": "Fri, 26 Nov 2021 10:31:12 GMT",
"version": "v1"
}
] | 2021-11-29 | [
[
"Andraud",
"Mathieu",
""
],
[
"Hammami",
"Pachka",
""
],
[
"Hayes",
"Brandon H.",
""
],
[
"Galvis",
"Jason A.",
""
],
[
"Vergne",
"Timothée",
""
],
[
"Machado",
"Gustavo",
""
],
[
"Rose",
"Nicolas",
""
]
... | African Swine Fever (ASF) represents the main threat to swine production, with heavy economic consequences for both farmers and the food industry. The spread of the virus that causes ASF through Europe raises the issues of identifying transmission routes and assessing their relative contributions in order to provide insights to stakeholders for adapted surveillance and control measures. A simulation model was developed to assess ASF spread over the commercial swine network in France. The model was designed from raw movement data and actual farm characteristics. A metapopulation approach was used, with transmission processes at the herd level potentially leading, through a reaction-diffusion process, to external spread to epidemiologically connected herds. Three transmission routes were considered: local transmission (e.g. fomites, material exchange), movement of animals from infected to susceptible sites, and transit of trucks without physical animal exchange. Surveillance was based on prevalence and mortality detection thresholds, which triggered control measures based on movement ban for detected herds and epidemiologically related herds. The time from infection to detection varied between 8 and 21 days, depending on the detection criteria, but was also dependent on the types of herds in which the infection was introduced. Movement restrictions effectively reduced the transmission between herds, but local transmission was nevertheless observed in higher proportions highlighting the need of global awareness of all actors of the swine industry to mitigate the risk of local spread. Raw movement data were directly used to build a dynamic network on a realistic time-scale. This approach allows for a rapid update of input data without any pre-treatment, which could be important in terms of reactivity, should an introduction occur. |
2004.02859 | Ka-Ming Tam | Ka-Ming Tam, Nicholas Walker, Juana Moreno | Projected Development of COVID-19 in Louisiana | 6 pages, 5 figures | null | null | null | q-bio.PE physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | At the time of writing, Louisiana has the third highest COVID-19 infection
per capita in the United States. The state government issued a stay-at-home
order effective March 23rd. We analyze the projected spread of COVID-19 in
Louisiana without including the effects of the stay-at-home order. We predict
that a large fraction of the state population would be infected without the
mitigation efforts, and would certainly overwhelm the capacity of Louisiana
health care system. We further predict the outcomes with different degrees of
reduction in the infection rate. More than 70% of reduction is required to cap
the number of infected to under one million.
| [
{
"created": "Mon, 6 Apr 2020 17:50:08 GMT",
"version": "v1"
}
] | 2020-04-07 | [
[
"Tam",
"Ka-Ming",
""
],
[
"Walker",
"Nicholas",
""
],
[
"Moreno",
"Juana",
""
]
] | At the time of writing, Louisiana has the third highest COVID-19 infection per capita in the United States. The state government issued a stay-at-home order effective March 23rd. We analyze the projected spread of COVID-19 in Louisiana without including the effects of the stay-at-home order. We predict that a large fraction of the state population would be infected without the mitigation efforts, and would certainly overwhelm the capacity of Louisiana health care system. We further predict the outcomes with different degrees of reduction in the infection rate. More than 70% of reduction is required to cap the number of infected to under one million. |
2304.02679 | Tung D. Nguyen | Badal Joshi and Tung D. Nguyen | Bifunctional enzyme provides absolute concentration robustness in
multisite covalent modification networks | 28 pages | null | null | null | q-bio.MN math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biochemical covalent modification networks exhibit a remarkable suite of
steady state and dynamical properties such as multistationarity, oscillations,
ultrasensitivity and absolute concentration robustness. This paper focuses on
conditions required for a network to have a species with absolute concentration
robustness. We find that the robustness in a substrate is endowed by its
interaction with a bifunctional enzyme, which is an enzyme that has different
roles when isolated versus when bound as a substrate-enzyme complex. When
isolated, the bifunctional enzyme promotes production of more molecules of the
robust species while when bound, the same enzyme facilitates degradation of the
robust species. These dual actions produce robustness in the large class of
covalent modification networks. For each network of this type, we find the
network conditions for the presence of robustness, the species that has
robustness, and its robustness value. The unified approach of simultaneously
analyzing a large class of networks for a single property, i.e. absolute
concentration robustness, reveals the underlying mechanism of the action of
bifunctional enzyme while simultaneously providing a precise mathematical
description of bifunctionality.
| [
{
"created": "Wed, 5 Apr 2023 18:16:35 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Feb 2024 21:46:42 GMT",
"version": "v2"
}
] | 2024-02-29 | [
[
"Joshi",
"Badal",
""
],
[
"Nguyen",
"Tung D.",
""
]
] | Biochemical covalent modification networks exhibit a remarkable suite of steady state and dynamical properties such as multistationarity, oscillations, ultrasensitivity and absolute concentration robustness. This paper focuses on conditions required for a network to have a species with absolute concentration robustness. We find that the robustness in a substrate is endowed by its interaction with a bifunctional enzyme, which is an enzyme that has different roles when isolated versus when bound as a substrate-enzyme complex. When isolated, the bifunctional enzyme promotes production of more molecules of the robust species while when bound, the same enzyme facilitates degradation of the robust species. These dual actions produce robustness in the large class of covalent modification networks. For each network of this type, we find the network conditions for the presence of robustness, the species that has robustness, and its robustness value. The unified approach of simultaneously analyzing a large class of networks for a single property, i.e. absolute concentration robustness, reveals the underlying mechanism of the action of bifunctional enzyme while simultaneously providing a precise mathematical description of bifunctionality. |
q-bio/0602018 | Harold P. de Vladar | Harold P. de Vladar and Ido Pen | Determinism, Noise, and Spurious Estimations in a Generalised Model of
Population Growth | Accepted in Physica A. Updated with [minor] observations from the
refferee | H.P. de Vladar and I. Pen. Determinism, noise, and spurious
estimations in a generalised model of population growth. Physica A (2007)
vol. 373 pp. 477-485 | 10.1016/j.physa.2006.06.025 | null | q-bio.PE q-bio.QM | null | We study a generalised model of population growth in which the state variable
is population growth rate instead of population size. Stochastic parametric
perturbations, modelling phenotypic variability, lead to a Langevin system with
two sources of multiplicative noise. The stationary probability distributions
have two characteristic power-law scales. Numerical simulations show that noise
suppresses the explosion of the growth rate which occurs in the deterministic
counterpart. Instead, in different parameter regimes populations will grow with
``anomalous'' stochastic rates and (i) stabilise at ``random carrying
capacities'', or (ii) go extinct in random times. Using logistic fits to
reconstruct the simulated data, we find that even highly significant
estimations do not recover or reflect information about the deterministic part
of the process. Therefore, the logistic interpretation is not biologically
meaningful. These results have implications for distinct model-aided
calculations in biological situations because these kinds of estimations could
lead to spurious conclusions.
| [
{
"created": "Tue, 14 Feb 2006 12:33:18 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Mar 2006 11:15:54 GMT",
"version": "v2"
},
{
"created": "Wed, 3 May 2006 11:19:47 GMT",
"version": "v3"
}
] | 2010-10-15 | [
[
"de Vladar",
"Harold P.",
""
],
[
"Pen",
"Ido",
""
]
] | We study a generalised model of population growth in which the state variable is population growth rate instead of population size. Stochastic parametric perturbations, modelling phenotypic variability, lead to a Langevin system with two sources of multiplicative noise. The stationary probability distributions have two characteristic power-law scales. Numerical simulations show that noise suppresses the explosion of the growth rate which occurs in the deterministic counterpart. Instead, in different parameter regimes populations will grow with ``anomalous'' stochastic rates and (i) stabilise at ``random carrying capacities'', or (ii) go extinct in random times. Using logistic fits to reconstruct the simulated data, we find that even highly significant estimations do not recover or reflect information about the deterministic part of the process. Therefore, the logistic interpretation is not biologically meaningful. These results have implications for distinct model-aided calculations in biological situations because these kinds of estimations could lead to spurious conclusions. |
1603.03796 | Gabriel Balaban | Gabriel Balaban and Martin S. Aln{\ae}s and Joakim Sundnes and Marie
E. Rognes | Adjoint Multi-Start Based Estimation of Cardiac Hyperelastic Material
Parameters using Shear Data | null | null | null | null | q-bio.TO physics.comp-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cardiac muscle tissue during relaxation is commonly modelled as a
hyperelastic material with strongly nonlinear and anisotropic stress response.
Adapting the behavior of such a model to experimental or patient data gives
rise to a parameter estimation problem which involves a significant number of
parameters. Gradient-based optimization algorithms provide a way to solve such
nonlinear parameter estimation problems with relatively few iterations, but
require the gradient of the objective functional with respect to the model
parameters. This gradient has traditionally been obtained using finite
differences, the calculation of which scales linearly with the number of model
parameters, and introduces a differencing error. By using an automatically
derived adjoint equation, we are able to calculate this gradient more
efficiently, and with minimal implementation effort. We test this adjoint
framework on a least squares fitting problem involving data from simple shear
tests on cardiac tissue samples. A second challenge which arises in
gradient-based optimization is the dependency of the algorithm on a suitable
initial guess. We show how a multi-start procedure can alleviate this
dependency. Finally, we provide estimates for the material parameters of the
Holzapfel and Ogden strain energy law using finite element models together with
experimental shear data.
| [
{
"created": "Fri, 11 Mar 2016 21:47:42 GMT",
"version": "v1"
}
] | 2016-03-16 | [
[
"Balaban",
"Gabriel",
""
],
[
"Alnæs",
"Martin S.",
""
],
[
"Sundnes",
"Joakim",
""
],
[
"Rognes",
"Marie E.",
""
]
] | Cardiac muscle tissue during relaxation is commonly modelled as a hyperelastic material with strongly nonlinear and anisotropic stress response. Adapting the behavior of such a model to experimental or patient data gives rise to a parameter estimation problem which involves a significant number of parameters. Gradient-based optimization algorithms provide a way to solve such nonlinear parameter estimation problems with relatively few iterations, but require the gradient of the objective functional with respect to the model parameters. This gradient has traditionally been obtained using finite differences, the calculation of which scales linearly with the number of model parameters, and introduces a differencing error. By using an automatically derived adjoint equation, we are able to calculate this gradient more efficiently, and with minimal implementation effort. We test this adjoint framework on a least squares fitting problem involving data from simple shear tests on cardiac tissue samples. A second challenge which arises in gradient-based optimization is the dependency of the algorithm on a suitable initial guess. We show how a multi-start procedure can alleviate this dependency. Finally, we provide estimates for the material parameters of the Holzapfel and Ogden strain energy law using finite element models together with experimental shear data. |
2211.01603 | Saish Jaiswal | Saish Jaiswal, Shreya Nema, Hema A Murthy, Manikandan Narayanan | Using Signal Processing in Tandem With Adapted Mixture Models for
Classifying Genomic Signals | null | null | null | null | q-bio.GN cs.LG eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genomic signal processing has been used successfully in bioinformatics to
analyze biomolecular sequences and gain varied insights into DNA structure,
gene organization, protein binding, sequence evolution, etc. But challenges
remain in finding the appropriate spectral representation of a biomolecular
sequence, especially when multiple variable-length sequences need to be handled
consistently. In this study, we address this challenge in the context of the
well-studied problem of classifying genomic sequences into different taxonomic
units (strain, phyla, order, etc.). We propose a novel technique that employs
signal processing in tandem with Gaussian mixture models to improve the
spectral representation of a sequence and subsequently the taxonomic
classification accuracies. The sequences are first transformed into spectra,
and projected to a subspace, where sequences belonging to different taxons are
better distinguishable. Our method outperforms a similar state-of-the-art
method on established benchmark datasets by an absolute margin of 6.06%
accuracy.
| [
{
"created": "Thu, 3 Nov 2022 06:10:55 GMT",
"version": "v1"
}
] | 2022-11-04 | [
[
"Jaiswal",
"Saish",
""
],
[
"Nema",
"Shreya",
""
],
[
"Murthy",
"Hema A",
""
],
[
"Narayanan",
"Manikandan",
""
]
] | Genomic signal processing has been used successfully in bioinformatics to analyze biomolecular sequences and gain varied insights into DNA structure, gene organization, protein binding, sequence evolution, etc. But challenges remain in finding the appropriate spectral representation of a biomolecular sequence, especially when multiple variable-length sequences need to be handled consistently. In this study, we address this challenge in the context of the well-studied problem of classifying genomic sequences into different taxonomic units (strain, phyla, order, etc.). We propose a novel technique that employs signal processing in tandem with Gaussian mixture models to improve the spectral representation of a sequence and subsequently the taxonomic classification accuracies. The sequences are first transformed into spectra, and projected to a subspace, where sequences belonging to different taxons are better distinguishable. Our method outperforms a similar state-of-the-art method on established benchmark datasets by an absolute margin of 6.06% accuracy. |
1712.08828 | Sadyk Sayfullin | Sadyk Sayfullin, Fedor Akhmetov, Manuel Mazzara, Ruslan Mustafin and
Victor Rivera | Gene expression for simulation of biological tissue | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | BioDynaMo is a biological processes simulator developed by an international
community of researchers and software engineers working closely with
neuroscientists. The authors have been working on gene expression, i.e. the
process by which the heritable information in a gene - the sequence of DNA base
pairs - is made into a functional gene product, such as protein or RNA.
Typically, gene regulatory models employ either statistical or analytical
approaches, being the former already well understood and broadly used. In this
paper, we utilize analytical approaches representing the regulatory networks by
means of differential equations, such as Euler and Runge-Kutta methods. The two
solutions are implemented and have been submitted for inclusion in the
BioDynaMo project and are compared for accuracy and performance.
| [
{
"created": "Sat, 23 Dec 2017 19:51:19 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Mar 2018 14:27:15 GMT",
"version": "v2"
}
] | 2018-03-13 | [
[
"Sayfullin",
"Sadyk",
""
],
[
"Akhmetov",
"Fedor",
""
],
[
"Mazzara",
"Manuel",
""
],
[
"Mustafin",
"Ruslan",
""
],
[
"Rivera",
"Victor",
""
]
] | BioDynaMo is a biological processes simulator developed by an international community of researchers and software engineers working closely with neuroscientists. The authors have been working on gene expression, i.e. the process by which the heritable information in a gene - the sequence of DNA base pairs - is made into a functional gene product, such as protein or RNA. Typically, gene regulatory models employ either statistical or analytical approaches, the former being already well understood and broadly used. In this paper, we utilize analytical approaches representing the regulatory networks by means of differential equations, such as Euler and Runge-Kutta methods. The two solutions are implemented and have been submitted for inclusion in the BioDynaMo project and are compared for accuracy and performance.
1908.10922 | Gregory Kiar | Gregory Kiar, Pablo de Oliveira Castro, Pierre Rioux, Eric Petit,
Shawn T. Brown, Alan C. Evans, Tristan Glatard | Comparing Perturbation Models for Evaluating Stability of Neuroimaging
Pipelines | 9 pages, 5 figures, 1 table, paper published at IJHPCA | null | null | null | q-bio.NC eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A lack of software reproducibility has become increasingly apparent in the
last several years, calling into question the validity of scientific findings
affected by published tools. Reproducibility issues may have numerous sources
of error, including the underlying numerical stability of algorithms and
implementations employed. Various forms of instability have been observed in
neuroimaging, including across operating system versions, minor noise
injections, and implementation of theoretically equivalent algorithms. In this
paper we explore the effect of various perturbation methods on a typical
neuroimaging pipeline through the use of i) targeted noise injections, ii)
Monte Carlo Arithmetic, and iii) varying operating systems to identify the
quality and severity of their impact. The work presented here demonstrates that
even low order computational models such as the connectome estimation pipeline
that we used are susceptible to noise. This suggests that stability is a
relevant axis upon which tools should be compared, developed, or improved,
alongside more commonly considered axes such as accuracy/biological feasibility
or performance. The heterogeneity observed across participants clearly
illustrates that stability is a property of not just the data or tools
independently, but their interaction. Characterization of stability should
therefore be evaluated for specific analyses and performed on a representative
set of subjects for consideration in subsequent statistical testing.
Additionally, identifying how this relationship scales to higher-order models
is an exciting next step which will be explored. Finally, the joint application
of perturbation methods with post-processing approaches such as bagging or
signal normalization may lead to the development of more numerically stable
analyses while maintaining sensitivity to meaningful variation.
| [
{
"created": "Wed, 28 Aug 2019 19:39:11 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Oct 2019 16:23:33 GMT",
"version": "v2"
},
{
"created": "Wed, 22 Apr 2020 16:10:38 GMT",
"version": "v3"
}
] | 2020-04-23 | [
[
"Kiar",
"Gregory",
""
],
[
"Castro",
"Pablo de Oliveira",
""
],
[
"Rioux",
"Pierre",
""
],
[
"Petit",
"Eric",
""
],
[
"Brown",
"Shawn T.",
""
],
[
"Evans",
"Alan C.",
""
],
[
"Glatard",
"Tristan",
""
]
] | A lack of software reproducibility has become increasingly apparent in the last several years, calling into question the validity of scientific findings affected by published tools. Reproducibility issues may have numerous sources of error, including the underlying numerical stability of algorithms and implementations employed. Various forms of instability have been observed in neuroimaging, including across operating system versions, minor noise injections, and implementation of theoretically equivalent algorithms. In this paper we explore the effect of various perturbation methods on a typical neuroimaging pipeline through the use of i) targeted noise injections, ii) Monte Carlo Arithmetic, and iii) varying operating systems to identify the quality and severity of their impact. The work presented here demonstrates that even low order computational models such as the connectome estimation pipeline that we used are susceptible to noise. This suggests that stability is a relevant axis upon which tools should be compared, developed, or improved, alongside more commonly considered axes such as accuracy/biological feasibility or performance. The heterogeneity observed across participants clearly illustrates that stability is a property of not just the data or tools independently, but their interaction. Characterization of stability should therefore be evaluated for specific analyses and performed on a representative set of subjects for consideration in subsequent statistical testing. Additionally, identifying how this relationship scales to higher-order models is an exciting next step which will be explored. Finally, the joint application of perturbation methods with post-processing approaches such as bagging or signal normalization may lead to the development of more numerically stable analyses while maintaining sensitivity to meaningful variation. |
2005.01820 | Meriem Laleg Dr. | Mohamed Bahloul, Abderrazak Chahid and Taous Meriem Laleg-Kirati | Fractional-order SEIQRDP model for simulating the dynamics of COVID-19
epidemic | 11 Pages, 4 figures | null | null | null | q-bio.PE cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The novel coronavirus disease (COVID-19) is known as the causative virus of
outbreak pneumonia initially recognized in the mainland of China, late December
2019. COVID-19 reaches out to many countries in the world, and the number of
daily cases continues to increase rapidly. In order to simulate, track, and
forecast the trend of the virus spread, several mathematical and statistical
models have been developed.
Susceptible-Exposed-Infected-Quarantined-Recovered-Death-Insusceptible
(SEIQRDP) model is one of the most promising dynamic systems that has been
proposed for estimating the transmissibility of the COVID-19. In the present
study, we propose a Fractional-order SEIQRDP model to analyze the COVID-19
epidemic. The Fractional-order paradigm offers a flexible, appropriate, and
reliable framework for pandemic growth characterization. In fact,
fractional-order operator is not local and consider the memory of the
variables. Hence, it takes into account the sub-diffusion process of confirmed
and recovered cases growth. The results of the validation of the model using
real COVID-19 data are presented, and the pertinence of the proposed model to
analyze, understand, and predict the epidemic is discussed.
| [
{
"created": "Mon, 4 May 2020 20:03:34 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Jun 2020 06:27:08 GMT",
"version": "v2"
}
] | 2020-06-09 | [
[
"Bahloul",
"Mohamed",
""
],
[
"Chahid",
"Abderrazak",
""
],
[
"Laleg-Kirati",
"Taous Meriem",
""
]
] | The novel coronavirus disease (COVID-19) is known as the causative virus of outbreak pneumonia initially recognized in the mainland of China, late December 2019. COVID-19 reaches out to many countries in the world, and the number of daily cases continues to increase rapidly. In order to simulate, track, and forecast the trend of the virus spread, several mathematical and statistical models have been developed. Susceptible-Exposed-Infected-Quarantined-Recovered-Death-Insusceptible (SEIQRDP) model is one of the most promising dynamic systems that has been proposed for estimating the transmissibility of the COVID-19. In the present study, we propose a Fractional-order SEIQRDP model to analyze the COVID-19 epidemic. The Fractional-order paradigm offers a flexible, appropriate, and reliable framework for pandemic growth characterization. In fact, the fractional-order operator is not local and considers the memory of the variables. Hence, it takes into account the sub-diffusion process of confirmed and recovered cases growth. The results of the validation of the model using real COVID-19 data are presented, and the pertinence of the proposed model to analyze, understand, and predict the epidemic is discussed.
2304.09931 | Azadeh Aghaeeyan | Azadeh Aghaeeyan, Pouria Ramazi, and Mark A. Lewis | Revealing the unseen: Likely half of the Americans relied on others'
experience when deciding on taking the COVID-19 vaccine | The population was narrowed to those of 12 years and older | Bull Math Biol 86, 72 (2024) | 10.1007/s11538-024-01290-4 | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Efficient coverage for newly developed vaccines requires knowing which groups
of individuals will accept the vaccine immediately and which will take longer
to accept or never accept. Of those who may eventually accept the vaccine,
there are two main types: success-based learners, basing their decisions on
others' satisfaction, and myopic rationalists, attending to their own immediate
perceived benefit. We used COVID-19 vaccination data to fit a mechanistic model
capturing the distinct effects of the two types on the vaccination progress. We
estimated that 47 percent of Americans behaved as myopic rationalist with a
high variations across the jurisdictions, from 31 percent in Mississippi to 76
percent in Vermont. The proportion was correlated with the vaccination
coverage, proportion of votes in favor of Democrats in 2020 presidential
election, and education score.
| [
{
"created": "Wed, 19 Apr 2023 19:15:12 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Aug 2023 17:56:08 GMT",
"version": "v2"
}
] | 2024-05-13 | [
[
"Aghaeeyan",
"Azadeh",
""
],
[
"Ramazi",
"Pouria",
""
],
[
"Lewis",
"Mark A.",
""
]
] | Efficient coverage for newly developed vaccines requires knowing which groups of individuals will accept the vaccine immediately and which will take longer to accept or never accept. Of those who may eventually accept the vaccine, there are two main types: success-based learners, basing their decisions on others' satisfaction, and myopic rationalists, attending to their own immediate perceived benefit. We used COVID-19 vaccination data to fit a mechanistic model capturing the distinct effects of the two types on the vaccination progress. We estimated that 47 percent of Americans behaved as myopic rationalists, with high variation across the jurisdictions, from 31 percent in Mississippi to 76 percent in Vermont. The proportion was correlated with the vaccination coverage, proportion of votes in favor of Democrats in the 2020 presidential election, and education score.
1704.06678 | Glenn Young | Glenn Young, Mahmut Demir, Hanna Salman, G. Bard Ermentrout, Jonathan
E. Rubin | Interactions of Solitary Pulses of E. coli in a One-Dimensional Nutrient
Gradient | 34 pages, 12 figures | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study an anomalous behavior observed in interacting E. coli populations.
When two populations of E. coli are placed on opposite ends of a long channel
with a supply of nutrient between them, they will travel as pulses toward one
another up the nutrient gradient. We present experimental evidence that,
counterintuitively, the two pulses will in some cases change direction and
begin moving away from each other and the nutrient back toward the end of the
channel from which they originated. Simulations of the Keller-Segel chemotaxis
model reproduce the experimental results. To gain better insight to the
phenomenon, we introduce a heuristic approximation to the spatial profile of
each population in the Keller-Segel model to derive a system of ordinary
differential equations approximating the temporal dynamics of its center of
mass and width. This approximate model simplifies analysis of the global
dynamics of the bacterial system and allows us to efficiently explore the
qualitative behavior changes across variations of parameters, and thereby
provides experimentally testable hypotheses about the mechanisms behind the
turnaround behavior.
| [
{
"created": "Fri, 21 Apr 2017 18:42:54 GMT",
"version": "v1"
}
] | 2017-04-25 | [
[
"Young",
"Glenn",
""
],
[
"Demir",
"Mahmut",
""
],
[
"Salman",
"Hanna",
""
],
[
"Ermentrout",
"G. Bard",
""
],
[
"Rubin",
"Jonathan E.",
""
]
] | We study an anomalous behavior observed in interacting E. coli populations. When two populations of E. coli are placed on opposite ends of a long channel with a supply of nutrient between them, they will travel as pulses toward one another up the nutrient gradient. We present experimental evidence that, counterintuitively, the two pulses will in some cases change direction and begin moving away from each other and the nutrient back toward the end of the channel from which they originated. Simulations of the Keller-Segel chemotaxis model reproduce the experimental results. To gain better insight to the phenomenon, we introduce a heuristic approximation to the spatial profile of each population in the Keller-Segel model to derive a system of ordinary differential equations approximating the temporal dynamics of its center of mass and width. This approximate model simplifies analysis of the global dynamics of the bacterial system and allows us to efficiently explore the qualitative behavior changes across variations of parameters, and thereby provides experimentally testable hypotheses about the mechanisms behind the turnaround behavior.
2311.03409 | Chenwei Zhang | Chenwei Zhang, Khanh Dao Duc, Anne Condon | Visualizing DNA reaction trajectories with deep graph embedding
approaches | Published in Machine Learning for Structural Biology Workshop,
NeurIPS, 2022 | null | null | null | q-bio.BM cs.AI cs.HC cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Synthetic biologists and molecular programmers design novel nucleic acid
reactions, with many potential applications. Good visualization tools are
needed to help domain experts make sense of the complex outputs of folding
pathway simulations of such reactions. Here we present ViDa, a new approach for
visualizing DNA reaction folding trajectories over the energy landscape of
secondary structures. We integrate a deep graph embedding model with common
dimensionality reduction approaches, to map high-dimensional data onto 2D
Euclidean space. We assess ViDa on two well-studied and contrasting DNA
hybridization reactions. Our preliminary results suggest that ViDa's
visualization successfully separates trajectories with different folding
mechanisms, thereby providing useful insight to users, and is a big improvement
over the current state-of-the-art in DNA kinetics visualization.
| [
{
"created": "Mon, 6 Nov 2023 05:06:35 GMT",
"version": "v1"
}
] | 2023-11-08 | [
[
"Zhang",
"Chenwei",
""
],
[
"Duc",
"Khanh Dao",
""
],
[
"Condon",
"Anne",
""
]
] | Synthetic biologists and molecular programmers design novel nucleic acid reactions, with many potential applications. Good visualization tools are needed to help domain experts make sense of the complex outputs of folding pathway simulations of such reactions. Here we present ViDa, a new approach for visualizing DNA reaction folding trajectories over the energy landscape of secondary structures. We integrate a deep graph embedding model with common dimensionality reduction approaches, to map high-dimensional data onto 2D Euclidean space. We assess ViDa on two well-studied and contrasting DNA hybridization reactions. Our preliminary results suggest that ViDa's visualization successfully separates trajectories with different folding mechanisms, thereby providing useful insight to users, and is a big improvement over the current state-of-the-art in DNA kinetics visualization. |
1508.02598 | Steven Kelk | Steven Kelk and Georgios Stamoulis | A note on convex characters, Fibonacci numbers and exponential-time
algorithms | added a significant number of new results to the previous version (on
dynamic programming, g-spectra and so on) | null | null | null | q-bio.PE cs.DS math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phylogenetic trees are used to model evolution: leaves are labelled to
represent contemporary species ("taxa") and interior vertices represent extinct
ancestors. Informally, convex characters are measurements on the contemporary
species in which the subset of species (both contemporary and extinct) that
share a given state, form a connected subtree. Given an unrooted, binary
phylogenetic tree T on a set of n >= 2 taxa, a closed (but fairly opaque)
expression for the number of convex characters on T has been known since 1992,
and this is independent of the exact topology of T. In this note we prove that
this number is actually equal to the (2n-1)th Fibonacci number. Next, we define
g_k(T) to be the number of convex characters on T in which each state appears
on at least k taxa. We show that, somewhat curiously, g_2(T) is also
independent of the topology of T, and is equal to the (n-1)th Fibonacci
number. As we demonstrate, this topological neutrality subsequently breaks down
for k >= 3. However, we show that for each fixed k >= 1, g_k(T) can be computed
in O(n) time and the set of characters thus counted can be efficiently listed
and sampled. We use these insights to give a simple but effective exact
algorithm for the NP-hard maximum parsimony distance problem that runs in time
$\Theta( \phi^{n} \cdot n^2 )$, where $\phi \approx 1.618...$ is the golden
ratio, and an exact algorithm which computes the tree bisection and
reconnection distance (equivalently, a maximum agreement forest) in time
$\Theta( \phi^{2n}\cdot \text{poly}(n))$, where $\phi^2 \approx 2.619$.
| [
{
"created": "Tue, 11 Aug 2015 13:59:46 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Jan 2016 18:00:05 GMT",
"version": "v2"
},
{
"created": "Wed, 27 Jul 2016 14:46:44 GMT",
"version": "v3"
}
] | 2016-07-28 | [
[
"Kelk",
"Steven",
""
],
[
"Stamoulis",
"Georgios",
""
]
] | Phylogenetic trees are used to model evolution: leaves are labelled to represent contemporary species ("taxa") and interior vertices represent extinct ancestors. Informally, convex characters are measurements on the contemporary species in which the subset of species (both contemporary and extinct) that share a given state, form a connected subtree. Given an unrooted, binary phylogenetic tree T on a set of n >= 2 taxa, a closed (but fairly opaque) expression for the number of convex characters on T has been known since 1992, and this is independent of the exact topology of T. In this note we prove that this number is actually equal to the (2n-1)th Fibonacci number. Next, we define g_k(T) to be the number of convex characters on T in which each state appears on at least k taxa. We show that, somewhat curiously, g_2(T) is also independent of the topology of T, and is equal to the (n-1)th Fibonacci number. As we demonstrate, this topological neutrality subsequently breaks down for k >= 3. However, we show that for each fixed k >= 1, g_k(T) can be computed in O(n) time and the set of characters thus counted can be efficiently listed and sampled. We use these insights to give a simple but effective exact algorithm for the NP-hard maximum parsimony distance problem that runs in time $\Theta( \phi^{n} \cdot n^2 )$, where $\phi \approx 1.618...$ is the golden ratio, and an exact algorithm which computes the tree bisection and reconnection distance (equivalently, a maximum agreement forest) in time $\Theta( \phi^{2n}\cdot \text{poly}(n))$, where $\phi^2 \approx 2.619$.
1011.5334 | Lionel Barnett | L. Barnett, C. L. Buckley and S. Bullock | A Graph Theoretic Interpretation of Neural Complexity | submitted Phys. Rev. E, Nov. 2010 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the central challenges facing modern neuroscience is to explain the
ability of the nervous system to coherently integrate information across
distinct functional modules in the absence of a central executive. To this end
Tononi et al. [Proc. Nat. Acad. Sci. USA 91, 5033 (1994)] proposed a measure of
neural complexity that purports to capture this property based on mutual
information between complementary subsets of a system. Neural complexity, so
defined, is one of a family of information theoretic metrics developed to
measure the balance between the segregation and integration of a system's
dynamics. One key question arising for such measures involves understanding how
they are influenced by network topology. Sporns et al. [Cereb. Cortex 10, 127
(2000)] employed numerical models in order to determine the dependence of
neural complexity on the topological features of a network. However, a complete
picture has yet to be established. While De Lucia et al. [Phys. Rev. E 71,
016114 (2005)] made the first attempts at an analytical account of this
relationship, their work utilized a formulation of neural complexity that, we
argue, did not reflect the intuitions of the original work. In this paper we
start by describing weighted connection matrices formed by applying a random
continuous weight distribution to binary adjacency matrices. This allows us to
derive an approximation for neural complexity in terms of the moments of the
weight distribution and elementary graph motifs. In particular we explicitly
establish a dependency of neural complexity on cyclic graph motifs.
| [
{
"created": "Wed, 24 Nov 2010 10:54:08 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Nov 2010 12:00:12 GMT",
"version": "v2"
}
] | 2010-11-30 | [
[
"Barnett",
"L.",
""
],
[
"Buckley",
"C. L.",
""
],
[
"Bullock",
"S.",
""
]
] | One of the central challenges facing modern neuroscience is to explain the ability of the nervous system to coherently integrate information across distinct functional modules in the absence of a central executive. To this end Tononi et al. [Proc. Nat. Acad. Sci. USA 91, 5033 (1994)] proposed a measure of neural complexity that purports to capture this property based on mutual information between complementary subsets of a system. Neural complexity, so defined, is one of a family of information theoretic metrics developed to measure the balance between the segregation and integration of a system's dynamics. One key question arising for such measures involves understanding how they are influenced by network topology. Sporns et al. [Cereb. Cortex 10, 127 (2000)] employed numerical models in order to determine the dependence of neural complexity on the topological features of a network. However, a complete picture has yet to be established. While De Lucia et al. [Phys. Rev. E 71, 016114 (2005)] made the first attempts at an analytical account of this relationship, their work utilized a formulation of neural complexity that, we argue, did not reflect the intuitions of the original work. In this paper we start by describing weighted connection matrices formed by applying a random continuous weight distribution to binary adjacency matrices. This allows us to derive an approximation for neural complexity in terms of the moments of the weight distribution and elementary graph motifs. In particular we explicitly establish a dependency of neural complexity on cyclic graph motifs. |
2202.04823 | Ikbeom Jang | Ikbeom Jang, Garrison Danley, Ken Chang, Jayashree Kalpathy-Cramer | Decreasing Annotation Burden of Pairwise Comparisons with
Human-in-the-Loop Sorting: Application in Medical Image Artifact Rating | 5 pages, 2 figures, NeurIPS Data-Centric AI Workshop 2021 | null | null | null | q-bio.QM cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | Ranking by pairwise comparisons has shown improved reliability over ordinal
classification. However, as the annotations of pairwise comparisons scale
quadratically, this becomes less practical when the dataset is large. We
propose a method for reducing the number of pairwise comparisons required to
rank by a quantitative metric, demonstrating the effectiveness of the approach
in ranking medical images by image quality in this proof of concept study.
Using the medical image annotation software that we developed, we actively
subsample pairwise comparisons using a sorting algorithm with a human rater in
the loop. We find that this method substantially reduces the number of
comparisons required for a full ordinal ranking without compromising
inter-rater reliability when compared to pairwise comparisons without sorting.
| [
{
"created": "Thu, 10 Feb 2022 04:02:45 GMT",
"version": "v1"
}
] | 2022-02-11 | [
[
"Jang",
"Ikbeom",
""
],
[
"Danley",
"Garrison",
""
],
[
"Chang",
"Ken",
""
],
[
"Kalpathy-Cramer",
"Jayashree",
""
]
] | Ranking by pairwise comparisons has shown improved reliability over ordinal classification. However, as the annotations of pairwise comparisons scale quadratically, this becomes less practical when the dataset is large. We propose a method for reducing the number of pairwise comparisons required to rank by a quantitative metric, demonstrating the effectiveness of the approach in ranking medical images by image quality in this proof of concept study. Using the medical image annotation software that we developed, we actively subsample pairwise comparisons using a sorting algorithm with a human rater in the loop. We find that this method substantially reduces the number of comparisons required for a full ordinal ranking without compromising inter-rater reliability when compared to pairwise comparisons without sorting. |
1111.6200 | Pascal Grange | Pascal Grange, Partha P. Mitra | Statistical analysis of co-expression properties of sets of genes in the
mouse brain | 11 pages, 3 figures; v2: 2 figures added, references added | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a quantitative method to estimate the statistical properties of
sets of genes for which expression data are available and co-registered to a
reference atlas of the brain. It is based on graph-theoretic properties of
co-expression coefficients between pairs of genes. We apply this method to
mouse genes from the Allen Gene Expression Atlas. Co-expression patterns of a
list of several hundreds of genes related to addiction are analyzed, using ISH
data produced for the mouse brain at the Allen Institute. It appears that large
subsets of this set of genes are much more highly co-expressed than expected by
chance.
| [
{
"created": "Sat, 26 Nov 2011 22:40:21 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Dec 2011 22:54:52 GMT",
"version": "v2"
}
] | 2011-12-13 | [
[
"Grange",
"Pascal",
""
],
[
"Mitra",
"Partha P.",
""
]
] | We propose a quantitative method to estimate the statistical properties of sets of genes for which expression data are available and co-registered to a reference atlas of the brain. It is based on graph-theoretic properties of co-expression coefficients between pairs of genes. We apply this method to mouse genes from the Allen Gene Expression Atlas. Co-expression patterns of a list of several hundreds of genes related to addiction are analyzed, using ISH data produced for the mouse brain at the Allen Institute. It appears that large subsets of this set of genes are much more highly co-expressed than expected by chance. |
q-bio/0404016 | Silvia Scarpetta | Maria Marinaro and Silvia Scarpetta | Effects of Noise in a Cortical Neural Model | 25 pages, 10 figures, to appear in Phys. Rev. E | null | 10.1103/PhysRevE.70.041909 | null | q-bio.NC cond-mat.dis-nn | null | Recently Segev et al. (Phys. Rev. E 64,2001, Phys.Rev.Let. 88, 2002) made
long-term observations of spontaneous activity of in-vitro cortical networks,
which differ from predictions of current models in many features. In this paper
we generalize the EI cortical model introduced in a previous paper (S.Scarpetta
et al. Neural Comput. 14, 2002), including intrinsic white noise and analyzing
effects of noise on the spontaneous activity of the nonlinear system, in order
to account for the experimental results of Segev et al. Analytically we can
distinguish different regimes of activity, depending on the model parameters.
Using analytical results as a guideline, we perform simulations of the
nonlinear stochastic model in two different regimes, B and C. The Power
Spectrum Density (PSD) of the activity and the Inter-Event-Interval (IEI)
distributions are computed, and compared with experimental results. In regime B
the network shows stochastic resonance phenomena and noise induces aperiodic
collective synchronous oscillations that mimic experimental observations at 0.5
mM Ca concentration. In regime C the model shows spontaneous synchronous
periodic activity that mimics activity observed at 1 mM Ca concentration and the
PSD shows two peaks at the 1st and 2nd harmonics in agreement with experiments
at 1 mM Ca. Moreover (due to intrinsic noise and nonlinear activation function
effects) the PSD shows a broad band peak at low frequency. This feature,
observed experimentally, does not find explanation in the previous models.
Besides we identify parametric changes (namely increase of noise or decreasing
of excitatory connections) that reproduce the fading of periodicity found
experimentally at long times, and we identify a way to discriminate between
those two possible effects measuring experimentally the low frequency PSD.
| [
{
"created": "Wed, 14 Apr 2004 20:56:11 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Marinaro",
"Maria",
""
],
[
"Scarpetta",
"Silvia",
""
]
] | Recently Segev et al. (Phys. Rev. E 64,2001, Phys.Rev.Let. 88, 2002) made long-term observations of spontaneous activity of in-vitro cortical networks, which differ from predictions of current models in many features. In this paper we generalize the EI cortical model introduced in a previous paper (S.Scarpetta et al. Neural Comput. 14, 2002), including intrinsic white noise and analyzing effects of noise on the spontaneous activity of the nonlinear system, in order to account for the experimental results of Segev et al. Analytically we can distinguish different regimes of activity, depending on the model parameters. Using analytical results as a guideline, we perform simulations of the nonlinear stochastic model in two different regimes, B and C. The Power Spectrum Density (PSD) of the activity and the Inter-Event-Interval (IEI) distributions are computed, and compared with experimental results. In regime B the network shows stochastic resonance phenomena and noise induces aperiodic collective synchronous oscillations that mimic experimental observations at 0.5 mM Ca concentration. In regime C the model shows spontaneous synchronous periodic activity that mimics activity observed at 1 mM Ca concentration and the PSD shows two peaks at the 1st and 2nd harmonics in agreement with experiments at 1 mM Ca. Moreover (due to intrinsic noise and nonlinear activation function effects) the PSD shows a broad band peak at low frequency. This feature, observed experimentally, does not find explanation in the previous models. Besides we identify parametric changes (namely increase of noise or decreasing of excitatory connections) that reproduce the fading of periodicity found experimentally at long times, and we identify a way to discriminate between those two possible effects measuring experimentally the low frequency PSD.
q-bio/0409014 | Yi Xiao | Ruizhen Xu, Yanzhao Huang, Mingfen Li, Hanlin Chen, Yi Xiao | Hidden symmetries in primary sequences of small alpha proteins | 12 pages, 14 figures | null | null | null | q-bio.BM | null | Proteins have regular tertiary structures but irregular amino acid sequences.
This made it very difficult to decode the structural information in the protein
sequences. Here we demonstrate that many small alpha protein domains have
hidden sequence symmetries characteristic of their pseudo-symmetric tertiary
structures. We also present a modified method of recurrent plot to reveal this
kind of hidden sequence symmetry. The results may enable us to understand
parts of the relations between protein sequences and their tertiary structures,
i.e., how the primary sequence of a protein determines its tertiary structure.
| [
{
"created": "Tue, 14 Sep 2004 00:30:59 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Xu",
"Ruizhen",
""
],
[
"Huang",
"Yanzhao",
""
],
[
"Li",
"Mingfen",
""
],
[
"Chen",
"Hanlin",
""
],
[
"Xiao",
"Yi",
""
]
] | Proteins have regular tertiary structures but irregular amino acid sequences. This made it very difficult to decode the structural information in the protein sequences. Here we demonstrate that many small alpha protein domains have hidden sequence symmetries characteristic of their pseudo-symmetric tertiary structures. We also present a modified method of recurrent plot to reveal this kind of hidden sequence symmetry. The results may enable us to understand parts of the relations between protein sequences and their tertiary structures, i.e., how the primary sequence of a protein determines its tertiary structure.
1511.04523 | Ignacio Rodriguez-Brenes | Ignacio A Rodriguez-Brenes, Dominik Wodarz, Natalia L. Komarova | Cellular replication limits in the Luria-Delbr\"uck mutation model | Main text: 11 pages. Supplementary Information: 5 pages. Number of
figures: 2 | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Originally developed to elucidate the mechanisms of natural selection in
bacteria, the Luria-Delbr\"uck model assumed that cells are intrinsically
capable of dividing an unlimited number of times. This assumption however, is
not true for human somatic cells which undergo replicative senescence.
Replicative senescence is thought to act as a mechanism to protect against
cancer and the escape from it is a rate-limiting step in cancer progression.
Here we introduce a Luria-Delbr\"uck model that explicitly takes into account
cellular replication limits in the wild type cell population and models the
emergence of mutants that escape replicative senescence. We present results on
the mean, variance, distribution, and asymptotic behavior of the mutant
population in terms of three classical formulations of the problem. More
broadly the paper introduces the concept of incorporating replicative limits as
part of the Luria-Delbr\"uck mutational framework. Guidelines to extend the
theory to include other types of mutations and possible applications to the
modeling of telomere crisis and fluctuation analysis are also discussed.
| [
{
"created": "Sat, 14 Nov 2015 07:20:29 GMT",
"version": "v1"
}
] | 2015-11-17 | [
[
"Rodriguez-Brenes",
"Ignacio A",
""
],
[
"Wodarz",
"Dominik",
""
],
[
"Komarova",
"Natalia L.",
""
]
] | Originally developed to elucidate the mechanisms of natural selection in bacteria, the Luria-Delbr\"uck model assumed that cells are intrinsically capable of dividing an unlimited number of times. This assumption however, is not true for human somatic cells which undergo replicative senescence. Replicative senescence is thought to act as a mechanism to protect against cancer and the escape from it is a rate-limiting step in cancer progression. Here we introduce a Luria-Delbr\"uck model that explicitly takes into account cellular replication limits in the wild type cell population and models the emergence of mutants that escape replicative senescence. We present results on the mean, variance, distribution, and asymptotic behavior of the mutant population in terms of three classical formulations of the problem. More broadly the paper introduces the concept of incorporating replicative limits as part of the Luria-Delbr\"uck mutational framework. Guidelines to extend the theory to include other types of mutations and possible applications to the modeling of telomere crisis and fluctuation analysis are also discussed. |
2301.02661 | Yusheng Jiao | Yusheng Jiao, Brendan Colvert, Yi Man, Matthew J. McHenry, and Eva
Kanso | Evaluating Evasion Strategies in Zebrafish Larvae | 9 pages, 5 figures | null | 10.1073/pnas.2218909120 | null | q-bio.QM physics.flu-dyn q-bio.PE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An effective evasion strategy allows prey to survive encounters with
predators. Prey are generally thought to escape in a direction that is either
random or serves to maximize the minimum distance from the predator. Here we
introduce a comprehensive approach to determine the most likely evasion
strategy among multiple hypotheses and the role of biomechanical constraints on
the escape response of prey fish. Through a consideration of six strategies
with sensorimotor noise and previous kinematic measurements, our analysis shows
that zebrafish larvae generally escape in a direction orthogonal to the
predator's heading. By sensing only the predator's heading, this orthogonal
strategy maximizes the distance from fast-moving predators, and, when operating
within the biomechanical constraints of the escape response, it provides the
best predictions of prey behavior among all alternatives. This work
demonstrates a framework for resolving the strategic basis of evasion in
predator-prey interactions, which could be applied to a broad diversity of
animals.
| [
{
"created": "Thu, 5 Jan 2023 08:04:01 GMT",
"version": "v1"
}
] | 2023-02-22 | [
[
"Jiao",
"Yusheng",
""
],
[
"Colvert",
"Brendan",
""
],
[
"Man",
"Yi",
""
],
[
"McHenry",
"Matthew J.",
""
],
[
"Kanso",
"Eva",
""
]
] | An effective evasion strategy allows prey to survive encounters with predators. Prey are generally thought to escape in a direction that is either random or serves to maximize the minimum distance from the predator. Here we introduce a comprehensive approach to determine the most likely evasion strategy among multiple hypotheses and the role of biomechanical constraints on the escape response of prey fish. Through a consideration of six strategies with sensorimotor noise and previous kinematic measurements, our analysis shows that zebrafish larvae generally escape in a direction orthogonal to the predator's heading. By sensing only the predator's heading, this orthogonal strategy maximizes the distance from fast-moving predators, and, when operating within the biomechanical constraints of the escape response, it provides the best predictions of prey behavior among all alternatives. This work demonstrates a framework for resolving the strategic basis of evasion in predator-prey interactions, which could be applied to a broad diversity of animals.
1907.07515 | Sean Lawley | Sean D. Lawley and Jacob B. Madrid | A probabilistic approach to extreme statistics of Brownian escape times
in dimensions 1, 2, and 3 | 23 pages, 2 figures | Journal of Nonlinear Science, 2020 | 10.1007/s00332-019-09605-9 | null | q-bio.QM math.PR q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | First passage time (FPT) theory is often used to estimate timescales in
cellular and molecular biology. While the overwhelming majority of studies have
focused on the time it takes a given single Brownian searcher to reach a
target, cellular processes are instead often triggered by the arrival of the
first molecule out of many molecules. In these scenarios, the more relevant
timescale is the FPT of the first Brownian searcher to reach a target from a
large group of independent and identical Brownian searchers. Though the
searchers are identically distributed, one searcher will reach the target
before the others and will thus have the fastest FPT. This fastest FPT depends
on extremely rare events and its mean can be orders of magnitude faster than
the mean FPT of a given single searcher. In this paper, we use rigorous
probabilistic methods to study this fastest FPT. We determine the asymptotic
behavior of all the moments of this fastest FPT in the limit of many searchers
in a general class of two- and three-dimensional domains. We establish these
results by proving that the fastest searcher takes an almost direct path to the
target.
| [
{
"created": "Wed, 17 Jul 2019 13:42:17 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Jan 2020 04:16:13 GMT",
"version": "v2"
}
] | 2020-03-13 | [
[
"Lawley",
"Sean D.",
""
],
[
"Madrid",
"Jacob B.",
""
]
] | First passage time (FPT) theory is often used to estimate timescales in cellular and molecular biology. While the overwhelming majority of studies have focused on the time it takes a given single Brownian searcher to reach a target, cellular processes are instead often triggered by the arrival of the first molecule out of many molecules. In these scenarios, the more relevant timescale is the FPT of the first Brownian searcher to reach a target from a large group of independent and identical Brownian searchers. Though the searchers are identically distributed, one searcher will reach the target before the others and will thus have the fastest FPT. This fastest FPT depends on extremely rare events and its mean can be orders of magnitude faster than the mean FPT of a given single searcher. In this paper, we use rigorous probabilistic methods to study this fastest FPT. We determine the asymptotic behavior of all the moments of this fastest FPT in the limit of many searchers in a general class of two- and three-dimensional domains. We establish these results by proving that the fastest searcher takes an almost direct path to the target. |
1802.06827 | Nadya Morozova | Andrey Minarsky, Nadya Morozova, Robert Penner and Christophe Soule | Theory of Morphogenesis | null | JOURNAL OF COMPUTATIONAL BIOLOGY Volume 25, 2018,p.1 | 10.1089/cmb.2017.0150 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A model of morphogenesis is proposed based on seven explicit postulates. The
mathematical import and biological significance of the postulates are explored
and discussed.
| [
{
"created": "Mon, 19 Feb 2018 19:57:10 GMT",
"version": "v1"
}
] | 2018-02-21 | [
[
"Minarsky",
"Andrey",
""
],
[
"Morozova",
"Nadya",
""
],
[
"Penner",
"Robert",
""
],
[
"Soule",
"Christophe",
""
]
] | A model of morphogenesis is proposed based on seven explicit postulates. The mathematical import and biological significance of the postulates are explored and discussed. |