| id (string, 9-13 chars) | submitter (string, 4-48 chars) | authors (string, 4-9.62k chars) | title (string, 4-343 chars) | comments (string, 2-480 chars, nullable) | journal-ref (string, 9-309 chars, nullable) | doi (string, 12-138 chars, nullable) | report-no (string, 277 distinct values) | categories (string, 8-87 chars) | license (string, 9 distinct values) | orig_abstract (string, 27-3.76k chars) | versions (list, 1-15 items) | update_date (string, 10 chars) | authors_parsed (list, 1-147 items) | abstract (string, 24-3.75k chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
q-bio/0309030 | Amit Manwani | A. Manwani and C. Koch (California Institute of Technology, Pasadena) | Synaptic Transmission: An Information-Theoretic Perspective | 7 pages, 4 figures, NIPS97 proceedings: neuroscience. Originally
submitted to the neuro-sys archive which was never publicly announced (was
9809002) | Advances in Neural Information Processing Systems 10, Michael I.
Jordan, Michael J. Kearns and Sara Solla (eds.), 1997 | null | null | q-bio.NC | null | Here we analyze synaptic transmission from an information-theoretic
perspective. We derive closed-form expressions for the lower-bounds on the
capacity of a simple model of a cortical synapse under two explicit coding
paradigms. Under the ``signal estimation'' paradigm, we assume the signal to be
encoded in the mean firing rate of a Poisson neuron. The performance of an
optimal linear estimator of the signal then provides a lower bound on the
capacity for signal estimation. Under the ``signal detection'' paradigm, the
presence or absence of the signal has to be detected. Performance of the
optimal spike detector allows us to compute a lower bound on the capacity for
signal detection. We find that single synapses (for empirically measured
parameter values) transmit information poorly but significant improvement can
be achieved with a small amount of redundancy.
| [
{
"created": "Tue, 22 Sep 1998 20:07:14 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Manwani",
"A.",
"",
"California Institute of Technology, Pasadena"
],
[
"Koch",
"C.",
"",
"California Institute of Technology, Pasadena"
]
] | Here we analyze synaptic transmission from an information-theoretic perspective. We derive closed-form expressions for the lower-bounds on the capacity of a simple model of a cortical synapse under two explicit coding paradigms. Under the ``signal estimation'' paradigm, we assume the signal to be encoded in the mean firing rate of a Poisson neuron. The performance of an optimal linear estimator of the signal then provides a lower bound on the capacity for signal estimation. Under the ``signal detection'' paradigm, the presence or absence of the signal has to be detected. Performance of the optimal spike detector allows us to compute a lower bound on the capacity for signal detection. We find that single synapses (for empirically measured parameter values) transmit information poorly but significant improvement can be achieved with a small amount of redundancy. |
2011.08308 | Leah B. Shaw | Fangming Xu, Leah B. Shaw, Junping Shi, Romuald N. Lipcius | Impacts of density-dependent predation, cannibalism and fishing in a
stage-structured population model of the blue crab in Chesapeake Bay | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The blue crab (Callinectes sapidus) is a dominant ecological species of high
commercial value. Spawning stock and recruitment of the Chesapeake Bay
population declined by 80% in the 1990s. After severe management actions were
implemented in 2008, female abundance rebounded to pre-1994 levels and
stabilized. The stepwise decline in the early 1990s, followed by a consistently
low level of abundance for 15 y and a jump to high abundance after 2008,
suggested the existence of alternative stable states. Alternatively, high
fishing pressure combined with low recruitment in 1992 could have triggered a
proportional decline in the population, followed by a population increase in
2008 due to rigorous management actions that reduced fishing. We evaluated
these alternatives with a stage-structured dynamic population model using
ordinary differential equations. In addition, stock assessment models assume
that fishing and mortality are independent of density. Hence, we also
investigated the role of density-dependent predation, cannibalism and fishing
in blue crab population dynamics. We conclude that for the blue crab population
in Chesapeake Bay: (1) bistable positive states are not likely with
biologically realistic parameter values; (2) hyperbolic (depensatory) fishing
will not produce extinction at the range of population densities observed in
the bay; and (3) crabs can survive a higher fishing rate under the more
realistic assumption of sigmoidal (density-dependent) predation and cannibalism
than under constant (density-independent) predation and cannibalism. These
collectively indicate that the blue crab population in Chesapeake Bay is
resilient to a range of biotic and abiotic disturbances.
| [
{
"created": "Mon, 16 Nov 2020 22:13:30 GMT",
"version": "v1"
}
] | 2020-11-18 | [
[
"Xu",
"Fangming",
""
],
[
"Shaw",
"Leah B.",
""
],
[
"Shi",
"Junping",
""
],
[
"Lipcius",
"Romuald N.",
""
]
] | The blue crab (Callinectes sapidus) is a dominant ecological species of high commercial value. Spawning stock and recruitment of the Chesapeake Bay population declined by 80% in the 1990s. After severe management actions were implemented in 2008, female abundance rebounded to pre-1994 levels and stabilized. The stepwise decline in the early 1990s, followed by a consistently low level of abundance for 15 y and a jump to high abundance after 2008, suggested the existence of alternative stable states. Alternatively, high fishing pressure combined with low recruitment in 1992 could have triggered a proportional decline in the population, followed by a population increase in 2008 due to rigorous management actions that reduced fishing. We evaluated these alternatives with a stage-structured dynamic population model using ordinary differential equations. In addition, stock assessment models assume that fishing and mortality are independent of density. Hence, we also investigated the role of density-dependent predation, cannibalism and fishing in blue crab population dynamics. We conclude that for the blue crab population in Chesapeake Bay: (1) bistable positive states are not likely with biologically realistic parameter values; (2) hyperbolic (depensatory) fishing will not produce extinction at the range of population densities observed in the bay; and (3) crabs can survive a higher fishing rate under the more realistic assumption of sigmoidal (density-dependent) predation and cannibalism than under constant (density-independent) predation and cannibalism. These collectively indicate that the blue crab population in Chesapeake Bay is resilient to a range of biotic and abiotic disturbances. |
1911.10452 | Toan T. Nguyen | Ly Hai Nguyen, Tuyen Thanh Tran, Lien Ngoc Thi Truong, Hanh Hong Mai,
and Toan T. Nguyen | Overcharging of zinc ion in the structure of zinc finger protein is
needed for DNA binding stability | 33 pages, 9 figures, improved presentation | null | 10.1021/acs.biochem.9b01055 | null | q-bio.BM cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The zinc finger structure where a Zn2+ ion binds to 4 cysteine or histidine
amino acids in a tetrahedral structure is a very common motif of nucleic acid
binding proteins. The corresponding interaction model is present in 3% of the
genes of the human genome. As a result, the zinc finger has been shown to be
extremely useful in various therapeutic and research capacities, as well as in
biotechnology. In the stable configuration, the cysteine amino acids are
deprotonated and become negatively charged. This means the Zn2+ ion is
overscreened by 4 cysteine charges (overcharged). It is an open question
whether this overcharged configuration is also stable when such a negatively
charged zinc finger binds to a negatively charged DNA molecule. Using all-atom
molecular dynamics simulations of an androgen receptor protein dimer, extending
to the microsecond range, we investigate how the deprotonated state of cysteine
influences its structure, dynamics, and function in binding to DNA molecules.
Our results show that the deprotonated state of the cysteine residues is
essential for mechanical stabilization of the functional, folded conformation.
Not only does this state stabilize the protein structure, it also stabilizes
the protein-DNA binding complex. The differences in structural and energetic
properties of the two (sequence-identical) monomers are also investigated,
showing the strong influence of DNA on the structure of zinc fingers upon
complexation. Our results have potential impact on a better molecular
understanding of one of the most common classes of zinc fingers.
| [
{
"created": "Sun, 24 Nov 2019 03:30:34 GMT",
"version": "v1"
},
{
"created": "Sat, 22 Feb 2020 04:55:26 GMT",
"version": "v2"
}
] | 2020-02-25 | [
[
"Nguyen",
"Ly Hai",
""
],
[
"Tran",
"Tuyen Thanh",
""
],
[
"Truong",
"Lien Ngoc Thi",
""
],
[
"Mai",
"Hanh Hong",
""
],
[
"Nguyen",
"Toan T.",
""
]
] | The zinc finger structure where a Zn2+ ion binds to 4 cysteine or histidine amino acids in a tetrahedral structure is a very common motif of nucleic acid binding proteins. The corresponding interaction model is present in 3% of the genes of the human genome. As a result, the zinc finger has been shown to be extremely useful in various therapeutic and research capacities, as well as in biotechnology. In the stable configuration, the cysteine amino acids are deprotonated and become negatively charged. This means the Zn2+ ion is overscreened by 4 cysteine charges (overcharged). It is an open question whether this overcharged configuration is also stable when such a negatively charged zinc finger binds to a negatively charged DNA molecule. Using all-atom molecular dynamics simulations of an androgen receptor protein dimer, extending to the microsecond range, we investigate how the deprotonated state of cysteine influences its structure, dynamics, and function in binding to DNA molecules. Our results show that the deprotonated state of the cysteine residues is essential for mechanical stabilization of the functional, folded conformation. Not only does this state stabilize the protein structure, it also stabilizes the protein-DNA binding complex. The differences in structural and energetic properties of the two (sequence-identical) monomers are also investigated, showing the strong influence of DNA on the structure of zinc fingers upon complexation. Our results have potential impact on a better molecular understanding of one of the most common classes of zinc fingers. |
1505.00775 | Danko Nikolic | Danko Nikoli\'c | Only T3-AI can reach human-level intelligence: A variety argument | 18 pages, 6800 words, no figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recently introduced theory of practopoiesis offers an account of how
adaptive intelligent systems are organized. According to that theory, biological
agents adapt at three levels of organization and this structure applies also to
our brains. This is referred to as tri-traversal theory of the organization of
mind or for short, a T3-structure. To implement a similar T3-organization in an
artificially intelligent agent, it is necessary to have multiple policies, as
usually used as a concept in the theory of reinforcement learning. These
policies have to form a hierarchy. We define adaptive practopoietic systems in
terms of hierarchy of policies and calculate whether the total variety of
behavior required by real-life conditions of an adult human can be
satisfactorily accounted for by a traditional approach to artificial
intelligence based on T2-agents, or whether a T3-agent is needed instead. We
conclude that the complexity of real life can be dealt with appropriately only
by a T3-agent.
| [
{
"created": "Sat, 2 May 2015 12:58:02 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Jul 2015 09:29:50 GMT",
"version": "v2"
}
] | 2015-07-17 | [
[
"Nikolić",
"Danko",
""
]
] | The recently introduced theory of practopoiesis offers an account of how adaptive intelligent systems are organized. According to that theory, biological agents adapt at three levels of organization and this structure applies also to our brains. This is referred to as tri-traversal theory of the organization of mind or for short, a T3-structure. To implement a similar T3-organization in an artificially intelligent agent, it is necessary to have multiple policies, as usually used as a concept in the theory of reinforcement learning. These policies have to form a hierarchy. We define adaptive practopoietic systems in terms of hierarchy of policies and calculate whether the total variety of behavior required by real-life conditions of an adult human can be satisfactorily accounted for by a traditional approach to artificial intelligence based on T2-agents, or whether a T3-agent is needed instead. We conclude that the complexity of real life can be dealt with appropriately only by a T3-agent. |
1501.07854 | Subhadip Raychaudhuri | Subhadip Raychaudhuri | Kinetic Monte Carlo study of the type1/type 2 choice in apoptosis
elucidates selective killing of cancer cells under death ligand induction | 31 pages, 11 figures | OJApo 4 (2015) 22-39 | 10.4236/ojapo.2015.41003 | null | q-bio.CB physics.bio-ph | http://creativecommons.org/licenses/by/3.0/ | Death ligand mediated apoptotic activation is a mode of programmed cell death
that is widely used in cellular and physiological situations. Interest in
studying death ligand induced apoptosis has increased due to the promising role
of recombinant soluble forms of death ligands (mainly recombinant TRAIL) in
anti-cancer therapy. A clear elucidation of how death ligands activate the type
1 and type 2 apoptotic pathways in healthy and cancer cells may help develop
better chemotherapeutic strategies. In this work, we use kinetic Monte Carlo
simulations to address the problem of type 1/ type 2 choice in death ligand
mediated apoptosis of cancer cells. Our study provides insights into the
activation of membrane proximal death module that results from complex
interplay between death and decoy receptors. Relative abundance of death and
decoy receptors was shown to be a key parameter for activation of the initiator
caspases in the membrane module. Increased concentration of death ligands
frequently increased the type 1 activation fraction in cancer cells, and, in
certain cases changes the signaling phenotype from type 2 to type 1. Results of
this study also indicate that inherent differences between cancer and healthy
cells, such as in the membrane module, may allow robust activation of cancer
cell apoptosis by death ligand induction. At the same time, large cell-to-cell
variability through the type 2 pathway was shown to provide protection for
healthy cells. Such elucidation of selective activation of apoptosis in cancer
cells addresses a key question in cancer biology and cancer therapy.
| [
{
"created": "Fri, 30 Jan 2015 17:29:26 GMT",
"version": "v1"
}
] | 2015-02-02 | [
[
"Raychaudhuri",
"Subhadip",
""
]
] | Death ligand mediated apoptotic activation is a mode of programmed cell death that is widely used in cellular and physiological situations. Interest in studying death ligand induced apoptosis has increased due to the promising role of recombinant soluble forms of death ligands (mainly recombinant TRAIL) in anti-cancer therapy. A clear elucidation of how death ligands activate the type 1 and type 2 apoptotic pathways in healthy and cancer cells may help develop better chemotherapeutic strategies. In this work, we use kinetic Monte Carlo simulations to address the problem of type 1/ type 2 choice in death ligand mediated apoptosis of cancer cells. Our study provides insights into the activation of membrane proximal death module that results from complex interplay between death and decoy receptors. Relative abundance of death and decoy receptors was shown to be a key parameter for activation of the initiator caspases in the membrane module. Increased concentration of death ligands frequently increased the type 1 activation fraction in cancer cells, and, in certain cases changes the signaling phenotype from type 2 to type 1. Results of this study also indicate that inherent differences between cancer and healthy cells, such as in the membrane module, may allow robust activation of cancer cell apoptosis by death ligand induction. At the same time, large cell-to-cell variability through the type 2 pathway was shown to provide protection for healthy cells. Such elucidation of selective activation of apoptosis in cancer cells addresses a key question in cancer biology and cancer therapy. |
1705.07441 | Alireza Alemi | Alireza Alemi and Alia Abbara | Exponential Capacity in an Autoencoder Neural Network with a Hidden
Layer | 3 figures, 14 pages | null | null | null | q-bio.NC cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A fundamental aspect of limitations in learning any computation in neural
architectures is characterizing their optimal capacities.
An important, widely-used neural architecture is known as autoencoders where
the network reconstructs the input at the output layer via a representation at
a hidden layer.
Even though capacities of several neural architectures have been addressed
using statistical physics methods, the capacity of autoencoder neural networks
is not well-explored.
Here, we analytically show that an autoencoder network of binary neurons with
a hidden layer can achieve a capacity that grows exponentially with network
size.
The network has fixed random weights encoding a set of dense input patterns
into a dense, expanded (or \emph{overcomplete}) hidden layer representation. A
set of learnable weights decodes the input patterns at the output layer. We
perform a mean-field approximation of the model to reduce the model to a
perceptron problem with an input-output dependency. Carrying out Gardner's
\emph{replica} calculation, we show that as the expansion ratio, defined as the
number of hidden units over the number of input units, increases, the
autoencoding capacity grows exponentially even when the sparseness or the
coding level of the hidden layer representation is changed. The
replica-symmetric solution is locally stable and is in good agreement with
simulation results obtained using a local learning rule. In addition, the
degree of symmetry between the encoding and decoding weights monotonically
increases with the expansion ratio.
| [
{
"created": "Sun, 21 May 2017 12:13:42 GMT",
"version": "v1"
}
] | 2017-05-23 | [
[
"Alemi",
"Alireza",
""
],
[
"Abbara",
"Alia",
""
]
] | A fundamental aspect of limitations in learning any computation in neural architectures is characterizing their optimal capacities. An important, widely-used neural architecture is known as autoencoders where the network reconstructs the input at the output layer via a representation at a hidden layer. Even though capacities of several neural architectures have been addressed using statistical physics methods, the capacity of autoencoder neural networks is not well-explored. Here, we analytically show that an autoencoder network of binary neurons with a hidden layer can achieve a capacity that grows exponentially with network size. The network has fixed random weights encoding a set of dense input patterns into a dense, expanded (or \emph{overcomplete}) hidden layer representation. A set of learnable weights decodes the input patterns at the output layer. We perform a mean-field approximation of the model to reduce the model to a perceptron problem with an input-output dependency. Carrying out Gardner's \emph{replica} calculation, we show that as the expansion ratio, defined as the number of hidden units over the number of input units, increases, the autoencoding capacity grows exponentially even when the sparseness or the coding level of the hidden layer representation is changed. The replica-symmetric solution is locally stable and is in good agreement with simulation results obtained using a local learning rule. In addition, the degree of symmetry between the encoding and decoding weights monotonically increases with the expansion ratio. |
2207.07274 | Aaron Wang | Aaron Wang, Feng Li, Samantha Chiang, Jennifer Fulcher, Otto Yang,
David Wong, Fang Wei | Machine Learning Prediction of COVID-19 Severity Levels From Salivaomics
Data | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | The clinical spectrum of severe acute respiratory syndrome coronavirus 2
(SARS-CoV-2), the strain of coronavirus that caused the COVID-19 pandemic, is
broad, extending from asymptomatic infection to severe immunopulmonary
reactions that, if not categorized properly, may be life-threatening.
Researchers rate COVID-19 patients on a scale from 1 to 8 according to the
severity level of COVID-19, 1 being healthy and 8 being extremely sick, based
on a multitude of factors including number of clinic visits, days since the
first sign of symptoms, and more. However, there are two issues with the
current state of severity level designation. Firstly, there exists variation
among researchers in determining these patient scores, which may lead to
improper treatment. Secondly, researchers use a variety of metrics to determine
patient severity level, including metrics involving plasma collection that
require invasive procedures. This project aims to remedy both issues by
introducing a machine learning framework that unifies severity level
designations based on noninvasive saliva biomarkers. Our results show that we
can successfully use machine learning on salivaomics data to predict the
severity level of COVID-19 patients, indicating the presence of viral load
using saliva biomarkers.
| [
{
"created": "Fri, 15 Jul 2022 03:45:40 GMT",
"version": "v1"
}
] | 2022-07-18 | [
[
"Wang",
"Aaron",
""
],
[
"Li",
"Feng",
""
],
[
"Chiang",
"Samantha",
""
],
[
"Fulcher",
"Jennifer",
""
],
[
"Yang",
"Otto",
""
],
[
"Wong",
"David",
""
],
[
"Wei",
"Fang",
""
]
] | The clinical spectrum of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the strain of coronavirus that caused the COVID-19 pandemic, is broad, extending from asymptomatic infection to severe immunopulmonary reactions that, if not categorized properly, may be life-threatening. Researchers rate COVID-19 patients on a scale from 1 to 8 according to the severity level of COVID-19, 1 being healthy and 8 being extremely sick, based on a multitude of factors including number of clinic visits, days since the first sign of symptoms, and more. However, there are two issues with the current state of severity level designation. Firstly, there exists variation among researchers in determining these patient scores, which may lead to improper treatment. Secondly, researchers use a variety of metrics to determine patient severity level, including metrics involving plasma collection that require invasive procedures. This project aims to remedy both issues by introducing a machine learning framework that unifies severity level designations based on noninvasive saliva biomarkers. Our results show that we can successfully use machine learning on salivaomics data to predict the severity level of COVID-19 patients, indicating the presence of viral load using saliva biomarkers. |
1809.09700 | Audrey Wong-Kee-You MA | Audrey M. B. Wong-Kee-You, John K. Tsotsos, and Scott A. Adler | Development of spatial suppression surrounding the focus of visual
attention | null | null | null | null | q-bio.NC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The capacity to filter out irrelevant information from our environment is
critical to efficient processing. Yet, during development, when building a
knowledge base of the world is occurring, the ability to selectively allocate
attentional resources is limited (e.g., Amso & Scerif, 2015). In adulthood,
research has demonstrated that surrounding the spatial location of attentional
focus is a suppressive field, resulting from top-down attention promoting the
processing of relevant stimuli and inhibiting surrounding distractors (e.g.,
Hopf et al., 2006). It is not fully known, however, whether this phenomenon
manifests in development. In the current study, we examined whether spatial
suppression surrounding the focus of visual attention is exhibited in
developmental age groups. Participants between 12 and 27 years of age exhibited
spatial suppression surrounding their focus of visual attention. Their accuracy
increased as a function of the separation distance between a spatially cued
(and attended) target and a second target, suggesting that a ring of
suppression surrounded the attended target. When a central cue was instead
presented and therefore attention was no longer spatially cued, surround
suppression was not observed, indicating that our initial findings of
suppression were indeed related to the focus of attention. Attentional surround
suppression was not observed in 8- to 11-years-olds, even with a longer spatial
cue presentation time, demonstrating that the lack of the effect at these ages
is not due to slowed attentional feedback processes. Our findings demonstrate
that top-down attentional processes are still immature until approximately 12
years of age, and that they continue to be refined throughout adolescence,
converging well with previous research on attentional development.
| [
{
"created": "Mon, 17 Sep 2018 01:35:56 GMT",
"version": "v1"
}
] | 2018-09-27 | [
[
"Wong-Kee-You",
"Audrey M. B.",
""
],
[
"Tsotsos",
"John K.",
""
],
[
"Adler",
"Scott A.",
""
]
] | The capacity to filter out irrelevant information from our environment is critical to efficient processing. Yet, during development, when building a knowledge base of the world is occurring, the ability to selectively allocate attentional resources is limited (e.g., Amso & Scerif, 2015). In adulthood, research has demonstrated that surrounding the spatial location of attentional focus is a suppressive field, resulting from top-down attention promoting the processing of relevant stimuli and inhibiting surrounding distractors (e.g., Hopf et al., 2006). It is not fully known, however, whether this phenomenon manifests in development. In the current study, we examined whether spatial suppression surrounding the focus of visual attention is exhibited in developmental age groups. Participants between 12 and 27 years of age exhibited spatial suppression surrounding their focus of visual attention. Their accuracy increased as a function of the separation distance between a spatially cued (and attended) target and a second target, suggesting that a ring of suppression surrounded the attended target. When a central cue was instead presented and therefore attention was no longer spatially cued, surround suppression was not observed, indicating that our initial findings of suppression were indeed related to the focus of attention. Attentional surround suppression was not observed in 8- to 11-years-olds, even with a longer spatial cue presentation time, demonstrating that the lack of the effect at these ages is not due to slowed attentional feedback processes. Our findings demonstrate that top-down attentional processes are still immature until approximately 12 years of age, and that they continue to be refined throughout adolescence, converging well with previous research on attentional development. |
1410.3951 | Eleni Katifori | Carl D. Modes, Marcelo O. Magnasco, Eleni Katifori | Extracting Hidden Hierarchies in 3D Distribution Networks | null | Phys. Rev. X 6, 031009 (2016) | 10.1103/PhysRevX.6.031009 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural and man-made transport webs are frequently dominated by dense sets of
nested cycles. The architecture of these networks, as defined by the topology
and edge weights, determines how efficiently the networks perform their
function. Yet, the set of tools that can characterize such a weighted
cycle-rich architecture in a physically relevant, mathematically compact way is
sparse. In order to fill this void, we have developed a new algorithm that
rests on an abstraction of the physical `tiling' in the case of a two
dimensional network to an effective tiling of an abstract surface in space that
the network may be thought to sit in. Generically these abstract surfaces are
richer than the flat plane and as a result there are now two families of
fundamental units that may aggregate upon cutting weakest links -- the
plaquettes of the tiling and the longer `topological' cycles associated with
the abstract surface itself. Upon sequential removal of the weakest links, as
determined by the edge weight, neighboring plaquettes merge and a tree
characterizing this merging process results. The properties of this
characteristic tree can provide the physical and topological data required to
describe the architecture of the network and to build physical models. The new
algorithm can be used for automated phenotypic characterization of any weighted
network whose structure is dominated by cycles, such as mammalian vasculature
in the organs, the root networks of clonal colonies like quaking aspen, or the
force networks in jammed granular matter.
| [
{
"created": "Wed, 15 Oct 2014 07:41:04 GMT",
"version": "v1"
}
] | 2016-07-27 | [
[
"Modes",
"Carl D.",
""
],
[
"Magnasco",
"Marcelo O.",
""
],
[
"Katifori",
"Eleni",
""
]
] | Natural and man-made transport webs are frequently dominated by dense sets of nested cycles. The architecture of these networks, as defined by the topology and edge weights, determines how efficiently the networks perform their function. Yet, the set of tools that can characterize such a weighted cycle-rich architecture in a physically relevant, mathematically compact way is sparse. In order to fill this void, we have developed a new algorithm that rests on an abstraction of the physical `tiling' in the case of a two dimensional network to an effective tiling of an abstract surface in space that the network may be thought to sit in. Generically these abstract surfaces are richer than the flat plane and as a result there are now two families of fundamental units that may aggregate upon cutting weakest links -- the plaquettes of the tiling and the longer `topological' cycles associated with the abstract surface itself. Upon sequential removal of the weakest links, as determined by the edge weight, neighboring plaquettes merge and a tree characterizing this merging process results. The properties of this characteristic tree can provide the physical and topological data required to describe the architecture of the network and to build physical models. The new algorithm can be used for automated phenotypic characterization of any weighted network whose structure is dominated by cycles, such as mammalian vasculature in the organs, the root networks of clonal colonies like quaking aspen, or the force networks in jammed granular matter. |
2310.19202 | Xiong Xiong | Xiong Xiong, Ying Wang, Tianyuan Song, Jinguo Huang, Guixia Kang | Improved Motor Imagery Classification Using Adaptive Spatial Filters
Based on Particle Swarm Optimization Algorithm | 25 pages, 8 figures | null | null | null | q-bio.QM cs.LG eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a typical self-paced brain-computer interface (BCI) system, the motor
imagery (MI) BCI has been widely applied in fields such as robot control,
stroke rehabilitation, and assistance for patients with stroke or spinal cord
injury. Many studies have focused on the traditional spatial filters obtained
through the common spatial pattern (CSP) method. However, the CSP method can
only obtain fixed spatial filters for specific input signals. Besides, CSP
method only focuses on the variance difference of two types of
electroencephalogram (EEG) signals, so the decoding ability of EEG signals is
limited. To obtain more effective spatial filters for better extraction of
spatial features that can improve classification to MI-EEG, this paper proposes
an adaptive spatial filter solving method based on particle swarm optimization
algorithm (PSO). A training and testing framework based on filter bank and
spatial filters (FBCSP-ASP) is designed for MI EEG signal classification.
Comparative experiments are conducted on two public datasets (2a and 2b) from
BCI competition IV, which show the outstanding average recognition accuracy of
FBCSP-ASP. The proposed method has achieved significant performance improvement
on MI-BCI. The classification accuracy of the proposed method has reached
74.61% and 81.19% on datasets 2a and 2b, respectively. Compared with the
baseline algorithm (FBCSP), the proposed algorithm improves 11.44% and 7.11% on
two datasets respectively. Furthermore, the analysis based on mutual
information, t-SNE and Shapley values further proves that ASP features have
excellent decoding ability for MI-EEG signals, and explains the improvement of
classification performance by the introduction of ASP features.
| [
{
"created": "Sun, 29 Oct 2023 23:53:37 GMT",
"version": "v1"
}
] | 2023-10-31 | [
[
"Xiong",
"Xiong",
""
],
[
"Wang",
"Ying",
""
],
[
"Song",
"Tianyuan",
""
],
[
"Huang",
"Jinguo",
""
],
[
"Kang",
"Guixia",
""
]
] | As a typical self-paced brain-computer interface (BCI) system, the motor imagery (MI) BCI has been widely applied in fields such as robot control, stroke rehabilitation, and assistance for patients with stroke or spinal cord injury. Many studies have focused on the traditional spatial filters obtained through the common spatial pattern (CSP) method. However, the CSP method can only obtain fixed spatial filters for specific input signals. Besides, CSP method only focuses on the variance difference of two types of electroencephalogram (EEG) signals, so the decoding ability of EEG signals is limited. To obtain more effective spatial filters for better extraction of spatial features that can improve classification to MI-EEG, this paper proposes an adaptive spatial filter solving method based on particle swarm optimization algorithm (PSO). A training and testing framework based on filter bank and spatial filters (FBCSP-ASP) is designed for MI EEG signal classification. Comparative experiments are conducted on two public datasets (2a and 2b) from BCI competition IV, which show the outstanding average recognition accuracy of FBCSP-ASP. The proposed method has achieved significant performance improvement on MI-BCI. The classification accuracy of the proposed method has reached 74.61% and 81.19% on datasets 2a and 2b, respectively. Compared with the baseline algorithm (FBCSP), the proposed algorithm improves 11.44% and 7.11% on two datasets respectively. Furthermore, the analysis based on mutual information, t-SNE and Shapley values further proves that ASP features have excellent decoding ability for MI-EEG signals, and explains the improvement of classification performance by the introduction of ASP features. |
2001.10614 | Arti Ahluwalia Dr | Daniele Poli, Giorgio Mattei, Nadia Ucciferri and Arti Ahluwalia | An integrated in vitro in silico approach for silver nanoparticle
dosimetry in cell cultures | 20 pages, including Supplementary Materials | null | null | null | q-bio.TO q-bio.QM | http://creativecommons.org/publicdomain/zero/1.0/ | Potential human and environmental hazards resulting from the exposure of
living organisms to silver nanoparticles (Ag NPs) have been the subject of
intensive discussion in the last decade. Despite the growing use of Ag NPs in
biomedical applications, a quantification of the toxic effects as a function of
the total silver mass reaching cells (namely, target cell dose) is still
needed. To provide a more accurate dose-response analysis, we propose a novel
integrated approach combining well-established computational and experimental
methodologies. We first used the particokinetic model (ISD3) proposed by Thomas
and colleagues (2018) for providing experimental validation of computed Ag NP
sedimentation in static-cuvette experiments. After validation, ISD3 was
employed to predict the total mass of silver reaching human endothelial cells
and hepatocytes cultured in 96 well plates. Cell viability measured after 24h
of culture was then related to this target cell dose. Our results show that the
dose perceived by the cell monolayer after 24 h of exposure is around 85% lower
than the administered nominal media concentration. Therefore, accurate
dosimetry considering particle characteristics and experimental conditions
(e.g., time, size and shape of wells) should be employed for better
interpreting effects induced by the amount of silver reaching cells.
| [
{
"created": "Tue, 7 Jan 2020 11:43:40 GMT",
"version": "v1"
}
] | 2020-01-30 | [
[
"Poli1",
"Daniele",
""
],
[
"Mattei",
"Giorgio",
""
],
[
"Ucciferri",
"Nadia",
""
],
[
"Ahluwalia",
"Arti",
""
]
] | Potential human and environmental hazards resulting from the exposure of living organisms to silver nanoparticles (Ag NPs) have been the subject of intensive discussion in the last decade. Despite the growing use of Ag NPs in biomedical applications, a quantification of the toxic effects as a function of the total silver mass reaching cells (namely, target cell dose) is still needed. To provide a more accurate dose-response analysis, we propose a novel integrated approach combining well-established computational and experimental methodologies. We first used the particokinetic model (ISD3) proposed by Thomas and colleagues (2018) for providing experimental validation of computed Ag NP sedimentation in static-cuvette experiments. After validation, ISD3 was employed to predict the total mass of silver reaching human endothelial cells and hepatocytes cultured in 96 well plates. Cell viability measured after 24h of culture was then related to this target cell dose. Our results show that the dose perceived by the cell monolayer after 24 h of exposure is around 85% lower than the administered nominal media concentration. Therefore, accurate dosimetry considering particle characteristics and experimental conditions (e.g., time, size and shape of wells) should be employed for better interpreting effects induced by the amount of silver reaching cells. |
1911.04779 | Annick Lesne | Julien Mozziconacci (LPTMC, MNHN), M\'elody Merle (LPTMC), Annick
Lesne (LPTMC, IGMM) | The 3D genome shapes the regulatory code of developmental genes | Journal of Molecular Biology, Elsevier, 2019 | null | 10.1016/j.jmb.2019.10.017 | null | q-bio.GN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We revisit the notion of gene regulatory code in embryonic development in the
light of recent findings about genome spatial organisation. By analogy with the
genetic code, we posit that the concept of code can only be used if the
corresponding adaptor can clearly be identified. An adaptor is here defined as
an intermediary physical entity mediating the correspondence between codewords
and objects in a gratuitous and evolvable way. In the context of the gene
regulatory code, the encoded objects are the gene expression levels, while the
concentrations of specific transcription factors in the cell nucleus provide
the codewords. The notion of code is meaningful in the absence of direct
physicochemical relationships between the objects and the codewords, when the
mediation by an adaptor is required. We propose that a plausible adaptor for
this code is the gene domain, that is, the genome segment delimited by
topological insulators and comprising the gene and its enhancer regulatory
sequences. We review recent evidence, based on genome-wide chromosome
conformation capture experiments, showing that preferential contact domains
found in metazoan genomes are the physical traces of gene domains. Accordingly,
genome 3D folding plays a direct role in shaping the developmental gene
regulatory code.
| [
{
"created": "Tue, 12 Nov 2019 10:36:56 GMT",
"version": "v1"
}
] | 2019-11-13 | [
[
"Mozziconacci",
"Julien",
"",
"LPTMC, MNHN"
],
[
"Merle",
"Mélody",
"",
"LPTMC"
],
[
"Lesne",
"Annick",
"",
"LPTMC, IGMM"
]
] | We revisit the notion of gene regulatory code in embryonic development in the light of recent findings about genome spatial organisation. By analogy with the genetic code, we posit that the concept of code can only be used if the corresponding adaptor can clearly be identified. An adaptor is here defined as an intermediary physical entity mediating the correspondence between codewords and objects in a gratuitous and evolvable way. In the context of the gene regulatory code, the encoded objects are the gene expression levels, while the concentrations of specific transcription factors in the cell nucleus provide the codewords. The notion of code is meaningful in the absence of direct physicochemical relationships between the objects and the codewords, when the mediation by an adaptor is required. We propose that a plausible adaptor for this code is the gene domain, that is, the genome segment delimited by topological insulators and comprising the gene and its enhancer regulatory sequences. We review recent evidence, based on genome-wide chromosome conformation capture experiments, showing that preferential contact domains found in metazoan genomes are the physical traces of gene domains. Accordingly, genome 3D folding plays a direct role in shaping the developmental gene regulatory code. |
1202.1223 | Francesc Rossell\'o | Arnau Mir, Francesc Rossello, Lucia Rotger | A new balance index for phylogenetic trees | 24 pages, 2 figures, preliminary version presented at the JBI 2012 | Math. Biosc. 241 (2013) 125-136 | 10.1016/j.mbs.2012.10.005 | null | q-bio.PE cs.DM q-bio.QM | http://creativecommons.org/licenses/publicdomain/ | Several indices that measure the degree of balance of a rooted phylogenetic
tree have been proposed so far in the literature. In this work we define and
study a new index of this kind, which we call the total cophenetic index: the
sum, over all pairs of different leaves, of the depth of their least common
ancestor. This index makes sense for arbitrary trees, can be computed in linear
time and it has a larger range of values and a greater resolution power than
other indices like Colless' or Sackin's. We compute its maximum and minimum
values for arbitrary and binary trees, as well as exact formulas for its
expected value for binary trees under the Yule and the uniform models of
evolution. As a byproduct of this study, we obtain an exact formula for the
expected value of the Sackin index under the uniform model, a result that seems
to be new in the literature.
| [
{
"created": "Mon, 6 Feb 2012 17:47:01 GMT",
"version": "v1"
}
] | 2014-02-11 | [
[
"Mir",
"Arnau",
""
],
[
"Rossello",
"Francesc",
""
],
[
"Rotger",
"Lucia",
""
]
] | Several indices that measure the degree of balance of a rooted phylogenetic tree have been proposed so far in the literature. In this work we define and study a new index of this kind, which we call the total cophenetic index: the sum, over all pairs of different leaves, of the depth of their least common ancestor. This index makes sense for arbitrary trees, can be computed in linear time and it has a larger range of values and a greater resolution power than other indices like Colless' or Sackin's. We compute its maximum and minimum values for arbitrary and binary trees, as well as exact formulas for its expected value for binary trees under the Yule and the uniform models of evolution. As a byproduct of this study, we obtain an exact formula for the expected value of the Sackin index under the uniform model, a result that seems to be new in the literature. |
0901.1851 | Stefan Auer SA | Stefan Auer, Filip Meersman, Christopher M. Dobson, Michele
Vendruscolo | Generic Mechanism of Emergence of Amyloid Protofilaments from Disordered
Oligomeric aggregates | 14 pages, 4 figures | PLoS Comput Biol 4(11): e1000222 (2008) | 10.1371/journal.pcbi.1000222 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The presence of oligomeric aggregates, which is often observed during the
process of amyloid formation, has recently attracted much attention since it
has been associated with neurodegenerative conditions such as Alzheimer's and
Parkinson's diseases. We provide a description of a sequence-independent
mechanism by which polypeptide chains aggregate by forming metastable
oligomeric intermediate states prior to converting into fibrillar structures.
Our results illustrate how the formation of ordered arrays of hydrogen bonds
drives the formation of beta-sheets within the disordered oligomeric aggregates
that form early under the effect of hydrophobic forces. Initially individual
beta-sheets form with random orientations, which subsequently tend to align
into protofilaments as their lengths increase. Our results suggest that
amyloid aggregation represents an example of the Ostwald step rule of first
order phase transitions by showing that ordered cross-beta structures emerge
preferentially from disordered compact dynamical intermediate assemblies.
| [
{
"created": "Tue, 13 Jan 2009 18:23:18 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Jan 2009 09:48:36 GMT",
"version": "v2"
}
] | 2009-01-14 | [
[
"Auer",
"Stefan",
""
],
[
"Meersman",
"Filip",
""
],
[
"Dobson",
"Christopher M.",
""
],
[
"Vendruscolo",
"Michele",
""
]
] | The presence of oligomeric aggregates, which is often observed during the process of amyloid formation, has recently attracted much attention since it has been associated with neurodegenerative conditions such as Alzheimer's and Parkinson's diseases. We provide a description of a sequence-independent mechanism by which polypeptide chains aggregate by forming metastable oligomeric intermediate states prior to converting into fibrillar structures. Our results illustrate how the formation of ordered arrays of hydrogen bonds drives the formation of beta-sheets within the disordered oligomeric aggregates that form early under the effect of hydrophobic forces. Initially individual beta-sheets form with random orientations, which subsequently tend to align into protofilaments as their lengths increase. Our results suggest that amyloid aggregation represents an example of the Ostwald step rule of first order phase transitions by showing that ordered cross-beta structures emerge preferentially from disordered compact dynamical intermediate assemblies. |
2402.11657 | Marius Brusselmans | Marius Brusselmans, Luiz Max Carvalho, Samuel L. Hong, Jiansi Gao,
Frederick A. Matsen IV, Andrew Rambaut, Philippe Lemey, Marc A. Suchard,
Gytis Dudas, and Guy Baele | On the importance of assessing topological convergence in Bayesian
phylogenetic inference | null | null | null | null | q-bio.PE q-bio.GN q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | Modern phylogenetics research is often performed within a Bayesian framework,
using sampling algorithms such as Markov chain Monte Carlo (MCMC) to
approximate the posterior distribution. These algorithms require careful
evaluation of the quality of the generated samples. Within the field of
phylogenetics, one frequently adopted diagnostic approach is to evaluate the
effective sample size (ESS) and to investigate trace graphs of the sampled
parameters. A major limitation of these approaches is that they are developed
for continuous parameters and therefore incompatible with a crucial parameter
in these inferences: the tree topology. Several recent advancements have aimed
at extending these diagnostics to topological space. In this short reflection
paper, we present a case study illustrating how these topological diagnostics
can contain information not found in standard diagnostics, and how decisions
regarding which of these diagnostics to compute can impact inferences regarding
MCMC convergence and mixing. Given the major importance of detecting
convergence and mixing issues in Bayesian phylogenetic analyses, the lack of a
unified approach to this problem warrants further action, especially now that
additional tools are becoming available to researchers.
| [
{
"created": "Sun, 18 Feb 2024 17:28:15 GMT",
"version": "v1"
}
] | 2024-02-20 | [
[
"Brusselmans",
"Marius",
""
],
[
"Carvalho",
"Luiz Max",
""
],
[
"Hong",
"Samuel L.",
""
],
[
"Gao",
"Jiansi",
""
],
[
"Matsen",
"Frederick A.",
"IV"
],
[
"Rambaut",
"Andrew",
""
],
[
"Lemey",
"Philippe",
""
],
[
"Suchard",
"Marc A.",
""
],
[
"Dudas",
"Gytis",
""
],
[
"Baele",
"Guy",
""
]
] | Modern phylogenetics research is often performed within a Bayesian framework, using sampling algorithms such as Markov chain Monte Carlo (MCMC) to approximate the posterior distribution. These algorithms require careful evaluation of the quality of the generated samples. Within the field of phylogenetics, one frequently adopted diagnostic approach is to evaluate the effective sample size (ESS) and to investigate trace graphs of the sampled parameters. A major limitation of these approaches is that they are developed for continuous parameters and therefore incompatible with a crucial parameter in these inferences: the tree topology. Several recent advancements have aimed at extending these diagnostics to topological space. In this short reflection paper, we present a case study illustrating how these topological diagnostics can contain information not found in standard diagnostics, and how decisions regarding which of these diagnostics to compute can impact inferences regarding MCMC convergence and mixing. Given the major importance of detecting convergence and mixing issues in Bayesian phylogenetic analyses, the lack of a unified approach to this problem warrants further action, especially now that additional tools are becoming available to researchers. |
1407.3499 | Anjan Dasgupta Prof. | Sufi O Raja and Anjan Kr Dasgupta | Instant Response of Live HeLa Cells to Static Magnetic Field and Its
Magnetic Adaptation | 17 pages 7 figures | null | null | null | q-bio.CB physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We report Static Magnetic Field (SMF) induced altered sub-cellular streaming,
which persists even after withdrawal of the field. The observation is
statistically validated by differential fluorescence recovery after
photo-bleaching (FRAP) studies in presence and absence of SMF, recovery rate
being higher in presence of SMF. This instant magneto-sensing by live cells can
be explained by inherent diamagnetic susceptibility of cells and alternatively
by spin recombination, e.g., by the radical pair mechanism. These arguments are
however insufficient to explain the retention of the SMF effect even after
field withdrawal. Typically, a relaxation time scale at least of the order of
minutes is observed. This long duration of the SMF effect can be explained
postulating a field induced coherence that is followed by decoherence after the
field withdrawal. A related observation is the emergence of enhanced magnetic
susceptibility of cells after magnetic pre-incubation. This implies onset of a
new spin equilibrium state as a result of prolonged SMF incubation. Lastly, we
discuss how such altered spin states may be translated into a cellular signal
that leads to altered sub-cellular streaming, along with the probable
intracellular machineries for this translation.
| [
{
"created": "Sun, 13 Jul 2014 19:02:08 GMT",
"version": "v1"
}
] | 2014-07-15 | [
[
"Raja",
"Sufi O",
""
],
[
"Dasgupta",
"Anjan Kr",
""
]
] | We report Static Magnetic Field (SMF) induced altered sub-cellular streaming, which persists even after withdrawal of the field. The observation is statistically validated by differential fluorescence recovery after photo-bleaching (FRAP) studies in presence and absence of SMF, recovery rate being higher in presence of SMF. This instant magneto-sensing by live cells can be explained by inherent diamagnetic susceptibility of cells and alternatively by spin recombination, e.g., by the radical pair mechanism. These arguments are however insufficient to explain the retention of the SMF effect even after field withdrawal. Typically, a relaxation time scale at least of the order of minutes is observed. This long duration of the SMF effect can be explained postulating a field induced coherence that is followed by decoherence after the field withdrawal. A related observation is the emergence of enhanced magnetic susceptibility of cells after magnetic pre-incubation. This implies onset of a new spin equilibrium state as a result of prolonged SMF incubation. Lastly, we discuss how such altered spin states may be translated into a cellular signal that leads to altered sub-cellular streaming, along with the probable intracellular machineries for this translation. |
2211.05469 | Abhishek Senapati | Abhishek Senapati, Adam Mertel, Weronika Schlechte-Welnicz, Justin M.
Calabrese | Estimating cross-border mobility from the difference in peak-timing: A
case study in Poland-Germany border regions | Added Reference (Abdussalam et al., 2022) in Section 3 | null | null | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | Human mobility contributes to the fast spatio-temporal propagation of
infectious diseases. During an outbreak, monitoring the infection situation on
either side of an international border is very crucial as there is always a
higher risk of disease importation associated with cross-border migration.
Mechanistic models are effective tools to investigate the consequences of
cross-border mobility on disease dynamics and help in designing effective
control strategies. However, in practice, due to the unavailability of
cross-border mobility data, it becomes difficult to propose reliable,
model-based strategies. In this study, we propose a method for estimating
cross-border mobility flux between any pair of regions that share an
international border from the observed difference in the timing of the
infection peak in each region. Assuming the underlying disease dynamics is
governed by a Susceptible-Infected-Recovered (SIR) model, we employ stochastic
simulations to obtain the maximum likelihood cross-border mobility estimate for
any pair of regions where the difference in peak time can be measured. We then
investigate how the estimate of cross-border mobility flux varies depending on
the disease transmission rate, which is a key epidemiological parameter. We
further show that the uncertainty in mobility flux estimates decreases for
higher disease transmission rates and larger observed differences in peak
timing. Finally, as a case study, we apply the method to some selected regions
along the Poland-Germany border which are directly connected through multiple
modes of transportation and quantify the cross-border fluxes from the COVID-19
cases data during the period $20^{\rm th}$ February $2021$ to $20^{\rm th}$
June $2021$.
| [
{
"created": "Thu, 10 Nov 2022 10:29:20 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Nov 2022 11:06:32 GMT",
"version": "v2"
}
] | 2022-11-14 | [
[
"Senapati",
"Abhishek",
""
],
[
"Mertel",
"Adam",
""
],
[
"Schlechte-Welnicz",
"Weronika",
""
],
[
"Calabrese",
"Justin M.",
""
]
] | Human mobility contributes to the fast spatio-temporal propagation of infectious diseases. During an outbreak, monitoring the infection situation on either side of an international border is very crucial as there is always a higher risk of disease importation associated with cross-border migration. Mechanistic models are effective tools to investigate the consequences of cross-border mobility on disease dynamics and help in designing effective control strategies. However, in practice, due to the unavailability of cross-border mobility data, it becomes difficult to propose reliable, model-based strategies. In this study, we propose a method for estimating cross-border mobility flux between any pair of regions that share an international border from the observed difference in the timing of the infection peak in each region. Assuming the underlying disease dynamics is governed by a Susceptible-Infected-Recovered (SIR) model, we employ stochastic simulations to obtain the maximum likelihood cross-border mobility estimate for any pair of regions where the difference in peak time can be measured. We then investigate how the estimate of cross-border mobility flux varies depending on the disease transmission rate, which is a key epidemiological parameter. We further show that the uncertainty in mobility flux estimates decreases for higher disease transmission rates and larger observed differences in peak timing. Finally, as a case study, we apply the method to some selected regions along the Poland-Germany border which are directly connected through multiple modes of transportation and quantify the cross-border fluxes from the COVID-19 cases data during the period $20^{\rm th}$ February $2021$ to $20^{\rm th}$ June $2021$. |
q-bio/0701039 | Ryan Gutenkunst | Ryan N. Gutenkunst, Joshua J. Waterfall, Fergal P. Casey, Kevin S.
Brown, Christopher R. Myers and James P. Sethna | Universally Sloppy Parameter Sensitivities in Systems Biology | Submitted to PLoS Computational Biology. Supplementary Information
available in "Other Formats" bundle. Discussion slightly revised to add
historical context | PLoS Comput Biol 3(10):e189 (2007) | 10.1371/journal.pcbi.0030189 | null | q-bio.QM q-bio.MN | null | Quantitative computational models play an increasingly important role in
modern biology. Such models typically involve many free parameters, and
assigning their values is often a substantial obstacle to model development.
Directly measuring \emph{in vivo} biochemical parameters is difficult, and
collectively fitting them to other data often yields large parameter
uncertainties. Nevertheless, in earlier work we showed in a
growth-factor-signaling model that collective fitting could yield
well-constrained predictions, even when it left individual parameters very
poorly constrained. We also showed that the model had a `sloppy' spectrum of
parameter sensitivities, with eigenvalues roughly evenly distributed over many
decades. Here we use a collection of models from the literature to test whether
such sloppy spectra are common in systems biology. Strikingly, we find that
every model we examine has a sloppy spectrum of sensitivities. We also test
several consequences of this sloppiness for building predictive models. In
particular, sloppiness suggests that collective fits to even large amounts of
ideal time-series data will often leave many parameters poorly constrained.
Tests over our model collection are consistent with this suggestion. This
difficulty with collective fits may seem to argue for direct parameter
measurements, but sloppiness also implies that such measurements must be
formidably precise and complete to usefully constrain many model predictions.
We confirm this implication in our signaling model. Our results suggest that
sloppy sensitivity spectra are universal in systems biology models. The
prevalence of sloppiness highlights the power of collective fits and suggests
that modelers should focus on predictions rather than on parameters.
| [
{
"created": "Wed, 24 Jan 2007 19:12:58 GMT",
"version": "v1"
},
{
"created": "Tue, 29 May 2007 19:53:33 GMT",
"version": "v2"
},
{
"created": "Fri, 27 Jul 2007 18:43:41 GMT",
"version": "v3"
}
] | 2011-11-09 | [
[
"Gutenkunst",
"Ryan N.",
""
],
[
"Waterfall",
"Joshua J.",
""
],
[
"Casey",
"Fergal P.",
""
],
[
"Brown",
"Kevin S.",
""
],
[
"Myers",
"Christopher R.",
""
],
[
"Sethna",
"James P.",
""
]
] | Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring \emph{in vivo} biochemical parameters is difficult, and collectively fitting them to other data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a `sloppy' spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters. |
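A sloppy sensitivity spectrum of the kind described in the Gutenkunst et al. record can be reproduced on a classic toy problem: fitting a sum of decaying exponentials. The sketch below (the model, rates and time grid are illustrative; this is not one of the systems-biology models surveyed in the paper) builds the J^T J sensitivity matrix with respect to log-parameters and prints its eigenvalues, which spread over many decades even for a handful of parameters.

```python
import numpy as np

def sloppy_spectrum(rates, t_grid):
    """Eigenvalue spectrum of the J^T J sensitivity matrix for y(t) = sum_k exp(-r_k t).

    Derivatives are taken with respect to log-rates, the usual convention when
    comparing sensitivities of parameters with different natural scales:
    d y / d log r_k = -r_k * t * exp(-r_k * t).
    """
    J = np.stack([-r * t_grid * np.exp(-r * t_grid) for r in rates], axis=1)
    jtj = J.T @ J
    return np.sort(np.linalg.eigvalsh(jtj))[::-1]

if __name__ == "__main__":
    rates = np.array([1.0, 1.3, 1.7, 2.2, 2.9])   # illustrative, nearly redundant decay rates
    t_grid = np.linspace(0.0, 5.0, 200)
    eigs = sloppy_spectrum(rates, t_grid)
    print("eigenvalues:", eigs)
    print("decades spanned:", np.log10(eigs[0] / eigs[-1]))
```

The roughly geometric spacing of the eigenvalues is the "sloppy" signature: a few stiff directions constrain predictions while most parameter combinations remain soft.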
2309.10823 | Julian G\"oltz | Julian G\"oltz, Sebastian Billaudelle, Laura Kriener, Luca Blessing,
Christian Pehle, Eric M\"uller, Johannes Schemmel, Mihai A. Petrovici | Gradient-based methods for spiking physical systems | 2 page abstract, submitted to and accepted by the NNPC (International
conference on neuromorphic, natural and physical computing) | null | null | null | q-bio.NC cs.NE | http://creativecommons.org/licenses/by/4.0/ | Recent efforts have fostered significant progress towards deep learning in
spiking networks, both theoretical and in silico. Here, we discuss several
different approaches, including a tentative comparison of the results on
BrainScaleS-2, and hint towards future such comparative studies.
| [
{
"created": "Tue, 29 Aug 2023 15:47:19 GMT",
"version": "v1"
}
] | 2023-09-21 | [
[
"Göltz",
"Julian",
""
],
[
"Billaudelle",
"Sebastian",
""
],
[
"Kriener",
"Laura",
""
],
[
"Blessing",
"Luca",
""
],
[
"Pehle",
"Christian",
""
],
[
"Müller",
"Eric",
""
],
[
"Schemmel",
"Johannes",
""
],
[
"Petrovici",
"Mihai A.",
""
]
] | Recent efforts have fostered significant progress towards deep learning in spiking networks, both theoretical and in silico. Here, we discuss several different approaches, including a tentative comparison of the results on BrainScaleS-2, and hint towards future such comparative studies. |
1809.02956 | Amit Chattopadhyay | Ewa Grela, Michael Stich, Amit K Chattopadhyay | Epidemiological impact of waning immunization on a vaccinated population | Published version in EPJB has 11 pages (2-columned), 1 Table, 10
figures | null | 10.1140/epjb/e2018-90136-3 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is an epidemiological SIRV model based study that is designed to analyze
the impact of vaccination in containing infection spread, in a 4-tiered
population compartment comprised of susceptible, infected, recovered and
vaccinated agents. While many models assume a lifelong protection through
vaccination, we focus on the impact of waning immunization due to conversion of
vaccinated and recovered agents back to susceptible ones. Two asymptotic states
exist, the "disease-free equilibrium" and the "endemic equilibrium"; we express
the transitions between these states as function of the vaccination and
conversion rates using the basic reproduction number as a descriptor. We find
that the vaccination of newborns and adults have different consequences in
controlling epidemics. We also find that a decaying disease protection within
the recovered sub-population is not sufficient to trigger an epidemic at the
linear level. Our simulations focus on parameter sets that could model a
disease with waning immunization like pertussis. For a diffusively coupled
population, a transition to the endemic state can be initiated via the
propagation of a traveling infection wave, described successfully within a
Fisher-Kolmogorov framework.
| [
{
"created": "Sun, 9 Sep 2018 11:22:51 GMT",
"version": "v1"
}
] | 2018-11-14 | [
[
"Grela",
"Ewa",
""
],
[
"Stich",
"Michael",
""
],
[
"Chattopadhyay",
"Amit K",
""
]
] | This is an epidemiological SIRV model based study that is designed to analyze the impact of vaccination in containing infection spread, in a 4-tiered population compartment comprised of susceptible, infected, recovered and vaccinated agents. While many models assume a lifelong protection through vaccination, we focus on the impact of waning immunization due to conversion of vaccinated and recovered agents back to susceptible ones. Two asymptotic states exist, the "disease-free equilibrium" and the "endemic equilibrium"; we express the transitions between these states as function of the vaccination and conversion rates using the basic reproduction number as a descriptor. We find that the vaccination of newborns and adults have different consequences in controlling epidemics. We also find that a decaying disease protection within the recovered sub-population is not sufficient to trigger an epidemic at the linear level. Our simulations focus on parameter sets that could model a disease with waning immunization like pertussis. For a diffusively coupled population, a transition to the endemic state can be initiated via the propagation of a traveling infection wave, described successfully within a Fisher-Kolmogorov framework. |
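The SIRV dynamics described in the Grela, Stich and Chattopadhyay record can be illustrated with a minimal waning-immunity model. The sketch below is not the paper's model (it vaccinates susceptibles continuously rather than distinguishing newborn and adult vaccination, and omits demography); all rates are illustrative. For this simplified system the disease-free equilibrium has S* = w/(v+w), so beta*S*/gamma plays the role of the reproduction-number threshold separating the disease-free and endemic outcomes.

```python
import numpy as np

def simulate_sirv(beta=0.4, gamma=0.2, v=0.05, w=0.01, i0=1e-3, days=2000, dt=0.05):
    """Integrate a minimal SIRV model in which both V and R wane back to S.

    Returns the final infected fraction and the threshold beta*S*/gamma evaluated
    at the disease-free equilibrium S* = w/(v+w); values above 1 indicate the
    endemic state in this simplified model.
    """
    s, i, r, vac = 1.0 - i0, i0, 0.0, 0.0
    for _ in range(int(days / dt)):
        inf = beta * s * i                    # new infections
        rec = gamma * i                       # recoveries
        ds = -inf - v * s + w * (r + vac)     # vaccination of S, waning of R and V
        di = inf - rec
        dr = rec - w * r
        dv = v * s - w * vac
        s, i, r, vac = s + dt * ds, i + dt * di, r + dt * dr, vac + dt * dv
    threshold = beta * (w / (v + w)) / gamma
    return i, threshold

if __name__ == "__main__":
    for v in (0.005, 0.05):
        i_end, thr = simulate_sirv(v=v)
        state = "endemic" if i_end > 1e-6 else "disease-free"
        print(f"vaccination rate {v}: threshold = {thr:.2f}, long-run state ~ {state}")
```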
2107.08530 | Christopher Stock | Christopher H. Stock, Sarah E. Harvey, Samuel A. Ocko, Surya Ganguli | Synaptic balancing: a biologically plausible local learning rule that
provably increases neural network noise robustness without sacrificing task
performance | null | null | 10.1371/journal.pcbi.1010418 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel, biologically plausible local learning rule that
provably increases the robustness of neural dynamics to noise in nonlinear
recurrent neural networks with homogeneous nonlinearities. Our learning rule
achieves higher noise robustness without sacrificing performance on the task
and without requiring any knowledge of the particular task. The plasticity
dynamics -- an integrable dynamical system operating on the weights of the
network -- maintains a multiplicity of conserved quantities, most notably the
network's entire temporal map of input to output trajectories. The outcome of
our learning rule is a synaptic balancing between the incoming and outgoing
synapses of every neuron. This synaptic balancing rule is consistent with many
known aspects of experimentally observed heterosynaptic plasticity, and
moreover makes new experimentally testable predictions relating plasticity at
the incoming and outgoing synapses of individual neurons. Overall, this work
provides a novel, practical local learning rule that exactly preserves overall
network function and, in doing so, provides new conceptual bridges between the
disparate worlds of the neurobiology of heterosynaptic plasticity, the
engineering of regularized noise-robust networks, and the mathematics of
integrable Lax dynamical systems.
| [
{
"created": "Sun, 18 Jul 2021 20:15:43 GMT",
"version": "v1"
}
] | 2022-10-12 | [
[
"Stock",
"Christopher H.",
""
],
[
"Harvey",
"Sarah E.",
""
],
[
"Ocko",
"Samuel A.",
""
],
[
"Ganguli",
"Surya",
""
]
] | We introduce a novel, biologically plausible local learning rule that provably increases the robustness of neural dynamics to noise in nonlinear recurrent neural networks with homogeneous nonlinearities. Our learning rule achieves higher noise robustness without sacrificing performance on the task and without requiring any knowledge of the particular task. The plasticity dynamics -- an integrable dynamical system operating on the weights of the network -- maintains a multiplicity of conserved quantities, most notably the network's entire temporal map of input to output trajectories. The outcome of our learning rule is a synaptic balancing between the incoming and outgoing synapses of every neuron. This synaptic balancing rule is consistent with many known aspects of experimentally observed heterosynaptic plasticity, and moreover makes new experimentally testable predictions relating plasticity at the incoming and outgoing synapses of individual neurons. Overall, this work provides a novel, practical local learning rule that exactly preserves overall network function and, in doing so, provides new conceptual bridges between the disparate worlds of the neurobiology of heterosynaptic plasticity, the engineering of regularized noise-robust networks, and the mathematics of integrable Lax dynamical systems. |
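The Stock et al. record describes a plasticity rule that balances each neuron's incoming and outgoing synapses while conserving the network's input-output map, a property that rests on the homogeneity of the nonlinearity. The sketch below is not the paper's plasticity dynamics and not its recurrent setting; it only illustrates the underlying invariance on a tiny feed-forward ReLU network: rescaling a hidden unit's incoming weights by c and its outgoing weights by 1/c leaves the mapping unchanged, and choosing c to equalize the two norms gives a statically "balanced" network.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def balance_hidden_units(W1, W2):
    """Rescale each hidden unit so its incoming and outgoing weight norms match.

    For homogeneous nonlinearities such as ReLU, scaling unit j's incoming
    weights by c_j and its outgoing weights by 1/c_j preserves the network's
    input-output map; c_j = sqrt(||out_j|| / ||in_j||) equalizes the two norms.
    """
    c = np.sqrt(np.linalg.norm(W2, axis=0) / np.linalg.norm(W1, axis=1))
    return W1 * c[:, None], W2 / c[None, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(6, 4)), rng.normal(size=(3, 6))
    x = rng.normal(size=4)
    W1b, W2b = balance_hidden_units(W1, W2)
    y_before = W2 @ relu(W1 @ x)
    y_after = W2b @ relu(W1b @ x)
    print("max output change :", np.max(np.abs(y_before - y_after)))   # ~1e-16
    print("incoming norms    :", np.linalg.norm(W1b, axis=1))
    print("outgoing norms    :", np.linalg.norm(W2b, axis=0))
```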
1112.5026 | Gasper Tkacik | Ga\v{s}per Tka\v{c}ik, Aleksandra M Walczak, William Bialek | Optimizing information flow in small genetic networks. III. A
self-interacting gene | 18 pages, 9 figures | Phys Rev E 85 (2012): 041903 | 10.1103/PhysRevE.85.041903 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Living cells must control the reading out or "expression" of information
encoded in their genomes, and this regulation often is mediated by
transcription factors--proteins that bind to DNA and either enhance or repress
the expression of nearby genes. But the expression of transcription factor
proteins is itself regulated, and many transcription factors regulate their own
expression in addition to responding to other input signals. Here we analyze
the simplest of such self-regulatory circuits, asking how parameters can be
chosen to optimize information transmission from inputs to outputs in the
steady state. Some nonzero level of self-regulation is almost always optimal,
with self-activation dominant when transcription factor concentrations are low
and self-repression dominant when concentrations are high. In steady state the
optimal self-activation is never strong enough to induce bistability, although
there is a limit in which the optimal parameters are very close to the critical
point.
| [
{
"created": "Wed, 21 Dec 2011 14:14:30 GMT",
"version": "v1"
}
] | 2013-08-01 | [
[
"Tkačik",
"Gašper",
""
],
[
"Walczak",
"Aleksandra M",
""
],
[
"Bialek",
"William",
""
]
] | Living cells must control the reading out or "expression" of information encoded in their genomes, and this regulation often is mediated by transcription factors--proteins that bind to DNA and either enhance or repress the expression of nearby genes. But the expression of transcription factor proteins is itself regulated, and many transcription factors regulate their own expression in addition to responding to other input signals. Here we analyze the simplest of such self-regulatory circuits, asking how parameters can be chosen to optimize information transmission from inputs to outputs in the steady state. Some nonzero level of self-regulation is almost always optimal, with self-activation dominant when transcription factor concentrations are low and self-repression dominant when concentrations are high. In steady state the optimal self-activation is never strong enough to induce bistability, although there is a limit in which the optimal parameters are very close to the critical point. |
2303.16361 | Thomas Parmer | Thomas Parmer, Luis M. Rocha | Dynamical Modularity in Automata Models of Biochemical Networks | 42 pages, 7 figures; updated author information | null | null | null | q-bio.MN cs.CE | http://creativecommons.org/licenses/by/4.0/ | Given the large size and complexity of most biochemical regulation and
signaling networks, there is a non-trivial relationship between the micro-level
logic of component interactions and the observed macro-dynamics. Here we
address this issue by formalizing the existing concept of pathway modules,
which are sequences of state updates that are guaranteed to occur (barring
outside interference) in the dynamics of automata networks after the
perturbation of a subset of driver nodes. We present a novel algorithm to
automatically extract pathway modules from networks and we characterize the
interactions that may take place between modules. This methodology uses only
the causal logic of individual node variables (micro-dynamics) without the need
to compute the dynamical landscape of the networks (macro-dynamics).
Specifically, we identify complex modules, which maximize pathway length and
require synergy between their components. This allows us to propose a new take
on dynamical modularity that partitions complex networks into causal pathways
of variables that are guaranteed to transition to specific states given a
perturbation to a set of driver nodes. Thus, the same node variable can take
part in distinct modules depending on the state it takes. Our measure of
dynamical modularity of a network is then inversely proportional to the overlap
among complex modules and maximal when complex modules are completely
decouplable from one another in the network dynamics. We estimate dynamical
modularity for several genetic regulatory networks, including the Drosophila
melanogaster segment-polarity network. We discuss how identifying complex
modules and the dynamical modularity portrait of networks explains the
macro-dynamics of biological networks, such as uncovering the (more or less)
decouplable building blocks of emergent computation (or collective behavior) in
biochemical regulation and signaling.
| [
{
"created": "Wed, 29 Mar 2023 00:01:30 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Apr 2023 22:41:08 GMT",
"version": "v2"
}
] | 2023-04-19 | [
[
"Parmer",
"Thomas",
""
],
[
"Rocha",
"Luis M.",
""
]
] | Given the large size and complexity of most biochemical regulation and signaling networks, there is a non-trivial relationship between the micro-level logic of component interactions and the observed macro-dynamics. Here we address this issue by formalizing the existing concept of pathway modules, which are sequences of state updates that are guaranteed to occur (barring outside interference) in the dynamics of automata networks after the perturbation of a subset of driver nodes. We present a novel algorithm to automatically extract pathway modules from networks and we characterize the interactions that may take place between modules. This methodology uses only the causal logic of individual node variables (micro-dynamics) without the need to compute the dynamical landscape of the networks (macro-dynamics). Specifically, we identify complex modules, which maximize pathway length and require synergy between their components. This allows us to propose a new take on dynamical modularity that partitions complex networks into causal pathways of variables that are guaranteed to transition to specific states given a perturbation to a set of driver nodes. Thus, the same node variable can take part in distinct modules depending on the state it takes. Our measure of dynamical modularity of a network is then inversely proportional to the overlap among complex modules and maximal when complex modules are completely decouplable from one another in the network dynamics. We estimate dynamical modularity for several genetic regulatory networks, including the Drosophila melanogaster segment-polarity network. We discuss how identifying complex modules and the dynamical modularity portrait of networks explains the macro-dynamics of biological networks, such as uncovering the (more or less) decouplable building blocks of emergent computation (or collective behavior) in biochemical regulation and signaling. |
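Parmer and Rocha's pathway modules are cascades of state updates guaranteed to follow a perturbation of driver nodes, derived from node update logic alone. The toy sketch below is only a loose analogue, not the authors' algorithm: on an invented Boolean network it clamps a driver node and then brute-forces, step by step, which node states are determined regardless of every possible completion of the unknown remainder of the network.

```python
from itertools import product

# A toy Boolean network: each node's update rule reads the states of its parents.
# (Purely illustrative; not one of the biological networks analysed in the paper.)
rules = {
    "a": lambda s: s["a"],              # driver node, holds its value
    "b": lambda s: s["a"],              # b copies a
    "c": lambda s: s["a"] and s["b"],   # c needs both a and b
    "d": lambda s: s["c"] or s["e"],    # d can also be switched on by e
    "e": lambda s: s["e"],              # external input, left unknown
}

def guaranteed_states(clamped, steps=4):
    """Propagate only the node values that are logically guaranteed.

    A node's next value is 'guaranteed' when its update rule yields the same
    result for every completion of the currently unknown node states.
    """
    known = dict(clamped)
    trace = [dict(known)]
    for _ in range(steps):
        unknown = [n for n in rules if n not in known]
        nxt = {}
        for node, rule in rules.items():
            outcomes = set()
            for values in product([False, True], repeat=len(unknown)):
                state = {**{n: bool(v) for n, v in zip(unknown, values)}, **known}
                outcomes.add(bool(rule(state)))
            if len(outcomes) == 1:       # same outcome for every completion
                nxt[node] = outcomes.pop()
        known = nxt
        trace.append(dict(known))
    return trace

if __name__ == "__main__":
    # Perturb the driver node a to ON and list the guaranteed cascade a -> b -> c -> d.
    for t, snapshot in enumerate(guaranteed_states({"a": True})):
        print(f"t={t}: {snapshot}")
```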
2403.14202 | Hong-Li Zeng | Hong-Li Zeng, Cheng-Long Yang, Bo Jing, John Barton, Erik Aurell | Two fitness inference schemes compared using allele frequencies from
1,068,391 sequences sampled in the UK during the COVID-19 pandemic | 10 pages, 6 figures | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Throughout the course of the SARS-CoV-2 pandemic, genetic variation has
contributed to the spread and persistence of the virus. For example, various
mutations have allowed SARS-CoV-2 to escape antibody neutralization or to bind
more strongly to the receptors that it uses to enter human cells. Here, we
compared two methods that estimate the fitness effects of viral mutations using
the abundant sequence data gathered over the course of the pandemic. Both
approaches are grounded in population genetics theory but with different
assumptions. One approach, tQLE, features an epistatic fitness landscape and
assumes that alleles are nearly in linkage equilibrium. Another approach, MPL,
assumes a simple, additive fitness landscape, but allows for any level of
correlation between alleles. We characterized differences in the distributions
of fitness values inferred by each approach and in the ranks of fitness values
that they assign to sequences across time. We find that in a large fraction of
weeks the two methods are in good agreement as to their top-ranked sequences,
i.e., as to which sequences observed that week are most fit. We also find that
agreement between ranking of sequences varies with genetic unimodality in the
population in a given week.
| [
{
"created": "Thu, 21 Mar 2024 07:54:11 GMT",
"version": "v1"
}
] | 2024-03-22 | [
[
"Zeng",
"Hong-Li",
""
],
[
"Yang",
"Cheng-Long",
""
],
[
"Jing",
"Bo",
""
],
[
"Barton",
"John",
""
],
[
"Aurell",
"Erik",
""
]
] | Throughout the course of the SARS-CoV-2 pandemic, genetic variation has contributed to the spread and persistence of the virus. For example, various mutations have allowed SARS-CoV-2 to escape antibody neutralization or to bind more strongly to the receptors that it uses to enter human cells. Here, we compared two methods that estimate the fitness effects of viral mutations using the abundant sequence data gathered over the course of the pandemic. Both approaches are grounded in population genetics theory but with different assumptions. One approach, tQLE, features an epistatic fitness landscape and assumes that alleles are nearly in linkage equilibrium. Another approach, MPL, assumes a simple, additive fitness landscape, but allows for any level of correlation between alleles. We characterized differences in the distributions of fitness values inferred by each approach and in the ranks of fitness values that they assign to sequences across time. We find that in a large fraction of weeks the two methods are in good agreement as to their top-ranked sequences, i.e., as to which sequences observed that week are most fit. We also find that agreement between ranking of sequences varies with genetic unimodality in the population in a given week. |
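Comparing how the two inference schemes in the Zeng et al. record rank sequences by inferred fitness amounts to comparing two rankings of the same items. A minimal sketch of such a comparison, run here on synthetic fitness estimates rather than actual tQLE/MPL outputs, uses Kendall's tau over all sequences and the overlap of the top-ranked sets.

```python
import numpy as np
from scipy.stats import kendalltau

def rank_agreement(fitness_a, fitness_b, top_k=10):
    """Compare two fitness estimates for the same set of sequences.

    Returns Kendall's tau over all sequences and the fraction of overlap
    between the two top-k sets (the 'top-ranked sequences' comparison).
    """
    tau, _ = kendalltau(fitness_a, fitness_b)
    top_a = set(np.argsort(fitness_a)[-top_k:])
    top_b = set(np.argsort(fitness_b)[-top_k:])
    return tau, len(top_a & top_b) / top_k

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_fitness = rng.normal(size=200)
    # Two noisy stand-ins for the tQLE and MPL estimates of the same 200 sequences.
    est_a = true_fitness + 0.3 * rng.normal(size=200)
    est_b = true_fitness + 0.3 * rng.normal(size=200)
    tau, overlap = rank_agreement(est_a, est_b)
    print(f"Kendall tau = {tau:.2f}, top-10 overlap = {overlap:.0%}")
```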
1911.00716 | Martin Vasilev | Martin R. Vasilev, Victoria I. Adedeji, Calvin Laursen, Marcin Budka,
Timothy J. Slattery | Do readers use character information when programming return-sweep
saccades? | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Reading saccades that occur within a single line of text are guided by the
size of letters. However, readers occasionally need to make longer saccades
(known as return-sweeps) that take their eyes from the end of one line of text
to the beginning of the next. In this study, we tested whether return-sweep
saccades are also guided by font size information and whether this guidance
depends on visual acuity of the return-sweep target area. To do this, we
manipulated the font size of letters (0.29 vs 0.39 deg. per character) and the
length of the first line of text (16 vs 26 deg.). The larger font resulted in
return-sweeps that landed further to the right of the line start and in a
reduction of under-sweeps compared to the smaller font. This suggests that font
size information is used when programming return-sweeps. Return-sweeps in the
longer line condition landed further to the right of the line start and the
proportion of under-sweeps increased compared to the short line condition. This
likely reflects an increase in saccadic undershoot error with the increase in
intended saccade size. Critically, there was no interaction between font size
and line length. This suggests that when programming return-sweeps, the use of
font size information does not depend on visual acuity at the saccade target.
Instead, it appears that readers rely on global typographic properties of the
text in order to maintain an optimal number of characters to the left of their
first fixation on a new line.
| [
{
"created": "Sat, 2 Nov 2019 13:48:12 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Jul 2020 10:23:13 GMT",
"version": "v2"
},
{
"created": "Tue, 5 Jan 2021 19:24:20 GMT",
"version": "v3"
}
] | 2021-01-07 | [
[
"Vasilev",
"Martin R.",
""
],
[
"Adedeji",
"Victoria I.",
""
],
[
"Laursen",
"Calvin",
""
],
[
"Budka",
"Marcin",
""
],
[
"Slattery",
"Timothy J.",
""
]
] | Reading saccades that occur within a single line of text are guided by the size of letters. However, readers occasionally need to make longer saccades (known as return-sweeps) that take their eyes from the end of one line of text to the beginning of the next. In this study, we tested whether return-sweep saccades are also guided by font size information and whether this guidance depends on visual acuity of the return-sweep target area. To do this, we manipulated the font size of letters (0.29 vs 0.39 deg. per character) and the length of the first line of text (16 vs 26 deg.). The larger font resulted in return-sweeps that landed further to the right of the line start and in a reduction of under-sweeps compared to the smaller font. This suggests that font size information is used when programming return-sweeps. Return-sweeps in the longer line condition landed further to the right of the line start and the proportion of under-sweeps increased compared to the short line condition. This likely reflects an increase in saccadic undershoot error with the increase in intended saccade size. Critically, there was no interaction between font size and line length. This suggests that when programming return-sweeps, the use of font size information does not depend on visual acuity at the saccade target. Instead, it appears that readers rely on global typographic properties of the text in order to maintain an optimal number of characters to the left of their first fixation on a new line. |
2306.03159 | Julia Berezutskaya | Evan Canny, Mariska J. Vansteensel, Sandra M.A. van der Salm, Gernot
R. M\"uller-Putz, Julia Berezutskaya | The feasibility of combining communication BCIs with FES for individuals
with locked-in syndrome | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Individuals with a locked-in state live with severe whole-body paralysis that
limits their ability to communicate with family and loved ones. Recent advances
in the brain-computer interface (BCI) technology have presented a potential
alternative for these people to communicate by detecting neural activity
associated with attempted hand or speech movements and translating the decoded
intended movements to a control signal for a computer. A technique that could
potentially enrich the communication capacity of BCIs is functional electrical
stimulation (FES) of paralyzed limbs and face to restore body and facial
movements of paralyzed individuals, allowing to add body language and facial
expression to communication BCI utterances. Here, we review the current state
of the art of existing BCI and FES work in people with paralysis of body and
face and propose that a combined BCI-FES approach, which has already proved
successful in several applications in stroke and spinal cord injury, can
provide a novel promising mode of communication for locked-in individuals.
| [
{
"created": "Mon, 5 Jun 2023 18:12:31 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Jul 2023 09:48:16 GMT",
"version": "v2"
}
] | 2023-07-18 | [
[
"Canny",
"Evan",
""
],
[
"Vansteensel",
"Mariska J.",
""
],
[
"van der Salm",
"Sandra M. A.",
""
],
[
"Müller-Putz",
"Gernot R.",
""
],
[
"Berezutskaya",
"Julia",
""
]
] | Individuals with a locked-in state live with severe whole-body paralysis that limits their ability to communicate with family and loved ones. Recent advances in the brain-computer interface (BCI) technology have presented a potential alternative for these people to communicate by detecting neural activity associated with attempted hand or speech movements and translating the decoded intended movements to a control signal for a computer. A technique that could potentially enrich the communication capacity of BCIs is functional electrical stimulation (FES) of paralyzed limbs and face to restore body and facial movements of paralyzed individuals, allowing to add body language and facial expression to communication BCI utterances. Here, we review the current state of the art of existing BCI and FES work in people with paralysis of body and face and propose that a combined BCI-FES approach, which has already proved successful in several applications in stroke and spinal cord injury, can provide a novel promising mode of communication for locked-in individuals. |
1402.1805 | Jonathan Potts | Jonathan R. Potts, Marie Auger-M\'eth\'e, Karl Mokross, Mark A. Lewis | A generalized residual technique for analyzing complex movement models
using earth mover's distance | null | Methods in Ecology and Evolution (2014) 5:1012-1022 | 10.1111/2041-210X.12253 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 1. Complex systems of moving and interacting objects are ubiquitous in the
natural and social sciences. Predicting their behavior often requires models
that mimic these systems with sufficient accuracy, while accounting for their
inherent stochasticity. Though tools exist to determine which of a set of
candidate models is best relative to the others, there is currently no generic
goodness-of-fit framework for testing how close the best model is to the real
complex stochastic system.
2. We propose such a framework, using a novel application of the Earth
mover's distance, also known as the Wasserstein metric. It is applicable to any
stochastic process where the probability of the model's state at time $t$ is a
function of the state at previous times. It generalizes the concept of a
residual, often used to analyze 1D summary statistics, to situations where the
complexity of the underlying model's probability distribution makes standard
residual analysis too imprecise for practical use.
3. We give a scheme for testing the hypothesis that a model is an accurate
description of a data set. We demonstrate the tractability and usefulness of
our approach by application to animal movement models in complex, heterogeneous
environments. We detail methods for visualizing results and extracting a
variety of information on a given model's quality, such as whether there is any
inherent bias in the model, or in which situations it is most accurate. We
demonstrate our techniques by application to data on multi-species flocks of
insectivore birds in the Amazon rainforest.
4. This work provides a usable toolkit to assess the quality of generic
movement models of complex systems, in an absolute rather than a relative
sense.
| [
{
"created": "Sat, 8 Feb 2014 00:55:29 GMT",
"version": "v1"
},
{
"created": "Tue, 13 May 2014 14:29:22 GMT",
"version": "v2"
},
{
"created": "Thu, 28 Aug 2014 21:29:59 GMT",
"version": "v3"
}
] | 2014-12-02 | [
[
"Potts",
"Jonathan R.",
""
],
[
"Auger-Méthé",
"Marie",
""
],
[
"Mokross",
"Karl",
""
],
[
"Lewis",
"Mark A.",
""
]
] | 1. Complex systems of moving and interacting objects are ubiquitous in the natural and social sciences. Predicting their behavior often requires models that mimic these systems with sufficient accuracy, while accounting for their inherent stochasticity. Though tools exist to determine which of a set of candidate models is best relative to the others, there is currently no generic goodness-of-fit framework for testing how close the best model is to the real complex stochastic system. 2. We propose such a framework, using a novel application of the Earth mover's distance, also known as the Wasserstein metric. It is applicable to any stochastic process where the probability of the model's state at time $t$ is a function of the state at previous times. It generalizes the concept of a residual, often used to analyze 1D summary statistics, to situations where the complexity of the underlying model's probability distribution makes standard residual analysis too imprecise for practical use. 3. We give a scheme for testing the hypothesis that a model is an accurate description of a data set. We demonstrate the tractability and usefulness of our approach by application to animal movement models in complex, heterogeneous environments. We detail methods for visualizing results and extracting a variety of information on a given model's quality, such as whether there is any inherent bias in the model, or in which situations it is most accurate. We demonstrate our techniques by application to data on multi-species flocks of insectivore birds in the Amazon rainforest. 4. This work provides a usable toolkit to assess the quality of generic movement models of complex systems, in an absolute rather than a relative sense. |
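The generalized residual in the Potts et al. record compares an observation against the model's predicted distribution of the state using the Earth mover's (Wasserstein) distance. The sketch below is a much-simplified one-dimensional illustration, not the paper's full construction: it simulates an ensemble of next positions under an assumed Gaussian random-walk movement model and scores an observed step with scipy's Wasserstein distance.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def emd_residual(current_pos, observed_next, n_sims=5000, step_sd=1.0, rng=None):
    """Score an observed movement step against a model's predictive distribution.

    The candidate model here is a plain Gaussian random walk (illustrative only).
    An ensemble of simulated next positions forms the predictive cloud, and the
    Earth mover's distance between that cloud and the single observed position
    serves as the generalized residual.
    """
    rng = rng or np.random.default_rng()
    predicted_next = current_pos + step_sd * rng.normal(size=n_sims)
    return wasserstein_distance(predicted_next, [observed_next])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # An observation consistent with the model vs. an outlying observation.
    print("typical step :", round(emd_residual(0.0, 0.8, rng=rng), 2))
    print("outlying step:", round(emd_residual(0.0, 5.0, rng=rng), 2))
```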
2307.02171 | K. Anton Feenstra | Jose Gavald\'a-Garci\'a, Bas Stringer, Olga Ivanova, Sanne Abeln, K.
Anton Feenstra, Halima Mouhib | Data Resources for Structural Bioinformatics | editorial responsibility: Sanne Abeln, K. Anton Feenstra, Halima
Mouhib. This chapter is part of the book "Introduction to Protein Structural
Bioinformatics". The Preface arXiv:1801.09442 contains links to all the
(published) chapters. The update adds available arxiv hyperlinks for the
chapters | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | While many good textbooks are available on Protein Structure, Molecular
Simulations, Thermodynamics and Bioinformatics methods in general, there is no
good introductory level book for the field of Structural Bioinformatics. This
book aims to give an introduction into Structural Bioinformatics, which is
where the previous topics meet to explore three dimensional protein structures
through computational analysis. We provide an overview of existing
computational techniques, to validate, simulate, predict and analyse protein
structures. More importantly, it will aim to provide practical knowledge about
how and when to use such techniques. We will consider proteins from three major
vantage points: Protein structure quantification, Protein structure prediction,
and Protein simulation & dynamics.
Structural bioinformatics involves a variety of computational methods, all of
which require input data. Typical inputs include protein structures and
sequences, which are usually retrieved from a public or private database. This
chapter introduces several key resources that make such data available, as well
as a handful of tools that derive additional information from experimentally
determined or computationally predicted protein structures and sequences.
| [
{
"created": "Wed, 5 Jul 2023 10:12:59 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Jul 2023 18:07:07 GMT",
"version": "v2"
}
] | 2023-07-10 | [
[
"Gavaldá-Garciá",
"Jose",
""
],
[
"Stringer",
"Bas",
""
],
[
"Ivanova",
"Olga",
""
],
[
"Abeln",
"Sanne",
""
],
[
"Feenstra",
"K. Anton",
""
],
[
"Mouhib",
"Halima",
""
]
] | While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory level book for the field of Structural Bioinformatics. This book aims to give an introduction into Structural Bioinformatics, which is where the previous topics meet to explore three dimensional protein structures through computational analysis. We provide an overview of existing computational techniques, to validate, simulate, predict and analyse protein structures. More importantly, it will aim to provide practical knowledge about how and when to use such techniques. We will consider proteins from three major vantage points: Protein structure quantification, Protein structure prediction, and Protein simulation & dynamics. Structural bioinformatics involves a variety of computational methods, all of which require input data. Typical inputs include protein structures and sequences, which are usually retrieved from a public or private database. This chapter introduces several key resources that make such data available, as well as a handful of tools that derive additional information from experimentally determined or computationally predicted protein structures and sequences. |
1805.02809 | Daqing Guo | Yangsong Zhang, Erwei Yin, Fali Li, Yu Zhang, Toshihisa Tanaka, Qibin
Zhao, Yan Cui, Peng Xu, Dezhong Yao, Daqing Guo | Two-stage frequency recognition method based on correlated component
analysis for SSVEP-based BCI | 10 pages, 10 figures, submitted to IEEE TNSRE | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Canonical correlation analysis (CCA) is a state-of-the-art method for
frequency recognition in steady-state visual evoked potential (SSVEP)-based
brain-computer interface (BCI) systems. Various extended methods have been
developed, and among such methods, a combination method of CCA and
individual-template-based CCA (IT-CCA) has achieved excellent performance.
However, CCA requires the canonical vectors to be orthogonal, which may not be
a reasonable assumption for EEG analysis. In the current study, we propose
using the correlated component analysis (CORRCA) rather than CCA to implement
frequency recognition. CORRCA can relax the constraint of canonical vectors in
CCA, and generate the same projection vector for two multichannel EEG signals.
Furthermore, we propose a two-stage method based on the basic CORRCA method
(termed TSCORRCA). Evaluated on a benchmark dataset of thirty-five subjects,
the experimental results demonstrate that CORRCA significantly outperformed
CCA, and TSCORRCA obtained the best performance among the compared methods.
This study demonstrates that CORRCA-based methods have great potential for
implementing high-performance SSVEP-based BCI systems.
| [
{
"created": "Tue, 8 May 2018 02:50:17 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Jun 2018 09:56:15 GMT",
"version": "v2"
},
{
"created": "Sun, 1 Jul 2018 11:53:06 GMT",
"version": "v3"
}
] | 2018-07-03 | [
[
"Zhang",
"Yangsong",
""
],
[
"Yin",
"Erwei",
""
],
[
"Li",
"Fali",
""
],
[
"Zhang",
"Yu",
""
],
[
"Tanaka",
"Toshihisa",
""
],
[
"Zhao",
"Qibin",
""
],
[
"Cui",
"Yan",
""
],
[
"Xu",
"Peng",
""
],
[
"Yao",
"Dezhong",
""
],
[
"Guo",
"Daqing",
""
]
] | Canonical correlation analysis (CCA) is a state-of-the-art method for frequency recognition in steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) systems. Various extended methods have been developed, and among such methods, a combination method of CCA and individual-template-based CCA (IT-CCA) has achieved excellent performance. However, CCA requires the canonical vectors to be orthogonal, which may not be a reasonable assumption for EEG analysis. In the current study, we propose using the correlated component analysis (CORRCA) rather than CCA to implement frequency recognition. CORRCA can relax the constraint of canonical vectors in CCA, and generate the same projection vector for two multichannel EEG signals. Furthermore, we propose a two-stage method based on the basic CORRCA method (termed TSCORRCA). Evaluated on a benchmark dataset of thirty-five subjects, the experimental results demonstrate that CORRCA significantly outperformed CCA, and TSCORRCA obtained the best performance among the compared methods. This study demonstrates that CORRCA-based methods have great potential for implementing high-performance SSVEP-based BCI systems. |
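Standard CCA-based SSVEP frequency recognition, the baseline that the CORRCA work above builds on, correlates the multichannel EEG segment with sine/cosine reference templates at each candidate stimulation frequency and selects the frequency with the largest canonical correlation. The sketch below implements that baseline on synthetic data (channel weights, noise level and candidate frequencies are illustrative); CORRCA itself, which drops the orthogonality constraint and shares one projection across the two signals, is not implemented here.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def reference_signals(freq, n_samples, fs, n_harmonics=2):
    """Sine/cosine reference templates for one candidate stimulation frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(refs)

def cca_frequency_recognition(eeg, candidate_freqs, fs):
    """Pick the stimulation frequency whose references correlate best with the EEG.

    eeg: array of shape (n_samples, n_channels).
    """
    scores = []
    for f in candidate_freqs:
        refs = reference_signals(f, eeg.shape[0], fs)
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg, refs)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return candidate_freqs[int(np.argmax(scores))], scores

if __name__ == "__main__":
    fs, n_samples, rng = 250, 1000, np.random.default_rng(3)
    t = np.arange(n_samples) / fs
    # Synthetic 8-channel EEG containing a weak 10 Hz SSVEP plus noise.
    ssvep = np.sin(2 * np.pi * 10 * t)
    eeg = 0.5 * np.outer(ssvep, rng.uniform(0.5, 1.0, 8)) + rng.normal(size=(n_samples, 8))
    best, scores = cca_frequency_recognition(eeg, [8.0, 10.0, 12.0, 15.0], fs)
    print("detected frequency:", best, "Hz; scores:", np.round(scores, 2))
```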
1501.01603 | Trilochan Bagarti | Trilochan Bagarti | Population extinction in an inhomogeneous host-pathogen model | Errors in the text are fixed, Fig.5 and 6 are corrected. 13 pages, 6
figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study inhomogeneous host-pathogen dynamics to model the global amphibian
population extinction in a lake basin system. The lake basin system is modeled
as quenched disorder. In this model we show that once the pathogen arrives at
the lake basin it spreads from one lake to another, eventually spreading to the
entire lake basin system in a wave like pattern. The extinction time has been
found to depend on the steady state host population and pathogen growth rate.
Linear estimate of the extinction time is computed. The steady state host
population shows a threshold behavior in the interaction strength for a given
growth rate.
| [
{
"created": "Thu, 18 Dec 2014 19:24:27 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Feb 2015 23:21:50 GMT",
"version": "v2"
}
] | 2015-02-03 | [
[
"Bagarti",
"Trilochan",
""
]
] | We study inhomogeneous host-pathogen dynamics to model the global amphibian population extinction in a lake basin system. The lake basin system is modeled as quenched disorder. In this model we show that once the pathogen arrives at the lake basin it spreads from one lake to another, eventually spreading to the entire lake basin system in a wave like pattern. The extinction time has been found to depend on the steady state host population and pathogen growth rate. Linear estimate of the extinction time is computed. The steady state host population shows a threshold behavior in the interaction strength for a given growth rate. |
1104.5583 | Sarada Seetharaman | Kavita Jain and Sarada Seetharaman | Multiple adaptive substitutions during evolution in novel environments | null | Genetics 189, no. 3 1029-1043 (2011) | null | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider an asexual population under strong selection-weak mutation
conditions evolving on rugged fitness landscapes with many local fitness peaks.
Unlike the previous studies in which the initial fitness of the population is
assumed to be high, here we start the adaptation process with a low fitness
corresponding to a population in a stressful novel environment. For generic
fitness distributions, using an analytic argument we find that the average
number of steps to a local optimum varies logarithmically with the genotype
sequence length and increases as the correlations amongst genotypic fitnesses
increase. When the fitnesses are exponentially or uniformly distributed, using
an evolution equation for the distribution of population fitness, we
analytically calculate the fitness distribution of fixed beneficial mutations
and the walk length distribution.
| [
{
"created": "Fri, 29 Apr 2011 09:45:18 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Sep 2011 11:50:52 GMT",
"version": "v2"
}
] | 2011-11-18 | [
[
"Jain",
"Kavita",
""
],
[
"Seetharaman",
"Sarada",
""
]
] | We consider an asexual population under strong selection-weak mutation conditions evolving on rugged fitness landscapes with many local fitness peaks. Unlike the previous studies in which the initial fitness of the population is assumed to be high, here we start the adaptation process with a low fitness corresponding to a population in a stressful novel environment. For generic fitness distributions, using an analytic argument we find that the average number of steps to a local optimum varies logarithmically with the genotype sequence length and increases as the correlations amongst genotypic fitnesses increase. When the fitnesses are exponentially or uniformly distributed, using an evolution equation for the distribution of population fitness, we analytically calculate the fitness distribution of fixed beneficial mutations and the walk length distribution. |
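A minimal simulation of the setting in the Jain and Seetharaman record: a strong-selection weak-mutation adaptive walk on an uncorrelated (House-of-Cards) landscape, started from the bottom of the fitness distribution. The uniform fitness distribution, binary alleles and the "random adaptive walk" move rule (a uniformly chosen fitter neighbour fixes at each step) are illustrative choices rather than the paper's exact model, but the mean walk length shows the roughly logarithmic growth with sequence length discussed above.

```python
import numpy as np

def adaptive_walk_length(L, rng):
    """Number of substitutions before reaching a local fitness optimum.

    Binary genotypes of length L carry i.i.d. uniform fitnesses (House-of-Cards
    landscape), drawn lazily. The walk starts from a genotype pinned to the
    bottom of the fitness distribution (a population in a stressful novel
    environment) and, at each step, fixes a uniformly chosen fitter neighbour.
    """
    fitness = {}
    def f(g):
        if g not in fitness:
            fitness[g] = rng.random()
        return fitness[g]

    genotype = tuple(rng.integers(0, 2, L))
    fitness[genotype] = 0.0            # low initial fitness
    steps = 0
    while True:
        neighbours = [genotype[:i] + (1 - genotype[i],) + genotype[i + 1:] for i in range(L)]
        better = [n for n in neighbours if f(n) > f(genotype)]
        if not better:
            return steps               # local optimum reached
        genotype = better[rng.integers(len(better))]
        steps += 1

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    for L in (8, 16, 32, 64):
        walks = [adaptive_walk_length(L, rng) for _ in range(200)]
        print(f"L={L:3d}: mean walk length = {np.mean(walks):.2f}  (ln L = {np.log(L):.2f})")
```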
2101.02865 | Julio Augusto Freyre-Gonz\'alez | Juan M. Escorcia-Rodr\'iguez, Andreas Tauch, and Julio A.
Freyre-Gonz\'alez | Corynebacterium glutamicum regulation beyond transcription: Organizing
principles and reconstruction of an extended regulatory network incorporating
regulations mediated by small RNA and protein-protein interactions | 32 pages, 4 figures, 1 supplementary material | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | Corynebacterium glutamicum is a Gram-positive bacterium found in soil where
the condition changes demand plasticity of the regulatory machinery. The study
of such machinery at the global scale has been challenged by the lack of data
integration. Here, we report three regulatory network models for C. glutamicum:
strong (3040 interactions) constructed solely with regulations previously
supported by directed experiments; all evidence (4665 interactions) containing
the strong network, regulations previously supported by non-directed
experiments, and protein-protein interactions with a direct effect on gene
transcription; and sRNA (5222 interactions) containing the all evidence network
and sRNA-mediated regulations. Compared to the previous version (2018), the
strong and all evidence networks increased by 75 and 1225 interactions,
respectively. We analyzed the system-level components of the three networks to
identify how they differ and compared their structures against those for the
networks of more than 40 species. The inclusion of the sRNAs regulations
changed the proportions of the system-level components and increased the number
of modules but decreased their size. The C. glutamicum regulatory structure
contrasted with other bacterial regulatory networks. Finally, we used the
strong networks of three model organisms to provide insights and future
directions of the C. glutamicum regulatory network characterization.
| [
{
"created": "Fri, 8 Jan 2021 06:03:26 GMT",
"version": "v1"
}
] | 2021-01-11 | [
[
"Escorcia-Rodríguez",
"Juan M.",
""
],
[
"Tauch",
"Andreas",
""
],
[
"Freyre-González",
"Julio A.",
""
]
] | Corynebacterium glutamicum is a Gram-positive bacterium found in soil where the condition changes demand plasticity of the regulatory machinery. The study of such machinery at the global scale has been challenged by the lack of data integration. Here, we report three regulatory network models for C. glutamicum: strong (3040 interactions) constructed solely with regulations previously supported by directed experiments; all evidence (4665 interactions) containing the strong network, regulations previously supported by non-directed experiments, and protein-protein interactions with a direct effect on gene transcription; and sRNA (5222 interactions) containing the all evidence network and sRNA-mediated regulations. Compared to the previous version (2018), the strong and all evidence networks increased by 75 and 1225 interactions, respectively. We analyzed the system-level components of the three networks to identify how they differ and compared their structures against those for the networks of more than 40 species. The inclusion of the sRNAs regulations changed the proportions of the system-level components and increased the number of modules but decreased their size. The C. glutamicum regulatory structure contrasted with other bacterial regulatory networks. Finally, we used the strong networks of three model organisms to provide insights and future directions of the C. glutamicum regulatory network characterization. |
1511.02079 | Salva Duran-Nebreda | Salva Duran-Nebreda, Adriano Bonforti, Raul Monta\~nez, Sergi Valverde
and Ricard Sol\'e | Emergence of proto-organisms from bistable stochastic differentiation
and adhesion | 9 pages, 4 figures | null | null | null | q-bio.PE q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rise of multicellularity in the early evolution of life represents a
major challenge for evolutionary biology. Guidance for finding answers has
emerged from disparate fields, from phylogenetics to modelling and synthetic
biology, but little is known about the potential origins of multicellular
aggregates before genetic programs took full control of developmental
processes. Such aggregates should involve spatial organisation of
differentiated cells and the modification of flows and concentrations of
metabolites within well defined boundaries. Here we show that, in an
environment where limited nutrients and toxic metabolites are introduced, a
population of cells capable of stochastic differentiation and differential
adhesion can develop into multicellular aggregates with a complex internal
structure. The morphospace of possible patterns is shown to be very rich,
including proto-organisms that display a high degree of organisational
complexity, far beyond simple heterogeneous populations of cells. Our findings
reveal that there is a potentially enormous richness of organismal complexity
between simple mixed cooperators and embodied living organisms.
| [
{
"created": "Fri, 6 Nov 2015 13:59:03 GMT",
"version": "v1"
}
] | 2015-11-09 | [
[
"Duran-Nebreda",
"Salva",
""
],
[
"Bonforti",
"Adriano",
""
],
[
"Montañez",
"Raul",
""
],
[
"Valverde",
"Sergi",
""
],
[
"Solé",
"Ricard",
""
]
] | The rise of multicellularity in the early evolution of life represents a major challenge for evolutionary biology. Guidance for finding answers has emerged from disparate fields, from phylogenetics to modelling and synthetic biology, but little is known about the potential origins of multicellular aggregates before genetic programs took full control of developmental processes. Such aggregates should involve spatial organisation of differentiated cells and the modification of flows and concentrations of metabolites within well defined boundaries. Here we show that, in an environment where limited nutrients and toxic metabolites are introduced, a population of cells capable of stochastic differentiation and differential adhesion can develop into multicellular aggregates with a complex internal structure. The morphospace of possible patterns is shown to be very rich, including proto-organisms that display a high degree of organisational complexity, far beyond simple heterogeneous populations of cells. Our findings reveal that there is a potentially enormous richness of organismal complexity between simple mixed cooperators and embodied living organisms. |
2207.07912 | Eric Wong | Eric C. Wong | A Reservoir Model of Explicit Human Intelligence | 8 pages | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | A fundamental feature of human intelligence is that we accumulate and
transfer knowledge as a society and across generations. We describe here a
network architecture for the human brain that may support this feature and
suggest that two key innovations were the ability to consider an offline model
of the world, and the use of language to record and communicate knowledge
within this model. We propose that these two innovations, together with
pre-existing mechanisms for associative learning, allowed us to develop a
conceptually simple associative network that operates like a reservoir of
attractors and can learn in a rapid, flexible, and robust manner. We
hypothesize that explicit human intelligence is based primarily on this type of
network, which works in conjunction with older and likely more complex deep
networks that perform sensory, motor, and other implicit forms of processing.
| [
{
"created": "Wed, 6 Jul 2022 04:08:58 GMT",
"version": "v1"
}
] | 2022-07-19 | [
[
"Wong",
"Eric C.",
""
]
] | A fundamental feature of human intelligence is that we accumulate and transfer knowledge as a society and across generations. We describe here a network architecture for the human brain that may support this feature and suggest that two key innovations were the ability to consider an offline model of the world, and the use of language to record and communicate knowledge within this model. We propose that these two innovations, together with pre-existing mechanisms for associative learning, allowed us to develop a conceptually simple associative network that operates like a reservoir of attractors and can learn in a rapid, flexible, and robust manner. We hypothesize that explicit human intelligence is based primarily on this type of network, which works in conjunction with older and likely more complex deep networks that perform sensory, motor, and other implicit forms of processing. |
1805.05322 | Konstantin Blyuss | F. Fatehi Chenar, Y.N. Kyrychko, K.B. Blyuss | Mathematical model of immune response to hepatitis B | 24 pages, 10 figures | J. Theor. Biol. 447, 98-110 (2018) | 10.1016/j.jtbi.2018.03.025 | null | q-bio.PE nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new detailed mathematical model for dynamics of immune response to
hepatitis B is proposed, which takes into account contributions from innate and
adaptive immune responses, as well as cytokines. Stability analysis of
different steady states is performed to identify parameter regions where the
model exhibits clearance of infection, maintenance of a chronic infection, or
periodic oscillations. Effects of nucleoside analogues and interferon
treatments are analysed, and the critical drug efficiency is determined.
| [
{
"created": "Fri, 11 May 2018 20:08:19 GMT",
"version": "v1"
}
] | 2018-05-16 | [
[
"Chenar",
"F. Fatehi",
""
],
[
"Kyrychko",
"Y. N.",
""
],
[
"Blyuss",
"K. B.",
""
]
] | A new detailed mathematical model for dynamics of immune response to hepatitis B is proposed, which takes into account contributions from innate and adaptive immune responses, as well as cytokines. Stability analysis of different steady states is performed to identify parameter regions where the model exhibits clearance of infection, maintenance of a chronic infection, or periodic oscillations. Effects of nucleoside analogues and interferon treatments are analysed, and the critical drug efficiency is determined. |
2112.10575 | David Graff | David E. Graff and Connor W. Coley | pyscreener: A Python Wrapper for Computational Docking Software | null | null | 10.21105/joss.03950 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | pyscreener is a Python library that seeks to alleviate the challenges of
large-scale structure-based design using computational docking. It provides a
simple and uniform interface that is agnostic to the backend docking engine
with which to calculate the docking score of a given molecule in a specified
active site. Additionally, pyscreener features first-class support for task
distribution, allowing users to seamlessly scale their code from a local,
multi-core setup to a large, heterogeneous resource allocation.
| [
{
"created": "Fri, 17 Dec 2021 17:40:47 GMT",
"version": "v1"
}
] | 2022-05-05 | [
[
"Graff",
"David E.",
""
],
[
"Coley",
"Connor W.",
""
]
] | pyscreener is a Python library that seeks to alleviate the challenges of large-scale structure-based design using computational docking. It provides a simple and uniform interface that is agnostic to the backend docking engine with which to calculate the docking score of a given molecule in a specified active site. Additionally, pyscreener features first-class support for task distribution, allowing users to seamlessly scale their code from a local, multi-core setup to a large, heterogeneous resource allocation. |
q-bio/0609002 | Stuart Borrett | Stuart R. Borrett, Brian D. Fath, Bernard C. Patten | Functional Integration of Ecological Networks through Pathway
Proliferation | 29 pages, 2 figures, 3 tables, Submitted to Journal of Theoretical
Biology | Journal of Theoretical Biology 245: 98-111 | 10.1016/j.jtbi.2006.09.024 | null | q-bio.PE q-bio.QM | null | Large-scale structural patterns commonly occur in network models of complex
systems including a skewed node degree distribution and small-world topology.
These patterns suggest common organizational constraints and similar functional
consequences. Here, we investigate a structural pattern termed pathway
proliferation. Previous research enumerating pathways that link species
determined that as pathway length increases, the number of pathways tends to
increase without bound. We hypothesize that this pathway proliferation
influences the flow of energy, matter, and information in ecosystems. In this
paper, we clarify the pathway proliferation concept, introduce a measure of the
node--node proliferation rate, describe factors influencing the rate, and
characterize it in 17 large empirical food-webs. During this investigation, we
uncovered a modular organization within these systems. Over half of the
food-webs were composed of one or more subgroups that were strongly connected
internally, but weakly connected to the rest of the system. Further, these
modules had distinct proliferation rates. We conclude that pathway
proliferation in ecological networks reveals subgroups of species that will be
functionally integrated through cyclic indirect effects.
| [
{
"created": "Sat, 2 Sep 2006 20:10:32 GMT",
"version": "v1"
}
] | 2011-04-04 | [
[
"Borrett",
"Stuart R.",
""
],
[
"Fath",
"Brian D.",
""
],
[
"Patten",
"Bernard C.",
""
]
] | Large-scale structural patterns commonly occur in network models of complex systems including a skewed node degree distribution and small-world topology. These patterns suggest common organizational constraints and similar functional consequences. Here, we investigate a structural pattern termed pathway proliferation. Previous research enumerating pathways that link species determined that as pathway length increases, the number of pathways tends to increase without bound. We hypothesize that this pathway proliferation influences the flow of energy, matter, and information in ecosystems. In this paper, we clarify the pathway proliferation concept, introduce a measure of the node--node proliferation rate, describe factors influencing the rate, and characterize it in 17 large empirical food-webs. During this investigation, we uncovered a modular organization within these systems. Over half of the food-webs were composed of one or more subgroups that were strongly connected internally, but weakly connected to the rest of the system. Further, these modules had distinct proliferation rates. We conclude that pathway proliferation in ecological networks reveals subgroups of species that will be functionally integrated through cyclic indirect effects. |
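Pathway proliferation, as used in the Borrett, Fath and Patten record, can be made concrete with the adjacency matrix A of a directed network: the (i, j) entry of A^k counts the pathways (walks) of length k from node i to node j, and the asymptotic proliferation rate is set by the dominant eigenvalue of A. A small sketch on an invented four-node network (not one of the 17 empirical food webs):

```python
import numpy as np

# Toy directed adjacency matrix; A[i, j] = 1 means an arc from node i to node j.
# (Purely illustrative, not one of the food webs analysed in the paper.)
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
])

# The number of pathways (walks) of length k from i to j is (A^k)[i, j];
# summing all entries gives the total number of length-k pathways in the network.
for k in (1, 2, 5, 10, 20):
    total = np.linalg.matrix_power(A, k).sum()
    print(f"pathways of length {k:2d}: {total}")

# Asymptotically the pathway count grows like lambda_max**k, so the dominant
# eigenvalue of A estimates the proliferation rate.
lam_max = max(abs(np.linalg.eigvals(A)))
print("dominant eigenvalue (proliferation rate):", round(float(lam_max), 3))
```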
1607.04463 | Marzia Di Filippo | M. Di Filippo, C. Damiani, R. Colombo, D. Pescini, G. Mauri | Constraint-based modeling and simulation of cell populations | null | null | null | null | q-bio.QM q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The intratumor heterogeneity has been recognized to characterize cancer cells
impairing the efficacy of cancer treatments. Here we propose an extension of
the constraint-based modeling approach in order to simulate the metabolism of cell
populations with the aim to provide a more complete characterization of these
systems, especially focusing on the relationships among their components. We
tested our methodology by using a toy-model and taking into account the main
metabolic pathways involved in cancer metabolic rewiring. This toy-model is
used as an individual to construct a population model characterized by multiple
interacting individuals, all having the same topology and stoichiometry, and
sharing the same nutrients supply. We observed that, in our population, cancer
cells cooperate with each other to reach a common objective, but without
necessarily having the same metabolic traits. We also noticed that the
heterogeneity emerging from the population model is due to the mismatch between
the objective of the individual members and the objective of the entire
population.
| [
{
"created": "Fri, 15 Jul 2016 11:18:36 GMT",
"version": "v1"
}
] | 2016-07-18 | [
[
"Di Filippo",
"M.",
""
],
[
"Damiani",
"C.",
""
],
[
"Colombo",
"R.",
""
],
[
"Pescini",
"D.",
""
],
[
"Mauri",
"G.",
""
]
] | The intratumor heterogeneity has been recognized to characterize cancer cells impairing the efficacy of cancer treatments. We here propose an extension of constraint-based modeling approach in order to simulate metabolism of cell populations with the aim to provide a more complete characterization of these systems, especially focusing on the relationships among their components. We tested our methodology by using a toy-model and taking into account the main metabolic pathways involved in cancer metabolic rewiring. This toy-model is used as individual to construct a population model characterized by multiple interacting individuals, all having the same topology and stoichiometry, and sharing the same nutrients supply. We observed that, in our population, cancer cells cooperate with each other to reach a common objective, but without necessarily having the same metabolic traits. We also noticed that the heterogeneity emerging from the population model is due to the mismatch between the objective of the individual members and the objective of the entire population. |
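As a point of reference for the constraint-based core that the abstract builds on, the sketch below solves a toy flux balance problem with linear programming: maximize an objective flux subject to steady-state stoichiometry and flux bounds. The three-reaction network, bounds, and objective are invented for illustration and are not the authors' population model.

```python
# Minimal flux-balance sketch of the constraint-based core (not the authors'
# population model): maximize an objective flux subject to steady-state
# stoichiometric constraints S v = 0 and flux bounds. Toy network:
#   R1: -> A        R2: A -> B        R3: B -> (objective, e.g. biomass)
import numpy as np
from scipy.optimize import linprog

S = np.array([
    [1, -1,  0],   # species A
    [0,  1, -1],   # species B
])
bounds = [(0, 10), (0, 10), (0, 10)]   # lower/upper bounds on each flux
c = np.array([0, 0, -1])               # linprog minimizes, so negate v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)        # expected: all fluxes at the upper bound 10
```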
2301.02433 | Norichika Ogata | Norichika Ogata | The Growing Liberality Observed in Primary Animal and Plant Cultures is
Common to the Social Amoeba | 2 pages, 1 figure | null | null | null | q-bio.CB | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Tissue culture environment liberates cells from ordinary laws of
multi-cellular organisms. This liberation enables cells to exhibit several behaviors, such
as proliferation, dedifferentiation, acquisition of pluripotency,
immortalization, and reprogramming. Recently, the quantitative value of
cellular dedifferentiation and differentiation was defined as liberality, which
is measurable as the Shannon entropy of numerical transcriptome data and the Lempel-Ziv
complexity of nucleotide sequence transcriptome data. The increasing liberality
induced by the culture environment was first observed in animal cells and later
reconfirmed in plant cells. The phenomenon may be common across kingdoms,
including in a social amoeba. We measured the liberality of the social amoeba
after it was disaggregated from multicellular aggregates and transferred into a liquid
medium.
| [
{
"created": "Fri, 6 Jan 2023 09:36:52 GMT",
"version": "v1"
}
] | 2023-01-09 | [
[
"Ogata",
"Norichika",
""
]
] | Tissue culture environment liberates cells from ordinary laws of multi-cellular organisms. This liberation enables cells to exhibit several behaviors, such as proliferation, dedifferentiation, acquisition of pluripotency, immortalization, and reprogramming. Recently, the quantitative value of cellular dedifferentiation and differentiation was defined as liberality, which is measurable as the Shannon entropy of numerical transcriptome data and the Lempel-Ziv complexity of nucleotide sequence transcriptome data. The increasing liberality induced by the culture environment was first observed in animal cells and later reconfirmed in plant cells. The phenomenon may be common across kingdoms, including in a social amoeba. We measured the liberality of the social amoeba after it was disaggregated from multicellular aggregates and transferred into a liquid medium.
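Since liberality is described as the Shannon entropy of numerical transcriptome data, a minimal sketch of that computation is shown below; the count vector is a made-up toy, not data from the study.

```python
# Minimal sketch: Shannon entropy of a numerical transcriptome, the quantity
# the abstract uses as a measure of "liberality". Counts below are made up.
import numpy as np

counts = np.array([120, 3, 45, 0, 980, 17, 60, 2])   # reads per gene (toy)
p = counts[counts > 0] / counts.sum()                 # normalize to probabilities
entropy_bits = -(p * np.log2(p)).sum()
print(f"Shannon entropy: {entropy_bits:.3f} bits")
```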
1811.09366 | Shulu Feng | Shulu Feng, Richard A. Friesner | Prediction of Cytochrome P450-Mediated Metabolism Using a Combination of
QSAR Derived Reactivity and Induced Fit Docking | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prediction of metabolism in cytochrome P450s remains a crucial yet
challenging topic in discovering and designing drugs, agrochemicals and
nutritional supplements. The problem is challenging because the rate of P450
metabolism depends upon both the intrinsic chemical reactivity of the site and
the protein-ligand geometry that is energetically accessible in the active site
of a given P450 isozyme. We have addressed this problem using a two-level
screening system. The first level implements an empirical QSAR-based scoring
function employing the local chemical motifs to characterize the intrinsic
reactivity. The second level uses molecular docking and molecular mechanics to
account for the geometrical effects, including induced-fit effects in the
protein which can be very important in P450 interactions with ligands. This
approach has achieved high accuracy for both the P450 3A4 and 2D6 isoforms. In
identifying at least one metabolic site in the top two ranked positions, the
prediction rate can reach as high as 92.7% for the test set of isoform 3A4. For
the 2D6 isoform, 100% accuracy is achieved on this basic evaluation metric,
and, because this active site is considerably smaller and more selective than
3A4, very high precision is attained for full prediction of all metabolic
sites. The method also requires considerably less CPU time than our previous
efforts, which involved a large number of expensive simulations for each ligand
to be evaluated. After screening using the empirical score function, only a few
best candidates are left for each ligand, making the number of necessary
estimations in the second level very small, which significantly reduces the
computation time.
| [
{
"created": "Fri, 23 Nov 2018 05:44:38 GMT",
"version": "v1"
}
] | 2018-11-26 | [
[
"Feng",
"Shulu",
""
],
[
"Friesner",
"Richard A.",
""
]
] | Prediction of metabolism in cytochrome P450s remains to be a crucial yet challenging topic in discovering and designing drugs, agrochemicals and nutritional supplements. The problem is challenging because the rate of P450 metabolism depends upon both the intrinsic chemical reactivity of the site and the protein-ligand geometry that is energetically accessible in the active site of a given P450 isozyme. We have addressed this problem using a two-level screening system. The first level implements an empirical QSAR-based scoring function employing the local chemical motifs to characterize the intrinsic reactivity. The second level uses molecular docking and molecular mechanics to account for the geometrical effects, including induced-fit effects in the protein which can be very important in P450 interactions with ligands. This approach has achieved high accuracy for both the P450 3A4 and 2D6 isoforms. In identifying at least one metabolic site in the top two ranked positions, the prediction rate can reach as high as 92.7% for the test set of isoform 3A4. For the 2D6 isoform, 100% accuracy is achieved on this basic evaluation metric, and, because this active site is considerably smaller and more selective than 3A4, very high precision is attained for full prediction of all metabolic sites. The method also requires considerably less CPU time than our previous efforts, which involved a large number of expensive simulations for each ligand to be evaluated. After screening using the empirical score function, only a few best candidates are left for each ligand, making the number of necessary estimations in the second level very small, which significantly reduces the computation time. |
1007.0471 | Dalibor Stys | Dalibor Stys | Technical performance and interpretation of physical experiment in
problems of cell biology | 23 pages | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The lecture summarises the main results of my team over the last five years in the
field of technical experiment design and interpretation of results of
experiments for cell biology. I introduce the theoretical concept of the
experiment, based mainly on ideas of stochastic systems theory, and confront
it with general ideas of systems theory. In the next part I introduce available
experiments and discuss their information content. Namely, I show that light
microscopy may be designed to give resolution comparable to that of electron
microscopy and that it may be used for experiments using living cells. I show
avenues to objective analysis of cell behavior observation. I propose a new
microscope design, which shall combine the advantages of all methods, and steps to
be taken to build a model of living cells with predictive power for practical
use.
| [
{
"created": "Sat, 3 Jul 2010 06:46:37 GMT",
"version": "v1"
}
] | 2010-07-06 | [
[
"Stys",
"Dalibor",
""
]
] | The lecture summarises the main results of my team over the last five years in the field of technical experiment design and interpretation of results of experiments for cell biology. I introduce the theoretical concept of the experiment, based mainly on ideas of stochastic systems theory, and confront it with general ideas of systems theory. In the next part I introduce available experiments and discuss their information content. Namely, I show that light microscopy may be designed to give resolution comparable to that of electron microscopy and that it may be used for experiments using living cells. I show avenues to objective analysis of cell behavior observation. I propose a new microscope design, which shall combine the advantages of all methods, and steps to be taken to build a model of living cells with predictive power for practical use.
1311.3769 | Thierry Rabilloud | Thierry Rabilloud (LCBM) | When 2D is not enough, go for an extra dimension | null | PROTEOMICS 13, 14 (2013) 2065-8 | 10.1002/pmic.201300215 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of an extra SDS separation in a different buffer system provides a
technique for deconvoluting 2D gel spots made of several proteins (Colignon et
al. Proteomics, 2013, 13, 2077-2082). This technique keeps the quantitative
analysis of the protein amounts and combines it with a strongly improved
identification process by mass spectrometry, removing identification
ambiguities in most cases. In some favorable cases, posttranslational variants
can be separated by this procedure. This versatile and easy to use technique is
anticipated to be a very valuable addition to the toolbox used in 2D gel-based
proteomics.
| [
{
"created": "Fri, 15 Nov 2013 08:39:25 GMT",
"version": "v1"
}
] | 2013-11-18 | [
[
"Rabilloud",
"Thierry",
"",
"LCBM"
]
] | The use of an extra SDS separation in a different buffer system provides a technique for deconvoluting 2D gel spots made of several proteins (Colignon et al. Proteomics, 2013, 13, 2077-2082). This technique keeps the quantitative analysis of the protein amounts and combines it with a strongly improved identification process by mass spectrometry, removing identification ambiguities in most cases. In some favorable cases, posttranslational variants can be separated by this procedure. This versatile and easy to use technique is anticipated to be a very valuable addition to the toolbox used in 2D gel-based proteomics.
2211.16742 | Bozhen Hu | Bozhen Hu, Jun Xia, Jiangbin Zheng, Cheng Tan, Yufei Huang, Yongjie
Xu, Stan Z. Li | Protein Language Models and Structure Prediction: Connection and
Progression | null | null | null | null | q-bio.QM cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The prediction of protein structures from sequences is an important task for
function prediction, drug design, and related biological processes
understanding. Recent advances have proved the power of language models (LMs)
in processing the protein sequence databases, which inherit the advantages of
attention networks and capture useful information in learning representations
for proteins. The past two years have witnessed remarkable success in tertiary
protein structure prediction (PSP), including evolution-based and
single-sequence-based PSP. It seems that instead of using energy-based models
and sampling procedures, protein language model (pLM)-based pipelines have
emerged as mainstream paradigms in PSP. Despite the fruitful progress, the PSP
community needs a systematic and up-to-date survey to help bridge the gap
between LMs in the natural language processing (NLP) and PSP domains and
introduce their methodologies, advancements and practical applications. To this
end, in this paper, we first introduce the similarities between protein and
human languages that allow LMs extended to pLMs, and applied to protein
databases. Then, we systematically review recent advances in LMs and pLMs from
the perspectives of network architectures, pre-training strategies,
applications, and commonly-used protein databases. Next, different types of
methods for PSP are discussed, particularly how the pLM-based architectures
function in the process of protein folding. Finally, we identify challenges
faced by the PSP community and foresee promising research directions along with
the advances of pLMs. This survey aims to be a hands-on guide for researchers
to understand PSP methods, develop pLMs and tackle challenging problems in this
field for practical purposes.
| [
{
"created": "Wed, 30 Nov 2022 04:58:54 GMT",
"version": "v1"
}
] | 2022-12-01 | [
[
"Hu",
"Bozhen",
""
],
[
"Xia",
"Jun",
""
],
[
"Zheng",
"Jiangbin",
""
],
[
"Tan",
"Cheng",
""
],
[
"Huang",
"Yufei",
""
],
[
"Xu",
"Yongjie",
""
],
[
"Li",
"Stan Z.",
""
]
] | The prediction of protein structures from sequences is an important task for function prediction, drug design, and related biological processes understanding. Recent advances have proved the power of language models (LMs) in processing the protein sequence databases, which inherit the advantages of attention networks and capture useful information in learning representations for proteins. The past two years have witnessed remarkable success in tertiary protein structure prediction (PSP), including evolution-based and single-sequence-based PSP. It seems that instead of using energy-based models and sampling procedures, protein language model (pLM)-based pipelines have emerged as mainstream paradigms in PSP. Despite the fruitful progress, the PSP community needs a systematic and up-to-date survey to help bridge the gap between LMs in the natural language processing (NLP) and PSP domains and introduce their methodologies, advancements and practical applications. To this end, in this paper, we first introduce the similarities between protein and human languages that allow LMs extended to pLMs, and applied to protein databases. Then, we systematically review recent advances in LMs and pLMs from the perspectives of network architectures, pre-training strategies, applications, and commonly-used protein databases. Next, different types of methods for PSP are discussed, particularly how the pLM-based architectures function in the process of protein folding. Finally, we identify challenges faced by the PSP community and foresee promising research directions along with the advances of pLMs. This survey aims to be a hands-on guide for researchers to understand PSP methods, develop pLMs and tackle challenging problems in this field for practical purposes. |
1503.01104 | Alexander Alemi | Alexander A. Alemi, Matthew Bierbaum, Christopher R. Myers, James P.
Sethna | You Can Run, You Can Hide: The Epidemiology and Statistical Mechanics of
Zombies | 13 pages, 13 figures | Phys. Rev. E 92, 052801 (2015) | 10.1103/PhysRevE.92.052801 | null | q-bio.PE physics.pop-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use a popular fictional disease, zombies, in order to introduce techniques
used in modern epidemiology modelling, and ideas and techniques used in the
numerical study of critical phenomena. We consider variants of zombie models,
from fully connected continuous time dynamics to a full scale exact stochastic
dynamic simulation of a zombie outbreak on the continental United States. Along
the way, we offer a closed form analytical expression for the fully connected
differential equation, and demonstrate that the single person per site two
dimensional square lattice version of zombies lies in the percolation
universality class. We end with a quantitative study of the full scale US
outbreak, including the average susceptibility of different geographical
regions.
| [
{
"created": "Wed, 4 Mar 2015 00:36:09 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Mar 2015 03:24:37 GMT",
"version": "v2"
},
{
"created": "Thu, 4 Jun 2015 19:26:09 GMT",
"version": "v3"
}
] | 2016-06-14 | [
[
"Alemi",
"Alexander A.",
""
],
[
"Bierbaum",
"Matthew",
""
],
[
"Myers",
"Christopher R.",
""
],
[
"Sethna",
"James P.",
""
]
] | We use a popular fictional disease, zombies, in order to introduce techniques used in modern epidemiology modelling, and ideas and techniques used in the numerical study of critical phenomena. We consider variants of zombie models, from fully connected continuous time dynamics to a full scale exact stochastic dynamic simulation of a zombie outbreak on the continental United States. Along the way, we offer a closed form analytical expression for the fully connected differential equation, and demonstrate that the single person per site two dimensional square lattice version of zombies lies in the percolation universality class. We end with a quantitative study of the full scale US outbreak, including the average susceptibility of different geographical regions. |
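The fully connected dynamics mentioned in the abstract can be sketched as a mean-field SZR (susceptible-zombie-removed) system. The equations and parameter values below are a generic SZR form chosen for illustration and may differ from the paper's exact model.

```python
# Mean-field SZR sketch (susceptible-zombie-removed), in the spirit of the
# fully connected model the abstract describes. The equations and parameter
# values here are a generic SZR form chosen for illustration, not necessarily
# the exact ones in the paper.
from scipy.integrate import solve_ivp

beta, kappa = 1.0e-3, 0.8e-3   # bite rate, zombie-kill rate (per capita, toy)

def szr(t, y):
    S, Z, R = y
    dS = -beta * S * Z                  # humans bitten
    dZ = beta * S * Z - kappa * S * Z   # new zombies minus zombies destroyed
    dR = kappa * S * Z                  # permanently removed zombies
    return [dS, dZ, dR]

sol = solve_ivp(szr, (0, 20), [1000.0, 1.0, 0.0], dense_output=True)
S, Z, R = sol.y[:, -1]
print(f"final S={S:.1f}, Z={Z:.1f}, R={R:.1f}")
```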
2210.09666 | Nen Saito | Yuji Omachi, Nen Saito, and Chikara Furusawa | Rare-Event Sampling Analysis Uncovers the Fitness Landscape of the
Genetic Code | 8 pages, 3 figures | null | 10.1371/journal.pcbi.1011034 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The genetic code refers to a rule that maps 64 codons to 20 amino acids.
Nearly all organisms, with few exceptions, share the same genetic code, the
standard genetic code (SGC). While it remains unclear why this universal code
has arisen and been maintained during evolution, it may have been preserved
under selection pressure. Theoretical studies comparing the SGC and numerically
created hypothetical random genetic codes have suggested that the SGC has been
subject to strong selection pressure for being robust against translation
errors. However, these prior studies have searched for random genetic codes in
only a small subspace of the possible code space due to limitations in
computation time. Thus, how the genetic code has evolved, and the
characteristics of the genetic code fitness landscape, remain unclear. By
applying multicanonical Monte Carlo, an efficient rare-event sampling method,
we efficiently sampled random codes from a much broader random ensemble of
genetic codes than in previous studies, estimating that only one out of every
$10^{20}$ random codes is more robust than the SGC. This estimate is
significantly smaller than the previous estimate, one in a million. We also
characterized the fitness landscape of the genetic code that has four major
fitness peaks, one of which includes the SGC. Furthermore, genetic algorithm
analysis revealed that evolution under such a multi-peaked fitness landscape
could be strongly biased toward a narrow peak, in an evolutionary
path-dependent manner.
| [
{
"created": "Tue, 18 Oct 2022 08:08:45 GMT",
"version": "v1"
}
] | 2023-05-10 | [
[
"Omachi",
"Yuji",
""
],
[
"Saito",
"Nen",
""
],
[
"Furusawa",
"Chikara",
""
]
] | The genetic code refers to a rule that maps 64 codons to 20 amino acids. Nearly all organisms, with few exceptions, share the same genetic code, the standard genetic code (SGC). While it remains unclear why this universal code has arisen and been maintained during evolution, it may have been preserved under selection pressure. Theoretical studies comparing the SGC and numerically created hypothetical random genetic codes have suggested that the SGC has been subject to strong selection pressure for being robust against translation errors. However, these prior studies have searched for random genetic codes in only a small subspace of the possible code space due to limitations in computation time. Thus, how the genetic code has evolved, and the characteristics of the genetic code fitness landscape, remain unclear. By applying multicanonical Monte Carlo, an efficient rare-event sampling method, we efficiently sampled random codes from a much broader random ensemble of genetic codes than in previous studies, estimating that only one out of every $10^{20}$ random codes is more robust than the SGC. This estimate is significantly smaller than the previous estimate, one in a million. We also characterized the fitness landscape of the genetic code that has four major fitness peaks, one of which includes the SGC. Furthermore, genetic algorithm analysis revealed that evolution under such a multi-peaked fitness landscape could be strongly biased toward a narrow peak, in an evolutionary path-dependent manner. |
2407.19851 | Li Chen | Guozhong Zheng, Jiqiang Zhang, Shengfeng Deng, Weiran Cai, Li Chen | Evolution of cooperation in the public goods game with Q-learning | 16 pages, 12 figures, comments are appreciated | null | null | null | q-bio.PE cond-mat.stat-mech nlin.AO | http://creativecommons.org/licenses/by/4.0/ | Recent paradigm shifts from imitation learning to reinforcement learning (RL)
are shown to be productive in understanding human behaviors. In the RL paradigm,
individuals search for optimal strategies through interaction with the
environment to make decisions. This implies that gathering, processing, and
utilizing information from their surroundings are crucial. However, existing
studies typically study pairwise games such as the prisoners' dilemma and
employ a self-regarding setup, where individuals play against one opponent
based solely on their own strategies, neglecting the environmental information.
In this work, we investigate the evolution of cooperation with the multiplayer
game -- the public goods game using the Q-learning algorithm by leveraging the
environmental information. Specifically, the decision-making of players is
based upon the cooperation information in their neighborhood. Our results show
that cooperation is more likely to emerge compared to the case of imitation
learning by using Fermi rule. Of particular interest is the observation of an
anomalous non-monotonic dependence which is revealed when voluntary
participation is further introduced. The analysis of the Q-table explains the
mechanisms behind the cooperation evolution. Our findings indicate the
fundamental role of environmental information in the RL paradigm for understanding
the evolution of cooperation, and human behaviors in general.
| [
{
"created": "Mon, 29 Jul 2024 10:09:07 GMT",
"version": "v1"
}
] | 2024-07-30 | [
[
"Zheng",
"Guozhong",
""
],
[
"Zhang",
"Jiqiang",
""
],
[
"Deng",
"Shengfeng",
""
],
[
"Cai",
"Weiran",
""
],
[
"Chen",
"Li",
""
]
] | Recent paradigm shifts from imitation learning to reinforcement learning (RL) is shown to be productive in understanding human behaviors. In the RL paradigm, individuals search for optimal strategies through interaction with the environment to make decisions. This implies that gathering, processing, and utilizing information from their surroundings are crucial. However, existing studies typically study pairwise games such as the prisoners' dilemma and employ a self-regarding setup, where individuals play against one opponent based solely on their own strategies, neglecting the environmental information. In this work, we investigate the evolution of cooperation with the multiplayer game -- the public goods game using the Q-learning algorithm by leveraging the environmental information. Specifically, the decision-making of players is based upon the cooperation information in their neighborhood. Our results show that cooperation is more likely to emerge compared to the case of imitation learning by using Fermi rule. Of particular interest is the observation of an anomalous non-monotonic dependence which is revealed when voluntary participation is further introduced. The analysis of the Q-table explains the mechanisms behind the cooperation evolution. Our findings indicate the fundamental role of environment information in the RL paradigm to understand the evolution of cooperation, and human behaviors in general. |
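A minimal sketch of the learning setup the abstract describes is given below: agents on a ring play a public goods game with their two neighbors, perceive the number of cooperating neighbors from the previous round as their state, and update a Q-table with an epsilon-greedy policy. The topology, group size, and parameter values are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of Q-learning in a spatial public goods game: each agent's
# state is the number of cooperators in its neighborhood in the previous
# round (the "environmental information"), and its action is C or D.
# Parameters and the ring topology are illustrative choices, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
N, r, alpha, gamma, eps, T = 100, 3.0, 0.1, 0.9, 0.05, 5000
Q = np.zeros((N, 3, 2))                 # Q[agent, n_coop_neighbors (0-2), action]
state = rng.integers(0, 3, size=N)      # start from random perceived states

for t in range(T):
    # epsilon-greedy action selection from each agent's Q-table (1 = cooperate)
    greedy = Q[np.arange(N), state].argmax(axis=1)
    explore = rng.random(N) < eps
    actions = np.where(explore, rng.integers(0, 2, size=N), greedy)

    # public goods payoff within each agent's group {left, self, right}
    left, right = np.roll(actions, 1), np.roll(actions, -1)
    group_coop = left + actions + right
    payoff = r * group_coop / 3.0 - actions      # share of the pot minus own cost

    new_state = left + right                     # cooperating neighbors observed
    best_next = Q[np.arange(N), new_state].max(axis=1)
    idx = (np.arange(N), state, actions)
    Q[idx] += alpha * (payoff + gamma * best_next - Q[idx])
    state = new_state

print("final cooperation fraction:", actions.mean())
```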
1610.02308 | Henning Dickten | Henning Dickten and Klaus Lehnertz | Identifying delayed directional couplings with symbolic transfer entropy | null | Phys. Rev. E 90, 062706 (2014) | 10.1103/PhysRevE.90.062706 | null | q-bio.NC nlin.CD physics.comp-ph physics.data-an physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a straightforward extension of symbolic transfer entropy to enable
the investigation of delayed directional relationships between coupled
dynamical systems from time series. Analyzing time series from chaotic model
systems, we demonstrate the applicability and limitations of our approach. Our
findings obtained from applying our method to infer delayed directed
interactions in the human epileptic brain underline the importance of our
approach for improving the construction of functional network structures from
data.
| [
{
"created": "Thu, 6 Oct 2016 12:56:27 GMT",
"version": "v1"
}
] | 2016-10-10 | [
[
"Dickten",
"Henning",
""
],
[
"Lehnertz",
"Klaus",
""
]
] | We propose a straightforward extension of symbolic transfer entropy to enable the investigation of delayed directional relationships between coupled dynamical systems from time series. Analyzing time series from chaotic model systems, we demonstrate the applicability and limitations of our approach. Our findings obtained from applying our method to infer delayed directed interactions in the human epileptic brain underline the importance of our approach for improving the construction of functional network structures from data. |
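The delayed variant can be prototyped directly from the definition: symbolize both series with ordinal patterns and estimate the transfer entropy from the symbol histograms while lagging the source by delta. The embedding dimension, estimator details, and the toy coupled signals below are illustrative choices, not the authors' implementation.

```python
# Hedged sketch of delayed symbolic transfer entropy: symbolize each series
# with ordinal patterns (embedding dimension 3), then estimate
# T_{X->Y}(delta) = sum p(y_{t+1}, y_t, x_{t-delta})
#                   * log[ p(y_{t+1} | y_t, x_{t-delta}) / p(y_{t+1} | y_t) ].
# This follows the general definition; binning and embedding choices are ours.
import numpy as np
from collections import Counter

def symbolize(x, m=3):
    """Map each length-m window to the index of its ordinal (rank) pattern."""
    windows = np.lib.stride_tricks.sliding_window_view(x, m)
    ranks = windows.argsort(axis=1).argsort(axis=1)
    return (ranks * np.array([m ** i for i in range(m)])).sum(axis=1)

def delayed_ste(x, y, delta, m=3):
    sx, sy = symbolize(x, m), symbolize(y, m)
    n = min(len(sx), len(sy))
    triples, pairs_yx, pairs_yy, singles_y = Counter(), Counter(), Counter(), Counter()
    for t in range(delta, n - 1):
        triples[(sy[t + 1], sy[t], sx[t - delta])] += 1
        pairs_yx[(sy[t], sx[t - delta])] += 1
        pairs_yy[(sy[t + 1], sy[t])] += 1
        singles_y[sy[t]] += 1
    total = sum(triples.values())
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / total
        p_y1_given_yx = c / pairs_yx[(y0, x0)]
        p_y1_given_y = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * np.log2(p_y1_given_yx / p_y1_given_y)
    return te

# Toy coupled system: y follows x with a lag of 5 samples.
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
y = np.roll(x, 5) + 0.5 * rng.standard_normal(5000)
# Expected to peak near d=4 for this construction (y_{t+1} is driven by x_{t-4}).
print([round(delayed_ste(x, y, d), 3) for d in (0, 4, 9)])
```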
1103.4339 | Andrii Mironchenko | Andrii Mironchenko and Jan Kozlowski | Optimal allocation patterns and optimal seed mass of a perennial plant | null | null | null | null | q-bio.PE cs.SY math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel optimal allocation model for perennial plants, in which
assimilates are not allocated directly to vegetative or reproductive parts but
instead go first to a storage compartment from where they are then optimally
redistributed. We do not restrict considerations purely to periods favourable
for photosynthesis, as it was done in published models of perennial species,
but analyse the whole life period of a perennial plant. As a result, we obtain
the general scheme of perennial plant development, for which annual and
monocarpic strategies are special cases.
We not only re-derive predictions from several previous optimal allocation
models, but also obtain more information about plants' strategies during
transitions between favourable and unfavourable seasons. One of the model's
predictions is that a plant can begin to re-establish vegetative tissues from
storage, some time before the beginning of favourable conditions, which in turn
allows for better production potential when conditions become better. By means
of numerical examples we show that annual plants with single or multiple
reproduction periods, monocarps, evergreen perennials and polycarpic perennials
can be studied successfully with the help of our unified model.
Finally, we build a bridge between optimal allocation models and models
describing trade-offs between size and the number of seeds: a modelled plant
can control the distribution of not only allocated carbohydrates but also seed
size. We provide sufficient conditions for the optimality of producing the
smallest and largest seeds possible.
| [
{
"created": "Tue, 22 Mar 2011 18:25:16 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Oct 2012 12:50:57 GMT",
"version": "v2"
},
{
"created": "Mon, 30 Sep 2013 16:12:26 GMT",
"version": "v3"
}
] | 2013-10-01 | [
[
"Mironchenko",
"Andrii",
""
],
[
"Kozlowski",
"Jan",
""
]
] | We present a novel optimal allocation model for perennial plants, in which assimilates are not allocated directly to vegetative or reproductive parts but instead go first to a storage compartment from where they are then optimally redistributed. We do not restrict considerations purely to periods favourable for photosynthesis, as it was done in published models of perennial species, but analyse the whole life period of a perennial plant. As a result, we obtain the general scheme of perennial plant development, for which annual and monocarpic strategies are special cases. We not only re-derive predictions from several previous optimal allocation models, but also obtain more information about plants' strategies during transitions between favourable and unfavourable seasons. One of the model's predictions is that a plant can begin to re-establish vegetative tissues from storage, some time before the beginning of favourable conditions, which in turn allows for better production potential when conditions become better. By means of numerical examples we show that annual plants with single or multiple reproduction periods, monocarps, evergreen perennials and polycarpic perennials can be studied successfully with the help of our unified model. Finally, we build a bridge between optimal allocation models and models describing trade-offs between size and the number of seeds: a modelled plant can control the distribution of not only allocated carbohydrates but also seed size. We provide sufficient conditions for the optimality of producing the smallest and largest seeds possible. |
1105.4961 | Mikko Tuomi | Mikko Tuomi, Jussi Rasinm\"aki, Anna Repo, Pekka Vanhala, Jari Liski | Soil carbon model Yasso07 graphical user interface | 15 pages, 1 figure. Accepted for publication in Environmental
Modelling and Software | null | null | null | q-bio.QM physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, we present a graphical user interface software for the
litter decomposition and soil carbon model Yasso07 and an overview of the
principles and formulae it is based on. The software can be used to test the
model and use it in simple applications. Yasso07 is applicable to upland soils
of different ecosystems worldwide, because it has been developed using data
covering the global climate conditions and representing various ecosystem
types. As input information, Yasso07 requires data on litter input to soil,
climate conditions, and land-use change if any. The model predictions are given
as probability densities representing the uncertainties in the parameter values
of the model and those in the input data - the user interface calculates these
densities using a built-in Monte Carlo simulation.
| [
{
"created": "Wed, 25 May 2011 08:18:07 GMT",
"version": "v1"
}
] | 2011-05-26 | [
[
"Tuomi",
"Mikko",
""
],
[
"Rasinmäki",
"Jussi",
""
],
[
"Repo",
"Anna",
""
],
[
"Vanhala",
"Pekka",
""
],
[
"Liski",
"Jari",
""
]
] | In this article, we present a graphical user interface software for the litter decomposition and soil carbon model Yasso07 and an overview of the principles and formulae it is based on. The software can be used to test the model and use it in simple applications. Yasso07 is applicable to upland soils of different ecosystems worldwide, because it has been developed using data covering the global climate conditions and representing various ecosystem types. As input information, Yasso07 requires data on litter input to soil, climate conditions, and land-use change if any. The model predictions are given as probability densities representing the uncertainties in the parameter values of the model and those in the input data - the user interface calculates these densities using a built-in Monte Carlo simulation. |
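The Monte Carlo idea described for the interface, i.e. propagating parameter uncertainty into a density of predictions, can be sketched with a drastically simplified first-order decomposition model; the decay model, parameter distribution, and numbers below are placeholders and not Yasso07 itself.

```python
# Hedged sketch of the Monte Carlo idea described for Yasso07's interface:
# propagate parameter uncertainty through a (here extremely simplified)
# first-order litter decomposition model to get a density of predictions.
# The decay model and the distributions are placeholders, not Yasso07 itself.
import numpy as np

rng = np.random.default_rng(3)
n_draws, years, litter_input = 10_000, 50, 2.0   # draws, horizon, litter input (toy units)

k = rng.normal(loc=0.08, scale=0.02, size=n_draws).clip(min=1e-3)  # decay rate draws
# Steady accumulation with first-order decay: C(t) = (I/k) * (1 - exp(-k t))
carbon = (litter_input / k) * (1 - np.exp(-k * years))

lo, med, hi = np.percentile(carbon, [2.5, 50, 97.5])
print(f"soil C after {years} yr: median {med:.1f}, 95% interval [{lo:.1f}, {hi:.1f}]")
```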
1611.06973 | Seymour Knowles-Barley | Seymour Knowles-Barley, Verena Kaynig, Thouis Ray Jones, Alyssa
Wilson, Joshua Morgan, Dongil Lee, Daniel Berger, Narayanan Kasthuri, Jeff W.
Lichtman, Hanspeter Pfister | RhoanaNet Pipeline: Dense Automatic Neural Annotation | 13 pages, 4 figures | null | null | null | q-bio.NC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reconstructing a synaptic wiring diagram, or connectome, from electron
microscopy (EM) images of brain tissue currently requires many hours of manual
annotation or proofreading (Kasthuri and Lichtman, 2010; Lichtman and Sanes,
2008; Seung, 2009). The desire to reconstruct ever larger and more complex
networks has pushed the collection of ever larger EM datasets. A cubic
millimeter of raw imaging data would take up 1 PB of storage and present an
annotation project that would be impractical without relying heavily on
automatic segmentation methods. The RhoanaNet image processing pipeline was
developed to automatically segment large volumes of EM data and ease the burden
of manual proofreading and annotation. Based on (Kaynig et al., 2015), we
updated every stage of the software pipeline to provide better throughput
performance and higher quality segmentation results. We used state of the art
deep learning techniques to generate improved membrane probability maps, and
Gala (Nunez-Iglesias et al., 2014) was used to agglomerate 2D segments into 3D
objects.
We applied the RhoanaNet pipeline to four densely annotated EM datasets, two
from mouse cortex, one from cerebellum and one from mouse lateral geniculate
nucleus (LGN). All training and test data is made available for benchmark
comparisons. The best segmentation results obtained gave
$V^\text{Info}_\text{F-score}$ scores of 0.9054 and 0.9182 for the cortex
datasets, 0.9438 for LGN, and 0.9150 for Cerebellum.
The RhoanaNet pipeline is open source software. All source code, training
data, test data, and annotations for all four benchmark datasets are available
at www.rhoana.org.
| [
{
"created": "Mon, 21 Nov 2016 19:48:29 GMT",
"version": "v1"
}
] | 2016-11-22 | [
[
"Knowles-Barley",
"Seymour",
""
],
[
"Kaynig",
"Verena",
""
],
[
"Jones",
"Thouis Ray",
""
],
[
"Wilson",
"Alyssa",
""
],
[
"Morgan",
"Joshua",
""
],
[
"Lee",
"Dongil",
""
],
[
"Berger",
"Daniel",
""
],
[
"Kasthuri",
"Narayanan",
""
],
[
"Lichtman",
"Jeff W.",
""
],
[
"Pfister",
"Hanspeter",
""
]
] | Reconstructing a synaptic wiring diagram, or connectome, from electron microscopy (EM) images of brain tissue currently requires many hours of manual annotation or proofreading (Kasthuri and Lichtman, 2010; Lichtman and Sanes, 2008; Seung, 2009). The desire to reconstruct ever larger and more complex networks has pushed the collection of ever larger EM datasets. A cubic millimeter of raw imaging data would take up 1 PB of storage and present an annotation project that would be impractical without relying heavily on automatic segmentation methods. The RhoanaNet image processing pipeline was developed to automatically segment large volumes of EM data and ease the burden of manual proofreading and annotation. Based on (Kaynig et al., 2015), we updated every stage of the software pipeline to provide better throughput performance and higher quality segmentation results. We used state of the art deep learning techniques to generate improved membrane probability maps, and Gala (Nunez-Iglesias et al., 2014) was used to agglomerate 2D segments into 3D objects. We applied the RhoanaNet pipeline to four densely annotated EM datasets, two from mouse cortex, one from cerebellum and one from mouse lateral geniculate nucleus (LGN). All training and test data is made available for benchmark comparisons. The best segmentation results obtained gave $V^\text{Info}_\text{F-score}$ scores of 0.9054 and 09182 for the cortex datasets, 0.9438 for LGN, and 0.9150 for Cerebellum. The RhoanaNet pipeline is open source software. All source code, training data, test data, and annotations for all four benchmark datasets are available at www.rhoana.org. |
0811.1005 | Michel Gauthier | Michel G. Gauthier, Gary W. Slater | Non-driven polymer translocation through a nanopore: computational
evidence that the escape and relaxation processes are coupled | null | null | 10.1103/PhysRevE.79.021802 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most of the theoretical models describing the translocation of a polymer
chain through a nanopore use the hypothesis that the polymer is always relaxed
during the complete process. In other words, models generally assume that the
characteristic relaxation time of the chain is small enough compared to the
translocation time that non-equilibrium molecular conformations can be ignored.
In this paper, we use Molecular Dynamics simulations to directly test this
hypothesis by looking at the escape time of unbiased polymer chains starting
with different initial conditions. We find that the translocation process is
not quite in equilibrium for the systems studied, even though the translocation
time tau is about 10 times larger than the relaxation time tau_r. Our most
striking result is the observation that the last half of the chain escapes in
less than ~12% of the total escape time, which implies that there is a large
acceleration of the chain at the end of its escape from the channel.
| [
{
"created": "Thu, 6 Nov 2008 17:47:45 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Gauthier",
"Michel G.",
""
],
[
"Slater",
"Gary W.",
""
]
] | Most of the theoretical models describing the translocation of a polymer chain through a nanopore use the hypothesis that the polymer is always relaxed during the complete process. In other words, models generally assume that the characteristic relaxation time of the chain is small enough compared to the translocation time that non-equilibrium molecular conformations can be ignored. In this paper, we use Molecular Dynamics simulations to directly test this hypothesis by looking at the escape time of unbiased polymer chains starting with different initial conditions. We find that the translocation process is not quite in equilibrium for the systems studied, even though the translocation time tau is about 10 times larger than the relaxation time tau_r. Our most striking result is the observation that the last half of the chain escapes in less than ~12% of the total escape time, which implies that there is a large acceleration of the chain at the end of its escape from the channel. |
0706.3684 | Pete Donnell | Pete Donnell, Murad Banaji and Stephen Baigent | Stability in generic mitochondrial models | 22 pages, 1 figure. Submitted to Mathematical Biosciences | null | null | null | q-bio.QM q-bio.SC | null | In this paper, we use a variety of mathematical techniques to explore
existence, local stability, and global stability of equilibria in abstract
models of mitochondrial metabolism. The class of models constructed is defined
by the biological description of the system, with minimal mathematical
assumptions. The key features are an electron transport chain coupled to a
process of charge translocation across a membrane. In the absence of charge
translocation these models have previously been shown to behave in a very
simple manner with a single, globally stable equilibrium. We show that with
charge translocation the conclusion about a unique equilibrium remains true,
but local and global stability do not necessarily follow. In sufficiently low
dimensions - i.e. for short electron transport chains - it is possible to make
claims about local and global stability of the equilibrium. On the other hand,
for longer chains, these general claims are no longer valid. Some particular
conditions which ensure stability of the equilibrium for chains of arbitrary
length are presented.
| [
{
"created": "Mon, 25 Jun 2007 17:55:06 GMT",
"version": "v1"
}
] | 2007-06-26 | [
[
"Donnell",
"Pete",
""
],
[
"Banaji",
"Murad",
""
],
[
"Baigent",
"Stephen",
""
]
] | In this paper, we use a variety of mathematical techniques to explore existence, local stability, and global stability of equilibria in abstract models of mitochondrial metabolism. The class of models constructed is defined by the biological description of the system, with minimal mathematical assumptions. The key features are an electron transport chain coupled to a process of charge translocation across a membrane. In the absence of charge translocation these models have previously been shown to behave in a very simple manner with a single, globally stable equilibrium. We show that with charge translocation the conclusion about a unique equilibrium remains true, but local and global stability do not necessarily follow. In sufficiently low dimensions - i.e. for short electron transport chains - it is possible to make claims about local and global stability of the equilibrium. On the other hand, for longer chains, these general claims are no longer valid. Some particular conditions which ensure stability of the equilibrium for chains of arbitrary length are presented. |
1006.0408 | Franziska Hinkelmann | Franziska Hinkelmann, David Murrugarra, Abdul Salam Jarrah, Reinhard
Laubenbacher | A Mathematical Framework for Agent Based Models of Complex Biological
Networks | To appear in Bulletin of Mathematical Biology | null | 10.1007/S11538-010-9582-8 | null | q-bio.QM cs.MA physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Agent-based modeling and simulation is a useful method to study biological
phenomena in a wide range of fields, from molecular biology to ecology. Since
there is currently no agreed-upon standard way to specify such models it is not
always easy to use published models. Also, since model descriptions are not
usually given in mathematical terms, it is difficult to bring mathematical
analysis tools to bear, so that models are typically studied through
simulation. In order to address this issue, Grimm et al. proposed a protocol
for model specification, the so-called ODD protocol, which provides a standard
way to describe models. This paper proposes an addition to the ODD protocol
which allows the description of an agent-based model as a dynamical system,
which provides access to computational and theoretical tools for its analysis.
The mathematical framework is that of algebraic models, that is, time-discrete
dynamical systems with algebraic structure. It is shown by way of several
examples how this mathematical specification can help with model analysis.
| [
{
"created": "Wed, 2 Jun 2010 14:39:31 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Jul 2010 02:29:09 GMT",
"version": "v2"
},
{
"created": "Fri, 9 Jul 2010 00:46:57 GMT",
"version": "v3"
},
{
"created": "Sat, 28 Aug 2010 17:58:04 GMT",
"version": "v4"
},
{
"created": "Thu, 9 Sep 2010 16:24:07 GMT",
"version": "v5"
}
] | 2010-10-14 | [
[
"Hinkelmann",
"Franziska",
""
],
[
"Murrugarra",
"David",
""
],
[
"Jarrah",
"Abdul Salam",
""
],
[
"Laubenbacher",
"Reinhard",
""
]
] | Agent-based modeling and simulation is a useful method to study biological phenomena in a wide range of fields, from molecular biology to ecology. Since there is currently no agreed-upon standard way to specify such models it is not always easy to use published models. Also, since model descriptions are not usually given in mathematical terms, it is difficult to bring mathematical analysis tools to bear, so that models are typically studied through simulation. In order to address this issue, Grimm et al. proposed a protocol for model specification, the so-called ODD protocol, which provides a standard way to describe models. This paper proposes an addition to the ODD protocol which allows the description of an agent-based model as a dynamical system, which provides access to computational and theoretical tools for its analysis. The mathematical framework is that of algebraic models, that is, time-discrete dynamical systems with algebraic structure. It is shown by way of several examples how this mathematical specification can help with model analysis. |
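A minimal example of the kind of algebraic model being proposed is a time-discrete polynomial dynamical system over the finite field F_2; the three-variable update rules below are invented purely to show how the state space can then be analyzed exhaustively.

```python
# Hedged sketch of the kind of algebraic model the paper advocates: a
# time-discrete dynamical system over the finite field F_2, where each
# variable is updated by a polynomial (written here as a Boolean rule).
# The three-node example system is invented for illustration.
from itertools import product

def step(state):
    x1, x2, x3 = state
    # Polynomial updates over F_2 (XOR = +, AND = *):
    f1 = x2 * x3              # x1 <- x2*x3
    f2 = (x1 + x3) % 2        # x2 <- x1 + x3
    f3 = x1                   # x3 <- x1
    return (f1, f2, f3)

# Exhaustively map the whole state space (2^3 states) and list fixed points.
transitions = {s: step(s) for s in product((0, 1), repeat=3)}
fixed_points = [s for s, t in transitions.items() if s == t]
print(transitions)
print("fixed points:", fixed_points)   # (0, 0, 0) is fixed for these rules
```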
1505.06658 | Yasunori Aoki | Yasunori Aoki, Monika Sundqvist, Andrew C. Hooker and Peter Gennemark | PopED lite: an optimal design software for preclinical pharmacokinetic
and pharmacodynamic studies | Submitted to Computer Methods and Programs in Biomedicine | null | null | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimal experimental design approaches are seldom used in pre-clinical drug
discovery. Main reasons for this lack of use are that available software tools
require relatively high insight in optimal design theory, and that the
design-execution cycle of in vivo experiments is short, making time-consuming
optimizations infeasible. We present the publicly available software PopED lite
in order to increase the use of optimal design in pre-clinical drug discovery.
PopED lite is designed to be simple, fast and intuitive. Simple, to give many
users access to basic optimal design calculations. Fast, to fit the short
design-execution cycle and allow interactive experimental design (test one
design, discuss proposed design, test another design, etc). Intuitive, so that
the input to and output from the software can easily be understood by users
without knowledge of the theory of optimal design. In this way, PopED lite is
highly useful in practice and complements existing tools. Key functionality of
PopED lite is demonstrated by three case studies from real drug discovery
projects.
| [
{
"created": "Mon, 25 May 2015 15:07:44 GMT",
"version": "v1"
}
] | 2015-05-26 | [
[
"Aoki",
"Yasunori",
""
],
[
"Sundqvist",
"Monika",
""
],
[
"Hooker",
"Andrew C.",
""
],
[
"Gennemark",
"Peter",
""
]
] | Optimal experimental design approaches are seldom used in pre-clinical drug discovery. Main reasons for this lack of use are that available software tools require relatively high insight in optimal design theory, and that the design-execution cycle of in vivo experiments is short, making time-consuming optimizations infeasible. We present the publicly available software PopED lite in order to increase the use of optimal design in pre-clinical drug discovery. PopED lite is designed to be simple, fast and intuitive. Simple, to give many users access to basic optimal design calculations. Fast, to fit the short design-execution cycle and allow interactive experimental design (test one design, discuss proposed design, test another design, etc). Intuitive, so that the input to and output from the software can easily be understood by users without knowledge of the theory of optimal design. In this way, PopED lite is highly useful in practice and complements existing tools. Key functionality of PopED lite is demonstrated by three case studies from real drug discovery projects. |
1603.04955 | Stephen Pankavich | Paul Diaz, Paul Constantine, Kelsey Kalmbach, Eric Jones, and Stephen
Pankavich | A Modified SEIR Model for the Spread of Ebola in Western Africa and
Metrics for Resource Allocation | 28 pages, 9 figures | null | null | null | q-bio.PE math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A modified, deterministic SEIR model is developed for the 2014 Ebola epidemic
occurring in the West African nations of Guinea, Liberia, and Sierra Leone. The
model describes the dynamical interaction of susceptible and infected
populations, while accounting for the effects of hospitalization and the spread
of disease through interactions with deceased, but infectious, individuals.
Using data from the World Health Organization (WHO), parameters within the
model are fit to recent estimates of infected and deceased cases from each
nation. The model is then analyzed using these parameter values. Finally,
several metrics are proposed to determine which of these nations is in greatest
need of additional resources to combat the spread of infection. These include
local and global sensitivity metrics of both the infected population and the
basic reproduction number with respect to rates of hospitalization and proper
burial.
| [
{
"created": "Wed, 16 Mar 2016 04:15:58 GMT",
"version": "v1"
},
{
"created": "Sat, 22 Oct 2016 20:56:51 GMT",
"version": "v2"
},
{
"created": "Sat, 19 Nov 2016 07:33:42 GMT",
"version": "v3"
}
] | 2016-11-22 | [
[
"Diaz",
"Paul",
""
],
[
"Constantine",
"Paul",
""
],
[
"Kalmbach",
"Kelsey",
""
],
[
"Jones",
"Eric",
""
],
[
"Pankavich",
"Stephen",
""
]
] | A modified, deterministic SEIR model is developed for the 2014 Ebola epidemic occurring in the West African nations of Guinea, Liberia, and Sierra Leone. The model describes the dynamical interaction of susceptible and infected populations, while accounting for the effects of hospitalization and the spread of disease through interactions with deceased, but infectious, individuals. Using data from the World Health Organization (WHO), parameters within the model are fit to recent estimates of infected and deceased cases from each nation. The model is then analyzed using these parameter values. Finally, several metrics are proposed to determine which of these nations is in greatest need of additional resources to combat the spread of infection. These include local and global sensitivity metrics of both the infected population and the basic reproduction number with respect to rates of hospitalization and proper burial. |
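A generic SEIR-type system with hospitalized and infectious-deceased compartments, in the spirit of the abstract, can be written down and integrated as below; the equations and parameter values are illustrative placeholders rather than the calibrated model from the paper.

```python
# Hedged sketch of an SEIR-type model with hospitalized (H) and
# infectious-deceased (D) compartments, in the spirit of the abstract.
# The equations and parameter values below are generic illustrations,
# not the calibrated model from the paper.
from scipy.integrate import solve_ivp

beta_i, beta_h, beta_d = 0.30, 0.10, 0.40    # transmission from I, H, D contacts
sigma, gamma, eta = 1 / 10, 1 / 8, 1 / 5     # incubation, recovery, hospitalization
mu, burial = 0.05, 1 / 3                     # disease death rate, proper-burial rate

def rhs(t, y, N):
    S, E, I, H, R, D = y
    force = (beta_i * I + beta_h * H + beta_d * D) / N
    dS = -force * S
    dE = force * S - sigma * E
    dI = sigma * E - (gamma + eta + mu) * I
    dH = eta * I - (gamma + mu) * H
    dR = gamma * (I + H)
    dD = mu * (I + H) - burial * D          # deceased remain infectious until buried
    return [dS, dE, dI, dH, dR, dD]

N = 1e6
y0 = [N - 10, 0, 10, 0, 0, 0]
sol = solve_ivp(rhs, (0, 365), y0, args=(N,), max_step=1.0)
print(f"peak infectious: {sol.y[2].max():.0f}")
```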
1508.03796 | Michael Margaliot | Alon Raveh and Yoram Zarai and Michael Margaliot and Tamir Tuller | Ribosome Flow Model on a Ring | arXiv admin note: substantial text overlap with arXiv:1406.7248 | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The asymmetric simple exclusion process (ASEP) is an important model from
statistical physics describing particles that hop randomly from one site to the
next along an ordered lattice of sites, but only if the next site is empty.
ASEP has been used to model and analyze numerous multiagent systems with local
interactions including the flow of ribosomes along the mRNA strand.
In ASEP with periodic boundary conditions a particle that hops from the last
site returns to the first one. The mean field approximation of this model is
referred to as the ribosome flow model on a ring (RFMR). The RFMR may be used
to model both synthetic and endogenous gene expression regimes.
We analyze the RFMR using the theory of monotone dynamical systems. We show
that it admits a continuum of equilibrium points and that every trajectory
converges to an equilibrium point. Furthermore, we show that it entrains to
periodic transition rates between the sites. We describe the implications of
the analysis results to understanding and engineering cyclic mRNA translation
in-vitro and in-vivo.
| [
{
"created": "Sun, 16 Aug 2015 07:33:50 GMT",
"version": "v1"
}
] | 2015-08-18 | [
[
"Raveh",
"Alon",
""
],
[
"Zarai",
"Yoram",
""
],
[
"Margaliot",
"Michael",
""
],
[
"Tuller",
"Tamir",
""
]
] | The asymmetric simple exclusion process (ASEP) is an important model from statistical physics describing particles that hop randomly from one site to the next along an ordered lattice of sites, but only if the next site is empty. ASEP has been used to model and analyze numerous multiagent systems with local interactions including the flow of ribosomes along the mRNA strand. In ASEP with periodic boundary conditions a particle that hops from the last site returns to the first one. The mean field approximation of this model is referred to as the ribosome flow model on a ring (RFMR). The RFMR may be used to model both synthetic and endogenous gene expression regimes. We analyze the RFMR using the theory of monotone dynamical systems. We show that it admits a continuum of equilibrium points and that every trajectory converges to an equilibrium point. Furthermore, we show that it entrains to periodic transition rates between the sites. We describe the implications of the analysis results to understanding and engineering cyclic mRNA translation in-vitro and in-vivo. |
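The RFMR dynamics can be sketched by wrapping the standard ribosome-flow-model rate equations cyclically; the number of sites, the transition rates, and the initial occupancies below are arbitrary illustrations. The run also checks the conservation of total density on the ring, consistent with the continuum of equilibria described above.

```python
# Sketch of the ribosome flow model on a ring: the standard RFM rate equations
# wrapped cyclically, dx_i/dt = lam[i-1]*x[i-1]*(1-x[i]) - lam[i]*x[i]*(1-x[i+1]).
# The site count, rates, and initial occupancies below are arbitrary illustrations.
import numpy as np
from scipy.integrate import solve_ivp

lam = np.array([1.0, 0.6, 1.2, 0.8, 1.0])     # transition rates between sites

def rfmr(t, x):
    inflow = np.roll(lam, 1) * np.roll(x, 1) * (1 - x)
    outflow = lam * x * (1 - np.roll(x, -1))
    return inflow - outflow

x0 = np.array([0.9, 0.1, 0.3, 0.5, 0.2])
sol = solve_ivp(rfmr, (0, 200), x0, max_step=0.1)
xf = sol.y[:, -1]
print("equilibrium occupancies:", np.round(xf, 3))
print("total density conserved:", np.isclose(xf.sum(), x0.sum()))
```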
1501.02664 | Burak Erman | Burak Erman | Effects of ligand binding upon flexibility of proteins | 6 pages, 3 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Binding of a ligand on a protein changes the flexibility of certain parts of
the protein, which directly affects its function. These changes are not the
same at each point, some parts become more flexible and some others become
stiffer. Here, an equation is derived that gives the stiffness map for
proteins. The model is based on correlations of fluctuations of pairs of points
that need to be evaluated by molecular dynamics simulations. The model is also
cast in terms of the Gaussian Network Model and changes of stiffness upon
dimerization of AKT1 are evaluated as an example.
| [
{
"created": "Mon, 12 Jan 2015 14:40:31 GMT",
"version": "v1"
}
] | 2015-01-13 | [
[
"Erman",
"Burak",
""
]
] | Binding of a ligand on a protein changes the flexibility of certain parts of the protein, which directly affects its function. These changes are not the same at each point, some parts become more flexible and some others become stiffer. Here, an equation is derived that gives the stiffness map for proteins. The model is based on correlations of fluctuations of pairs of points that need to be evaluated by molecular dynamics simulations. The model is also cast in terms of the Gaussian Network Model and changes of stiffness upon dimerization of AKT1 are evaluated as an example. |
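For the Gaussian Network Model side of the analysis, residue fluctuations are proportional to the diagonal of the pseudo-inverse of the Kirchhoff (contact) matrix, so adding contacts at a binding site changes the flexibility profile. The sketch below uses random placeholder coordinates and a crude contact-addition step, not the AKT1 structures from the paper.

```python
# Hedged sketch of the Gaussian Network Model: residue fluctuations are
# proportional to the diagonal of the pseudo-inverse of the Kirchhoff matrix,
# so forming new contacts at a "binding site" shows up as a change in the
# stiffness/flexibility profile. Coordinates are random placeholders.
import numpy as np

rng = np.random.default_rng(2)
coords = rng.uniform(0, 30, size=(60, 3))      # toy "C-alpha" positions

def kirchhoff(coords, cutoff=7.0):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    gamma = -(d < cutoff).astype(float)
    np.fill_diagonal(gamma, 0.0)
    np.fill_diagonal(gamma, -gamma.sum(axis=1))  # degree on the diagonal
    return gamma

def fluctuations(gamma):
    return np.diag(np.linalg.pinv(gamma))        # ~ mean-square fluctuations (up to a constant)

free = fluctuations(kirchhoff(coords))
# Crude "ligand binding": pull a few residues together so new contacts form
# around a hypothetical binding site.
bound_coords = coords.copy()
bound_coords[:5] = coords[:5].mean(axis=0) + 0.1 * rng.standard_normal((5, 3))
bound = fluctuations(kirchhoff(bound_coords))
print("largest stiffening (drop in fluctuation):", np.max(free - bound))
```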
1308.0655 | Daniel McGlinn | Daniel J. McGlinn, Xiao Xiao, Ethan P. White | An empirical evaluation of four variants of a universal species-area
relationship | main text: 20 pages, 2 tables, 3 figures | null | null | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Maximum Entropy Theory of Ecology (METE) predicts a universal
species-area relationship (SAR) that can be fully characterized using only the
total abundance (N) and species richness (S) at a single spatial scale. This
theory has shown promise for characterizing scale dependence in the SAR.
However, there are currently four different approaches to applying METE to
predict the SAR and it is unclear which approach should be used due to a lack
of empirical evaluation. Specifically, METE can be applied recursively or a
non-recursively and can use either a theoretical or observed species-abundance
distribution (SAD). We compared the four different combinations of approaches
using empirical data from 16 datasets containing over 1000 species and 300,000
individual trees and herbs. In general, METE accurately downscaled the SAR
(R^2 > 0.94), but the recursive approach consistently under-predicted richness,
and METE's accuracy did not depend strongly on using the observed or predicted
SAD. This suggests that the best approach to scaling diversity using METE is to use
a combination of non-recursive scaling and the theoretical abundance
distribution, which allows predictions to be made across a broad range of
spatial scales with only knowledge of the species richness and total abundance
at a single scale.
| [
{
"created": "Sat, 3 Aug 2013 03:15:48 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Nov 2013 23:08:15 GMT",
"version": "v2"
}
] | 2013-11-12 | [
[
"McGlinn",
"Daniel J.",
""
],
[
"Xiao",
"Xiao",
""
],
[
"White",
"Ethan P.",
""
]
] | The Maximum Entropy Theory of Ecology (METE) predicts a universal species-area relationship (SAR) that can be fully characterized using only the total abundance (N) and species richness (S) at a single spatial scale. This theory has shown promise for characterizing scale dependence in the SAR. However, there are currently four different approaches to applying METE to predict the SAR and it is unclear which approach should be used due to a lack of empirical evaluation. Specifically, METE can be applied recursively or a non-recursively and can use either a theoretical or observed species-abundance distribution (SAD). We compared the four different combinations of approaches using empirical data from 16 datasets containing over 1000 species and 300,000 individual trees and herbs. In general, METE accurately downscaled the SAR (R^2> 0.94), but the recursive approach consistently under-predicted richness, and METEs accuracy did not depend strongly on using the observed or predicted SAD. This suggests that best approach to scaling diversity using METE is to use a combination of non-recursive scaling and the theoretical abundance distribution, which allows predictions to be made across a broad range of spatial scales with only knowledge of the species richness and total abundance at a single scale. |
2006.14575 | Subhas Khajanchi Dr. | Subhas Khajanchi, Kankan Sarkar | Forecasting the daily and cumulative number of cases for the COVID-19
pandemic in India | 18 Pages, 7 Figures | null | 10.1063/5.0016240 | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | The ongoing novel coronavirus epidemic has been announced a pandemic by the
World Health Organization on March 11, 2020, and the Govt. of India has
declared a nationwide lockdown from March 25, 2020, to prevent community
transmission of COVID-19. Due to the absence of specific antivirals or a vaccine,
mathematical modeling plays an important role in better understanding the disease
dynamics and in designing strategies to control rapidly spreading infectious
diseases. In our study, we developed a new compartmental model that explains
the transmission dynamics of COVID-19. We calibrated our proposed model with
daily COVID-19 data for the four Indian provinces, namely Jharkhand, Gujarat,
Andhra Pradesh, and Chandigarh. We study the qualitative properties of the
model including feasible equilibria and their stability with respect to the
basic reproduction number $\mathcal{R}_0$. The disease-free equilibrium becomes
stable and the endemic equilibrium becomes unstable when the recovery rate of
infected individuals is increased, but if the disease transmission rate remains
high then the endemic equilibrium always remains stable. For the estimated
model parameters, $\mathcal{R}_0 >1$ for all four provinces, which suggests
a significant outbreak of COVID-19. Short-term prediction shows an
increasing trend of daily and cumulative cases of COVID-19 for the four
provinces of India.
| [
{
"created": "Thu, 25 Jun 2020 17:20:34 GMT",
"version": "v1"
}
] | 2020-08-26 | [
[
"Khajanchi",
"Subhas",
""
],
[
"Sarkar",
"Kankan",
""
]
] | The ongoing novel coronavirus epidemic has been announced a pandemic by the World Health Organization on March 11, 2020, and the Govt. of India has declared a nationwide lockdown from March 25, 2020, to prevent community transmission of COVID-19. Due to absence of specific antivirals or vaccine, mathematical modeling play an important role to better understand the disease dynamics and designing strategies to control rapidly spreading infectious diseases. In our study, we developed a new compartmental model that explains the transmission dynamics of COVID-19. We calibrated our proposed model with daily COVID-19 data for the four Indian provinces, namely Jharkhand, Gujarat, Andhra Pradesh, and Chandigarh. We study the qualitative properties of the model including feasible equilibria and their stability with respect to the basic reproduction number $\mathcal{R}_0$. The disease-free equilibrium becomes stable and the endemic equilibrium becomes unstable when the recovery rate of infected individuals increased but if the disease transmission rate remains higher then the endemic equilibrium always remain stable. For the estimated model parameters, $\mathcal{R}_0 >1$ for all the four provinces, which suggests the significant outbreak of COVID-19. Short-time prediction shows the increasing trend of daily and cumulative cases of COVID-19 for the four provinces of India. |
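The abstract above does not spell out the full compartmental structure, so the following is only a generic SIR sketch (illustrative parameter values, not the fitted ones) showing how such a model is integrated numerically and how $\mathcal{R}_0$ enters as the ratio of the transmission and recovery rates.

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma, N):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return dS, dI, dR

N, beta, gamma = 1.0e6, 0.30, 0.10                 # illustrative values only
R0 = beta / gamma                                   # basic reproduction number
t = np.linspace(0.0, 120.0, 121)                    # days
sol = odeint(sir, (N - 10.0, 10.0, 0.0), t, args=(beta, gamma, N))
daily_new = beta * sol[:, 0] * sol[:, 1] / N        # short-term incidence curve
print(f"R0 = {R0:.1f}, cumulative cases after 120 days = {N - sol[-1, 0]:.0f}")
```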
1401.5589 | Stephen Odaibo | Stephen G. Odaibo | The Gabor-Einstein Wavelet: A Model for the Receptive Fields of V1 to MT
Neurons | 40 pages, 13 Figures. We presented a portion of this work in various
parts at the National Medical Association's 111th Annual Convention and
Scientific Assembly in Toronto Ontario, Canada (Jul. 2013); at the 23rd
Annual Washington Retina Symposium in Washington D.C., U.S.A. (Oct. 2013);
and at the Society for Neuroscience's 43rd Annual Meeting in San Diego
California, U.S.A. (Nov. 2013) | null | null | null | q-bio.NC cs.CV physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our visual system is astonishingly efficient at detecting moving objects.
This process is mediated by the neurons which connect the primary visual cortex
(V1) to the middle temporal (MT) area. Interestingly, since Kuffler's
pioneering experiments on retinal ganglion cells, mathematical models have been
vital for advancing our understanding of the receptive fields of visual
neurons. However, existing models were not designed to describe the most
salient attributes of the highly specialized neurons in the V1 to MT motion
processing stream; and they have not been able to do so. Here, we introduce the
Gabor-Einstein wavelet, a new family of functions for representing the
receptive fields of V1 to MT neurons. We show that the way space and time are
mixed in the visual cortex is analogous to the way they are mixed in the
special theory of relativity (STR). Hence we constrained the Gabor-Einstein
model by requiring: (i) relativistic invariance of the wave carrier, and (ii)
the minimum possible number of parameters. From these two constraints, the sinc
function emerged as a natural descriptor of the wave carrier. The particular
distribution of lowpass to bandpass temporal frequency filtering properties of
V1 to MT neurons (Foster et al 1985; DeAngelis et al 1993b; Hawken et al 1996)
is clearly explained by the Gabor-Einstein basis. Furthermore, it does so in a
manner innately representative of the motion-processing stream's neuronal
hierarchy. Our analysis and computer simulations show that the distribution of
temporal frequency filtering properties along the motion processing stream is a
direct effect of the way the brain jointly encodes space and time. We uncovered
this fundamental link by demonstrating that analogous mathematical structures
underlie STR and joint cortical spacetime encoding. This link will provide new
physiological insights into how the brain represents visual information.
| [
{
"created": "Wed, 22 Jan 2014 08:48:53 GMT",
"version": "v1"
}
] | 2014-01-23 | [
[
"Odaibo",
"Stephen G.",
""
]
] | Our visual system is astonishingly efficient at detecting moving objects. This process is mediated by the neurons which connect the primary visual cortex (V1) to the middle temporal (MT) area. Interestingly, since Kuffler's pioneering experiments on retinal ganglion cells, mathematical models have been vital for advancing our understanding of the receptive fields of visual neurons. However, existing models were not designed to describe the most salient attributes of the highly specialized neurons in the V1 to MT motion processing stream; and they have not been able to do so. Here, we introduce the Gabor-Einstein wavelet, a new family of functions for representing the receptive fields of V1 to MT neurons. We show that the way space and time are mixed in the visual cortex is analogous to the way they are mixed in the special theory of relativity (STR). Hence we constrained the Gabor-Einstein model by requiring: (i) relativistic-invariance of the wave carrier, and (ii) the minimum possible number of parameters. From these two constraints, the sinc function emerged as a natural descriptor of the wave carrier. The particular distribution of lowpass to bandpass temporal frequency filtering properties of V1 to MT neurons (Foster et al 1985; DeAngelis et al 1993b; Hawken et al 1996) is clearly explained by the Gabor-Einstein basis. Furthermore, it does so in a manner innately representative of the motion-processing stream's neuronal hierarchy. Our analysis and computer simulations show that the distribution of temporal frequency filtering properties along the motion processing stream is a direct effect of the way the brain jointly encodes space and time. We uncovered this fundamental link by demonstrating that analogous mathematical structures underlie STR and joint cortical spacetime encoding. This link will provide new physiological insights into how the brain represents visual information. |
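As a rough illustration of a receptive-field function built from a Gaussian space-time envelope and a sinc wave carrier evaluated on a mixed space-time argument (the exact Gabor-Einstein parameterisation in the paper may differ), a hedged NumPy sketch with made-up parameter values:

```python
import numpy as np

def sinc_carrier_rf(x, t, sigma_x, sigma_t, kx, wt):
    """Gaussian spacetime envelope times a sinc carrier of (kx*x - wt*t).
    np.sinc(z) computes sin(pi*z)/(pi*z), hence the division by pi."""
    envelope = np.exp(-x**2 / (2 * sigma_x**2) - t**2 / (2 * sigma_t**2))
    carrier = np.sinc((kx * x - wt * t) / np.pi)
    return envelope * carrier

x = np.linspace(-2.0, 2.0, 201)          # degrees of visual angle (illustrative)
t = np.linspace(-0.5, 0.5, 101)          # seconds (illustrative)
X, T = np.meshgrid(x, t)
rf = sinc_carrier_rf(X, T, sigma_x=0.5, sigma_t=0.15, kx=6.0, wt=20.0)
```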
2404.08023 | Zeyu Zhang | Zeyu Zhang, Yuanshen Zhao, Jingxian Duan, Yaou Liu, Hairong Zheng,
Dong Liang, Zhenyu Zhang and Zhi-Cheng Li | Pathology-genomic fusion via biologically informed cross-modality graph
learning for survival analysis | null | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The diagnosis and prognosis of cancer are typically based on multi-modal
clinical data, including histology images and genomic data, due to the complex
pathogenesis and high heterogeneity. Despite the advancements in digital
pathology and high-throughput genome sequencing, establishing effective
multi-modal fusion models for survival prediction and revealing the potential
association between histopathology and transcriptomics remains challenging. In
this paper, we propose Pathology-Genome Heterogeneous Graph (PGHG) that
integrates whole slide images (WSI) and bulk RNA-Seq expression data with
heterogeneous graph neural network for cancer survival analysis. The PGHG
consists of a biological knowledge-guided representation learning network and a
pathology-genome heterogeneous graph. The representation learning network
utilizes the biological prior knowledge of intra-modal and inter-modal data
associations to guide the feature extraction. The node features of each
modality are updated through an attention-based graph learning strategy. Unimodal
features and bi-modal fused features are extracted via an attention pooling module
and then used for survival prediction. We evaluate the model on low-grade
gliomas, glioblastoma, and kidney renal papillary cell carcinoma datasets from
the Cancer Genome Atlas (TCGA) and the First Affiliated Hospital of Zhengzhou
University (FAHZU). Extensive experimental results demonstrate that the
proposed method outperforms both unimodal and other multi-modal fusion models.
To demonstrate the model's interpretability, we also visualize the attention
heatmap of pathological images and apply the integrated gradients algorithm to
identify important tissue structures, biological pathways and key genes.
| [
{
"created": "Thu, 11 Apr 2024 09:07:40 GMT",
"version": "v1"
}
] | 2024-04-15 | [
[
"Zhang",
"Zeyu",
""
],
[
"Zhao",
"Yuanshen",
""
],
[
"Duan",
"Jingxian",
""
],
[
"Liu",
"Yaou",
""
],
[
"Zheng",
"Hairong",
""
],
[
"Liang",
"Dong",
""
],
[
"Zhang",
"Zhenyu",
""
],
[
"Li",
"Zhi-Cheng",
""
]
] | The diagnosis and prognosis of cancer are typically based on multi-modal clinical data, including histology images and genomic data, due to the complex pathogenesis and high heterogeneity. Despite the advancements in digital pathology and high-throughput genome sequencing, establishing effective multi-modal fusion models for survival prediction and revealing the potential association between histopathology and transcriptomics remains challenging. In this paper, we propose Pathology-Genome Heterogeneous Graph (PGHG) that integrates whole slide images (WSI) and bulk RNA-Seq expression data with heterogeneous graph neural network for cancer survival analysis. The PGHG consists of biological knowledge-guided representation learning network and pathology-genome heterogeneous graph. The representation learning network utilizes the biological prior knowledge of intra-modal and inter-modal data associations to guide the feature extraction. The node features of each modality are updated through attention-based graph learning strategy. Unimodal features and bi-modal fused features are extracted via attention pooling module and then used for survival prediction. We evaluate the model on low-grade gliomas, glioblastoma, and kidney renal papillary cell carcinoma datasets from the Cancer Genome Atlas (TCGA) and the First Affiliated Hospital of Zhengzhou University (FAHZU). Extensive experimental results demonstrate that the proposed method outperforms both unimodal and other multi-modal fusion models. For demonstrating the model interpretability, we also visualize the attention heatmap of pathological images and utilize integrated gradient algorithm to identify important tissue structure, biological pathways and key genes. |
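Attention pooling, mentioned above as the module that aggregates node features into unimodal representations, can be sketched in a few lines of NumPy. This is a generic formulation with invented dimensions, not the paper's exact architecture.

```python
import numpy as np

def attention_pool(features, w, b=0.0):
    """Score each node embedding, softmax the scores, and return the
    attention-weighted average as the bag-level representation."""
    scores = features @ w + b                 # one scalar score per node
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ features                 # pooled feature vector

rng = np.random.default_rng(0)
patch_embeddings = rng.normal(size=(500, 64))   # e.g. 500 WSI patch/node embeddings
pooled = attention_pool(patch_embeddings, rng.normal(size=64))
```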
1608.03467 | Laurens Michiels Van Kessenich | L. Michiels van Kessenich, L. de Arcangelis and H. J. Herrmann | Synaptic plasticity and neuronal refractory time cause scaling behaviour
of neuronal avalanches | 9 pages, 4 figures, to be published in Scientific Reports | null | null | null | q-bio.NC nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuronal avalanches measured in vitro and in vivo in different cortical
networks consistently exhibit power law behaviour for the size and duration
distributions with exponents typical for a mean field self-organized branching
process. These exponents are also recovered in neuronal network simulations
implementing various neuronal dynamics on different network topologies. They
can therefore be considered a very robust feature of spontaneous neuronal
activity. Interestingly, this scaling behaviour is also observed on regular
lattices in finite dimensions, which raises the question about the origin of
the mean field behaviour observed experimentally. In this study we provide an
answer to this open question by investigating the effect of activity dependent
plasticity in combination with the neuronal refractory time in a neuronal
network. Results show that the refractory time hinders backward avalanches,
forcing a directed propagation. Hebbian plastic adaptation plays the role of
sculpting these directed avalanche patterns into the topology of the network,
slowly changing it into a branched structure where loops are marginal.
| [
{
"created": "Wed, 10 Aug 2016 13:37:04 GMT",
"version": "v1"
}
] | 2016-08-12 | [
[
"van Kessenich",
"L. Michiels",
""
],
[
"de Arcangelis",
"L.",
""
],
[
"Herrmann",
"H. J.",
""
]
] | Neuronal avalanches measured in vitro and in vivo in different cortical networks consistently exhibit power law behaviour for the size and duration distributions with exponents typical for a mean field self-organized branching process. These exponents are also recovered in neuronal network simulations implementing various neuronal dynamics on different network topologies. They can therefore be considered a very robust feature of spontaneous neuronal activity. Interestingly, this scaling behaviour is also observed on regular lattices in finite dimensions, which raises the question about the origin of the mean field behaviour observed experimentally. In this study we provide an answer to this open question by investigating the effect of activity dependent plasticity in combination with the neuronal refractory time in a neuronal network. Results show that the refractory time hinders backward avalanches forcing a directed propagation. Hebbian plastic adaptation plays the role of sculpting these directed avalanche patterns into the topology of the network slowly changing it into a branched structure where loops are marginal. |
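For readers unfamiliar with the mean-field exponents referred to above, the sketch below simulates a plain critical branching process (branching ratio 1) and collects avalanche sizes, whose distribution follows the mean-field power law P(S) ~ S^{-3/2}. It does not implement the plasticity or refractory mechanisms studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma=1.0, max_size=10**6):
    """Size of one avalanche of a branching process with branching ratio sigma;
    sigma = 1 is the critical (mean-field) point with size exponent -3/2."""
    active, size = 1, 1
    while active and size < max_size:
        offspring = int(rng.poisson(sigma, size=active).sum())
        size += offspring
        active = offspring
    return size

sizes = np.array([avalanche_size() for _ in range(20000)])
bins = np.logspace(0, 4, 25)
density, _ = np.histogram(sizes, bins=bins, density=True)   # compare to S**-1.5
```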
1904.10394 | Urmimala Dey | Urmimala Dey, Archisman Ghosh, Syed Abbas, A Taraphder and Madhumita
Roy | Cell damage and mitigation in Swiss albino mice: experiment and
modelling | null | Alexandria Engineering Journal 59, 1345-1357 (2020) | 10.1016/j.aej.2020.03.001 | null | q-bio.TO q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chronic exposure to inorganic arsenic is a potential cause of carcinogenesis.
It exerts this potential through the generation of ROS, leading to DNA, protein and
lipid damage. Therefore, the deleterious effect of arsenic can be mitigated by
quenching ROS using antioxidants. There is a homology between the protein
coding regions of mice and humans. The effects of these alterations in humans can
be mimicked in mice. Therefore, to understand the underlying mechanism of arsenic
toxicity and its amelioration by black tea, studies have been conducted in a mouse
model. Long-term exposure to iAs leads to tumour growth, which has been found
to be alleviated by black tea. Observations reveal that black tea has two
salutary effects on the growth of tumour: the rate of growth of damaged cells
was appreciably reduced and an early saturation of the level of damage is
achieved. To take the experimental findings further, the experimental data have
been modelled with simple dynamical equations. The curves obtained from
\textit{in vivo} studies have been fitted with the data obtained from the
model. The corresponding steady states and their stabilities are analyzed.
| [
{
"created": "Fri, 5 Apr 2019 17:10:00 GMT",
"version": "v1"
}
] | 2020-06-23 | [
[
"Dey",
"Urmimala",
""
],
[
"Ghosh",
"Archisman",
""
],
[
"Abbas",
"Syed",
""
],
[
"Taraphder",
"A",
""
],
[
"Roy",
"Madhumita",
""
]
] | Chronic exposure to inorganic arsenic is a potential cause of carcinogenesis. It elicits its potential by generation of ROS, leading to DNA, protein and lipid damage. Therefore, the deleterious effect of arsenic can be mitigated by quenching ROS using antioxidants. There is a homology between the protein coding regions of mice and human. Effect of these alterations in human can be mimicked in mice. Therefore to understand the underlying mechanism of arsenic toxicity and its amelioration by black tea, studies have been conducted in mice model. Long term exposure to iAs leads to tumour growth, which has been found to be alleviated by black tea. Observations reveal that black tea has two salutary effects on the growth of tumour: the rate of growth of damaged cells was appreciably reduced and an early saturation of the level of damage is achieved. To take the experimental findings further, the experimental data have been modelled with simple dynamical equations. The curves obtained from \textit{in vivo} studies have been fitted with the data obtained from the model. The corresponding steady states and their stabilities are analyzed. |
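The "simple dynamical equations" are not written out in the abstract, so as a stand-in the sketch below fits a logistic growth curve to a hypothetical damaged-cell time course with SciPy; the data points and initial parameter guesses are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: carrying capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# hypothetical time course: weeks of exposure vs. a damaged-cell index
t_obs = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 20.0, 24.0])
y_obs = np.array([0.5, 1.2, 3.0, 6.5, 8.8, 9.6, 9.9])

(K, r, t0), _ = curve_fit(logistic, t_obs, y_obs, p0=(10.0, 0.3, 10.0))
print(f"fitted saturation level K = {K:.2f}, growth rate r = {r:.3f}")
```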
1803.09727 | Jie Ren | Jie Ren, Xin Bai, Yang Young Lu, Kujin Tang, Ying Wang, Gesine
Reinert, Fengzhu Sun | Alignment-Free Sequence Analysis and Applications | null | null | null | null | q-bio.QM q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genome and metagenome comparisons based on large amounts of next-generation
sequencing (NGS) data pose significant challenges for alignment-based
approaches due to the huge data size and the relatively short length of the
reads. Alignment-free approaches based on the counts of word patterns in NGS
data do not depend on the complete genome and are generally computationally
efficient. Thus, they contribute significantly to genome and metagenome
comparison. Recently, novel statistical approaches have been developed for the
comparison of both long and shotgun sequences. These approaches have been
applied to many problems including the comparison of gene regulatory regions,
genome sequences, metagenomes, binning contigs in metagenomic data,
identification of virus-host interactions, and detection of horizontal gene
transfers. We provide an updated review of these applications and other related
developments of word-count based approaches for alignment-free sequence
analysis.
| [
{
"created": "Mon, 26 Mar 2018 17:31:11 GMT",
"version": "v1"
}
] | 2018-03-28 | [
[
"Ren",
"Jie",
""
],
[
"Bai",
"Xin",
""
],
[
"Lu",
"Yang Young",
""
],
[
"Tang",
"Kujin",
""
],
[
"Wang",
"Ying",
""
],
[
"Reinert",
"Gesine",
""
],
[
"Sun",
"Fengzhu",
""
]
] | Genome and metagenome comparisons based on large amounts of next-generation sequencing (NGS) data pose significant challenges for alignment-based approaches due to the huge data size and the relatively short length of the reads. Alignment-free approaches based on the counts of word patterns in NGS data do not depend on the complete genome and are generally computationally efficient. Thus, they contribute significantly to genome and metagenome comparison. Recently, novel statistical approaches have been developed for the comparison of both long and shotgun sequences. These approaches have been applied to many problems including the comparison of gene regulatory regions, genome sequences, metagenomes, binning contigs in metagenomic data, identification of virus-host interactions, and detection of horizontal gene transfers. We provide an updated review of these applications and other related developments of word-count based approaches for alignment-free sequence analysis. |
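As a toy version of the word-count idea reviewed above, here is a short Python sketch that computes k-mer count vectors for two sequences and a cosine dissimilarity between them. The D2-family statistics discussed in the review additionally centre the counts with a background model, which is omitted here; the sequences are invented.

```python
from collections import Counter
from math import sqrt

def kmer_counts(seq, k=4):
    """Count all overlapping k-mers (words) in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_dissimilarity(c1, c2):
    shared = set(c1) | set(c2)
    dot = sum(c1[w] * c2[w] for w in shared)
    n1 = sqrt(sum(v * v for v in c1.values()))
    n2 = sqrt(sum(v * v for v in c2.values()))
    return 1.0 - dot / (n1 * n2)

d = cosine_dissimilarity(kmer_counts("ACGTACGTGGAATTC"), kmer_counts("ACGTTTGGAACCGTA"))
print(f"alignment-free dissimilarity: {d:.3f}")
```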
1509.04602 | Aristides Moustakas | Aristides Moustakas and Matthew R. Evans | Regional and temporal characteristics of bovine tuberculosis of cattle
in Great Britain | (in press) Stochastic Environmental Research and Risk Assessment
(2015) | null | null | null | q-bio.PE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bovine tuberculosis (TB) is a chronic disease in cattle that causes a serious
food security challenge to the agricultural industry in terms of dairy and meat
production. In GB, Scotland has had a risk-based surveillance testing policy
under which high-risk herds are tested frequently, and in Sept 2009 it was
officially declared TB free. Wales has had an annual or more frequent
testing policy for all cattle herds since Jan 2010, while in England several
herds are still tested every 4 years except some high TB prevalence areas where
annual testing is applied. Time series analysis using publicly available data
for total tests on herds, total cattle slaughtered, new herd incidents, and
herds not TB free, were analysed globally for GB and locally for the
constituent regions of Wales, Scotland, West, North, and East England. After
detecting trends over time, underlying regional differences were compared with
the testing policies in the region. Total cattle slaughtered are decreasing in
Wales, Scotland and West England, but increasing in the North and East English
regions. New herd incidents, i.e., disease incidence, are decreasing in Wales,
Scotland, West English region, but increasing in North and East English
regions. Herds not TB free, are increasing in West, North, and East English
regions, while they are decreasing in Wales and Scotland. Total cattle
slaughtered were positively correlated with total tests in the West, North, and
East English regions, with high slopes of regression. There was no correlation
between total cattle slaughtered and total tests on herds in Wales, indicating
that herds are tested frequently enough to detect all likely cases and
so control TB. The main conclusion of the analysis conducted here is that more
frequent testing is leading to lower TB infections in cattle in terms of both
TB prevalence and TB incidence.
| [
{
"created": "Tue, 15 Sep 2015 15:29:41 GMT",
"version": "v1"
}
] | 2015-09-16 | [
[
"Moustakas",
"Aristides",
""
],
[
"Evans",
"Matthew R.",
""
]
] | Bovine tuberculosis (TB) is a chronic disease in cattle that causes a serious food security challenge to the agricultural industry in terms of dairy and meat production. In GB, Scotland has had a risk based surveillance testing policy under which high risk herds are tested frequently, and in Sept 2009 was officially declared as TB free. Wales have had an annual or more frequent testing policy for all cattle herds since Jan 2010, while in England several herds are still tested every 4 years except some high TB prevalence areas where annual testing is applied. Time series analysis using publicly available data for total tests on herds, total cattle slaughtered, new herd incidents, and herds not TB free, were analysed globally for GB and locally for the constituent regions of Wales, Scotland, West, North, and East England. After detecting trends over time, underlying regional differences were compared with the testing policies in the region. Total cattle slaughtered are decreasing in Wales, Scotland and West England, but increasing in the North and East English regions. New herd incidents, i.e., disease incidence, are decreasing in Wales, Scotland, West English region, but increasing in North and East English regions. Herds not TB free, are increasing in West, North, and East English regions, while they are decreasing in Wales and Scotland. Total cattle slaughtered were positively correlated with total tests in the West, North, and East English regions, with high slopes of regression. There was no correlation between total cattle slaughtered and total tests on herds in Wales indicating that herds are tested frequent enough in order to detect all likely cases and so control TB. The main conclusion of the analysis conducted here is that more frequent testing is leading to lower TB infections in cattle both in terms of TB prevalence as well as TB incidence. |
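The trend and correlation analyses described above reduce to ordinary regressions; the sketch below shows the pattern with scipy.stats.linregress on invented quarterly figures (the real inputs are the published GB herd statistics, not these numbers).

```python
import numpy as np
from scipy.stats import linregress

# invented quarterly counts for one region, purely to show the computation
total_tests = np.array([9800, 10100, 10400, 10900, 11300, 11800], dtype=float)
slaughtered = np.array([410.0, 440.0, 455.0, 470.0, 500.0, 520.0])

trend = linregress(np.arange(len(slaughtered)), slaughtered)  # temporal trend
assoc = linregress(total_tests, slaughtered)                  # tests vs slaughtered
print(f"trend: {trend.slope:.1f} animals/quarter; "
      f"tests-slaughter slope {assoc.slope:.3f}, r = {assoc.rvalue:.2f}")
```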
2004.01710 | Andreas Kamilaris | Eleni Kamilari, Dimitrios A. Anagnostopoulos, Photis Papademas,
Andreas Kamilaris, Dimitris Tsaltas | Characterizing Halloumi cheese bacterial communities through metagenomic
analysis | null | LWT (2020): 109298 | 10.1016/j.lwt.2020.109298 | null | q-bio.GN cs.CY | http://creativecommons.org/licenses/by/4.0/ | Halloumi is a semi-hard cheese produced in Cyprus for centuries and its
popularity has risen significantly over the past years. High-throughput
sequencing (HTS) was applied in the present research to characterize
traditional Cyprus Halloumi bacterial diversity. Eighteen samples made by
different milk mixtures and produced in different areas of the country were
analyzed, revealing that the Halloumi microbiome was mainly composed of lactic
acid bacteria (LAB), including Lactobacillus, Leuconostoc, and Pediococcus, as
well as halophilic bacteria, such as Marinilactibacillus and Halomonas.
Additionally, spore-forming bacteria and spoilage bacteria were also detected.
Halloumi produced with the traditional method had significantly richer
bacterial diversity compared to Halloumi produced with the industrial method.
Variations detected among the bacterial communities highlight the contribution
of the initial microbiome that existed in milk and survived pasteurization, as
well as of factors associated with Halloumi manufacturing conditions, in shaping
the final microbiota composition. Identification and characterization of the
Halloumi microbiome provides an additional, useful tool to characterize its
typicity and probably safeguard it from fraudulent products that may appear in
the market. Also, it may assist producers in further improving its quality and
guaranteeing consumer safety.
| [
{
"created": "Fri, 3 Apr 2020 14:09:26 GMT",
"version": "v1"
}
] | 2020-04-07 | [
[
"Kamilari",
"Eleni",
""
],
[
"Anagnostopoulos",
"Dimitrios A.",
""
],
[
"Papademas",
"Photis",
""
],
[
"Kamilaris",
"Andreas",
""
],
[
"Tsaltas",
"Dimitris",
""
]
] | Halloumi is a semi hard cheese produced in Cyprus for centuries and its popularity has significantly risen over the past years. High throughput sequencing (HTS) was applied in the present research to characterize traditional Cyprus Halloumi bacterial diversity. Eighteen samples made by different milk mixtures and produced in different areas of the country were analyzed, to reveal that Halloumi microbiome was mainly comprised by lactic acid bacteria (LAB), including Lactobacillus, Leuconostoc, and Pediococcus, as well as halophilic bacteria, such as Marinilactibacillus and Halomonas. Additionally, spore forming bacteria and spoilage bacteria, were also detected. Halloumi produced with the traditional method, had significantly richer bacterial diversity compared to Halloumi produced with the industrial method. Variations detected among the bacterial communities highlight the contribution of the initial microbiome that existed in milk and survived pasteurization, as well as factors associated with Halloumi manufacturing conditions, in the final microbiota composition shaping. Identification and characterization of Halloumi microbiome provides an additional, useful tool to characterize its typicity and probably safeguard it from fraud products that may appear in the market. Also, it may assist producers to further improve its quality and guarantee consumers safety. |
1809.03961 | Santiago Schnell | Justin Eilertsen, Wylie Stroberg and Santiago Schnell | Characteristic, completion or matching timescales? An analysis of
temporary boundaries in enzyme kinetics | 35 pages, 11 figures | Journal of Theoretical Biology, Volume 481, 21 November 2019,
Pages 28-43 | 10.1016/j.jtbi.2019.01.005 | null | q-bio.QM math.DS | http://creativecommons.org/licenses/by/4.0/ | Scaling analysis exploiting timescale separation has been one of the most
important techniques in the quantitative analysis of nonlinear dynamical
systems in mathematical and theoretical biology. In the case of enzyme
catalyzed reactions, it is often overlooked that the characteristic timescales
used for scaling the rate equations are not ideal for determining when
concentrations and reaction rates reach their maximum values. In this work, we
first illustrate this point by considering the classic example of the
single-enzyme, single-substrate Michaelis--Menten reaction mechanism. We then
extend this analysis to a more complicated reaction mechanism, the auxiliary
enzyme reaction, in which a substrate is converted to product in two sequential
enzyme-catalyzed reactions. In this case, depending on the ordering of the
relevant timescales, several dynamic regimes can emerge. In addition to the
characteristic timescales for these regimes, we derive matching timescales that
determine (approximately) when the transition from the initial fast transient to
steady-state kinetics occurs. The approach presented here is applicable to a
wide range of singular perturbation problems in nonlinear dynamical systems.
| [
{
"created": "Tue, 11 Sep 2018 15:10:10 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Jan 2019 19:53:12 GMT",
"version": "v2"
}
] | 2023-03-21 | [
[
"Eilertsen",
"Justin",
""
],
[
"Stroberg",
"Wylie",
""
],
[
"Schnell",
"Santiago",
""
]
] | Scaling analysis exploiting timescale separation has been one of the most important techniques in the quantitative analysis of nonlinear dynamical systems in mathematical and theoretical biology. In the case of enzyme catalyzed reactions, it is often overlooked that the characteristic timescales used for the scaling the rate equations are not ideal for determining when concentrations and reaction rates reach their maximum values. In this work, we first illustrate this point by considering the classic example of the single-enzyme, single-substrate Michaelis--Menten reaction mechanism. We then extend this analysis to a more complicated reaction mechanism, the auxiliary enzyme reaction, in which a substrate is converted to product in two sequential enzyme-catalyzed reactions. In this case, depending on the ordering of the relevant timescales, several dynamic regimes can emerge. In addition to the characteristic timescales for these regimes, we derive matching timescales that determine (approximately) when the transitions from initial fast transient to steady-state kinetics occurs. The approach presented here is applicable to a wide range of singular perturbation problems in nonlinear dynamical systems. |
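To illustrate the point about characteristic versus completion timescales, the sketch below integrates the single-enzyme, single-substrate mass-action equations and compares the classical fast timescale 1/(k1 (S0 + Km)) with the time at which the enzyme-substrate complex actually peaks; the rate constants are arbitrary illustrative values.

```python
import numpy as np
from scipy.integrate import odeint

# mass-action scheme E + S <-> C -> E + P with illustrative rate constants
k1, km1, k2 = 1.0, 1.0, 0.5
E0, S0 = 1.0, 10.0
Km = (km1 + k2) / k1

def rhs(y, t):
    S, C = y
    dS = -k1 * (E0 - C) * S + km1 * C
    dC = k1 * (E0 - C) * S - (km1 + k2) * C
    return dS, dC

t = np.linspace(0.0, 50.0, 5001)
S, C = odeint(rhs, (S0, 0.0), t).T
t_fast = 1.0 / (k1 * (S0 + Km))      # characteristic timescale of the fast transient
t_peak = t[np.argmax(C)]             # when the complex actually peaks
print(f"characteristic t_C = {t_fast:.3f}, complex peaks at t = {t_peak:.3f}")
```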
q-bio/0402014 | Alan McKane | Christopher Quince, Paul Higgs and Alan McKane | Topological structure and interaction strengths in model food webs | 43 pages, 15 figures | null | null | null | q-bio.PE cond-mat.stat-mech | null | We report the results of carrying out a large number of simulations on a
coevolutionary model of multispecies communities. A wide range of parameter
values was investigated, which allowed a rather complete picture to be built up
of how the behaviour of the model changes as these parameters are varied. Our
main interest was in the nature of the community food webs constructed via the
simulations. We identify the range of parameter values which give rise to
realistic food webs and give arguments which allow some of the structure which
is found to be understood in an intuitive way. Since the webs are evolved
according to the rules of the model, the strengths of the predator-prey links
are not determined a priori, and emerge from the process of constructing the
web. We measure the distribution of these link strengths, and find that there
are a large number of weak links, in agreement with recent suggestions. We also
review some of the data on food webs available in the literature, and make some
tentative comparisons with our results. The difficulties of making such
comparisons and the possible future developments of the model are also briefly
discussed.
| [
{
"created": "Sat, 7 Feb 2004 14:45:15 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Quince",
"Christopher",
""
],
[
"Higgs",
"Paul",
""
],
[
"McKane",
"Alan",
""
]
] | We report the results of carrying out a large number of simulations on a coevolutionary model of multispecies communities. A wide range of parameter values were investigated which allowed a rather complete picture of the change in behaviour of the model as these parameters were varied to be built up. Our main interest was in the nature of the community food webs constructed via the simulations. We identify the range of parameter values which give rise to realistic food webs and give arguments which allow some of the structure which is found to be understood in an intuitive way. Since the webs are evolved according to the rules of the model, the strengths of the predator-prey links are not determined a priori, and emerge from the process of constructing the web. We measure the distribution of these link strengths, and find that there are a large number of weak links, in agreement with recent suggestions. We also review some of the data on food webs available in the literature, and make some tentative comparisons with our results. The difficulties of making such comparisons and the possible future developments of the model are also briefly discussed. |
0706.0163 | Alexander K. Vidybida | Alexander K. Vidybida | Output Stream of Binding Neuron with Feedback | Version #1: 4 pages, 5 figures, manuscript submitted to Biological
Cybernetics. Version #2 (this version): added 3 pages of new text with
additional analytical and numerical calculations, 2 more figures, 11 more
references, added Discussion section | Eur. Phys. J. B 65, 577-584 (2008); Eur. Phys. J. B 69, 313 (2009) | 10.1140/epjb/e2008-00360-1 | null | q-bio.NC q-bio.OT | null | The binding neuron model is inspired by numerical simulation of
Hodgkin-Huxley-type point neuron, as well as by the leaky integrate-and-fire
model. In the binding neuron, the trace of an input is remembered for a fixed
period of time, after which it disappears completely. This is in contrast
with the above two models, where the postsynaptic potentials decay
exponentially and can be forgotten only after triggering. The finiteness of
memory in the binding neuron allows one to construct fast recurrent networks
for computer modeling. Recently, this finiteness was utilized for an exact
mathematical description of the output stochastic process when the binding neuron
is driven with the Poissonian input stream. In this paper, the simplest
networking is considered for the binding neuron. Namely, it is assumed that every
output spike of the single neuron is immediately fed back into its input. For this
construction, externally fed with a Poissonian stream, the output stream is
characterized in terms of the interspike interval probability density distribution
if the binding neuron has threshold 2. For higher thresholds, the distribution
is calculated numerically. The distributions are compared with those found for
the binding neuron without feedback and for the leaky integrator. Sample
distributions for the leaky integrator with feedback are calculated numerically
as well. It is concluded that even the simplest networking can radically alter
spiking statistics. Information condensation at the level of a single neuron is
discussed.
| [
{
"created": "Fri, 1 Jun 2007 14:20:19 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Sep 2007 15:00:26 GMT",
"version": "v2"
}
] | 2011-07-20 | [
[
"Vidybida",
"Alexander K.",
""
]
] | The binding neuron model is inspired by numerical simulation of Hodgkin-Huxley-type point neuron, as well as by the leaky integrate-and-fire model. In the binding neuron, the trace of an input is remembered for a fixed period of time after which it disappears completely. This is in the contrast with the above two models, where the postsynaptic potentials decay exponentially and can be forgotten only after triggering. The finiteness of memory in the binding neuron allows one to construct fast recurrent networks for computer modeling. Recently, the finiteness is utilized for exact mathematical description of the output stochastic process if the binding neuron is driven with the Poissonian input stream. In this paper, the simplest networking is considered for binding neuron. Namely, it is expected that every output spike of single neuron is immediately fed into its input. For this construction, externally fed with Poissonian stream, the output stream is characterized in terms of interspike interval probability density distribution if the binding neuron has threshold 2. For higher thresholds, the distribution is calculated numerically. The distributions are compared with those found for binding neuron without feedback, and for leaky integrator. Sample distributions for leaky integrator with feedback are calculated numerically as well. It is oncluded that even the simplest networking can radically alter spikng statistics. Information condensation at the level of single neuron is discussed. |
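A minimal simulation of the construction described above (binding neuron with threshold 2, finite memory tau, Poisson input, and every output spike immediately fed back as an input impulse) can be written as follows; the rate and tau values are arbitrary, and the paper's exact analytical ISI density is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def binding_neuron_isi(rate, tau, threshold=2, n_spikes=20000):
    """Interspike intervals of a binding neuron with instantaneous feedback:
    input impulses are remembered for tau seconds, the neuron fires when
    `threshold` impulses coexist, and each output spike is re-injected."""
    isi, t_last = [], 0.0
    stored = [0.0]                 # fed-back impulse present right after a spike
    t = 0.0
    while len(isi) < n_spikes:
        t += rng.exponential(1.0 / rate)                # next Poisson input impulse
        stored = [s for s in stored if t - s < tau]     # forget expired impulses
        stored.append(t)
        if len(stored) >= threshold:                    # binding condition met
            isi.append(t - t_last)
            t_last = t
            stored = [t]                                # reset, keep fed-back impulse
    return np.array(isi)

intervals = binding_neuron_isi(rate=50.0, tau=0.02)
hist, edges = np.histogram(intervals, bins=100, density=True)
```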
1211.6611 | Marcelo Briones | Danielle C. F. Silva, Richard C. Silva, Renata C. Ferreira and Marcelo
R. S. Briones | Examining marginal sequence similarities between bacterial type III
secretion system and Trypanosoma cruzi surface proteins: Horizontal gene
transfer or convergent evolution? | 40 pages, 9 figures, 6 tables | null | null | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The cell invasion mechanism of Trypanosoma cruzi has similarities with some
intracellular bacterial taxa especially regarding calcium mobilization. This
mechanism is not observed in other trypanosomatids, suggesting that the
molecules involved in this type of cell invasion were (1) acquired by horizontal
gene transfer; (2) inherited since the Bacteria-Neomura bifurcation (1.9 billion
to 900 million years ago) and secondarily lost in the other trypanosomatid
lineages; or (3) evolved de novo from non-homologous proteins via convergent
evolution. Similar to T. cruzi, several bacterial
genera require increased host cell cytosolic calcium for intracellular
invasion. Among intracellular bacteria, the mechanism of host cell invasion of
genus Salmonella is the most similar to T. cruzi. The invasion of Salmonella
occurs by contact with the host's cell surface and is mediated by the type III
secretion system (T3SS) that promotes the contact-dependent translocation of
effector proteins directly into host's cell cytoplasm. Here we provide evidence
of distant sequence similarities and structurally conserved domains between T.
cruzi and Salmonella spp T3SS proteins. Exhaustive database searches were
directed to a wide range of intracellular bacteria and trypanosomatids,
exploring sequence patterns for comparison of structural similarities and
Bayesian phylogenies. Based on our data we hypothesize that T. cruzi acquired
genes for calcium mobilization mediated invasion by ancient horizontal gene
transfer from ancestral Salmonella lineages.
| [
{
"created": "Wed, 28 Nov 2012 14:35:37 GMT",
"version": "v1"
}
] | 2012-11-29 | [
[
"Silva",
"Danielle C. F.",
""
],
[
"Silva",
"Richard C.",
""
],
[
"Ferreira",
"Renata C.",
""
],
[
"Briones",
"Marcelo R. S.",
""
]
] | The cell invasion mechanism of Trypanosoma cruzi has similarities with some intracellular bacterial taxa especially regarding calcium mobilization. This mechanism is not observed in other trypanosomatids, suggesting that the molecules involved in this type of cell invasion were a product of (1) acquired by horizontal gene transfer; (2) secondary loss in the other trypanosomatid lineages of the mechanism inherited since the bifurcation Bacteria-Neomura (1.9 billion to 900 million years ago) or (3) de novo evolution from non-homologous proteins via convergent evolution. Similar to T. cruzi, several bacterial genera require increased host cell cytosolic calcium for intracellular invasion. Among intracellular bacteria, the mechanism of host cell invasion of genus Salmonella is the most similar to T. cruzi. The invasion of Salmonella occurs by contact with the host's cell surface and is mediated by the type III secretion system (T3SS) that promotes the contact-dependent translocation of effector proteins directly into host's cell cytoplasm. Here we provide evidence of distant sequence similarities and structurally conserved domains between T. cruzi and Salmonella spp T3SS proteins. Exhaustive database searches were directed to a wide range of intracellular bacteria and trypanosomatids, exploring sequence patterns for comparison of structural similarities and Bayesian phylogenies. Based on our data we hypothesize that T. cruzi acquired genes for calcium mobilization mediated invasion by ancient horizontal gene transfer from ancestral Salmonella lineages. |
2211.10422 | Sam Sinai | Lauren Berk Wheelock, Stephen Malina, Jeffrey Gerold, Sam Sinai | Forecasting labels under distribution-shift for machine-guided sequence
design | 15 pages, 3 figures, to appear in MLCB-PMLR proceedings, oral
presentation at MLCB 2022 and LMLR 2022 | null | null | null | q-bio.QM cs.LG math.OC q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | The ability to design and optimize biological sequences with specific
functionalities would unlock enormous value in technology and healthcare. In
recent years, machine learning-guided sequence design has progressed this goal
significantly, though validating designed sequences in the lab or clinic takes
many months and substantial labor. It is therefore valuable to assess the
likelihood that a designed set contains sequences of the desired quality (which
often lies outside the label distribution in our training data) before
committing resources to an experiment. Forecasting, a prominent concept in many
domains where feedback can be delayed (e.g. elections), has not been used or
studied in the context of sequence design. Here we propose a method to guide
decision-making that forecasts the performance of high-throughput libraries
(e.g. containing $10^5$ unique variants) based on estimates provided by models,
yielding a posterior for the distribution of labels in the library. We show
that our method outperforms baselines that naively use model scores to estimate
library performance, which are the only tool available today for this purpose.
| [
{
"created": "Fri, 18 Nov 2022 18:35:50 GMT",
"version": "v1"
}
] | 2022-11-21 | [
[
"Wheelock",
"Lauren Berk",
""
],
[
"Malina",
"Stephen",
""
],
[
"Gerold",
"Jeffrey",
""
],
[
"Sinai",
"Sam",
""
]
] | The ability to design and optimize biological sequences with specific functionalities would unlock enormous value in technology and healthcare. In recent years, machine learning-guided sequence design has progressed this goal significantly, though validating designed sequences in the lab or clinic takes many months and substantial labor. It is therefore valuable to assess the likelihood that a designed set contains sequences of the desired quality (which often lies outside the label distribution in our training data) before committing resources to an experiment. Forecasting, a prominent concept in many domains where feedback can be delayed (e.g. elections), has not been used or studied in the context of sequence design. Here we propose a method to guide decision-making that forecasts the performance of high-throughput libraries (e.g. containing $10^5$ unique variants) based on estimates provided by models, providing a posterior for the distribution of labels in the library. We show that our method outperforms baselines that naively use model scores to estimate library performance, which are the only tool available today for this purpose. |
1508.02309 | Keith Lidke | Peter K. Relich, Mark J. Olah, Patrick J. Cutler, Keith A. Lidke | Estimation of the Diffusion Constant from Intermittent Trajectories with
Variable Position Uncertainties | 2 figures: 6 plots | Phys. Rev. E 93, 042401 (2016) | 10.1103/PhysRevE.93.042401 | null | q-bio.SC | http://creativecommons.org/licenses/by/4.0/ | The movement of a particle described by Brownian motion is quantified by a
single parameter, $D$, the diffusion constant. The estimation of $D$ from a
discrete sequence of noisy observations is a fundamental problem in biological
single particle tracking experiments since it can report on the environment
and/or the state of the particle itself via its hydrodynamic radius. Here we
present a method to estimate $D$ that takes into account several effects that
occur in practice, that are important for correct estimation of $D$, and that
have hitherto not been combined together for estimation of $D$. These effects
are motion blur from finite integration time of the camera, intermittent
trajectories, and time-dependent localization uncertainty. Our estimation
procedure, a maximum likelihood estimation, follows directly from the
likelihood expression for a discretely observed Brownian trajectory that
explicitly includes these effects. The manuscript begins with the formulation
of the likelihood expression and then presents three methods to find the exact
solution. Each method has its own advantages in either computational
robustness, theoretical insight, or the estimation of hidden variables. We then
compare our estimator to previously published estimators using a squared log
loss function to demonstrate the benefit of including these effects.
| [
{
"created": "Mon, 10 Aug 2015 16:22:56 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Aug 2015 11:46:32 GMT",
"version": "v2"
}
] | 2016-04-13 | [
[
"Relich",
"Peter K.",
""
],
[
"Olah",
"Mark J.",
""
],
[
"Cutler",
"Patrick J.",
""
],
[
"Lidke",
"Keith A.",
""
]
] | The movement of a particle described by Brownian motion is quantified by a single parameter, $D$, the diffusion constant. The estimation of $D$ from a discrete sequence of noisy observations is a fundamental problem in biological single particle tracking experiments since it can report on the environment and/or the state of the particle itself via hydrodynamic radius. Here we present a method to estimate $D$ that takes into account several effects that occur in practice, that are important for correct estimation of $D$, and that have hitherto not been combined together for estimation of $D$. These effects are motion blur from finite integration time of the camera, intermittent trajectories, and time-dependent localization uncertainty. Our estimation procedure, a maximum likelihood estimation, follows directly from the likelihood expression for a discretely observed Brownian trajectory that explicitly includes these effects. The manuscript begins with the formulation of the likelihood expression and then presents three methods to find the exact solution. Each method has its own advantages in either computational robustness, theoretical insight, or the estimation of hidden variables. We then compare our estimator to previously published estimators using a squared log loss function to demonstrate the benefit of including these effects. |
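A stripped-down version of the estimation problem above treats the observed 1D displacements as independent Gaussians with variance 2 D dt + 2 sigma^2, ignoring motion blur and the correlations between successive displacements that the full likelihood handles exactly; the sketch below maximises that simplified likelihood on invented simulation parameters.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(D, dx, dt, sigma):
    """Independent-Gaussian approximation: Var(dx) = 2 D dt + 2 sigma^2."""
    var = 2.0 * D * dt + 2.0 * sigma**2
    return 0.5 * np.sum(np.log(2.0 * np.pi * var) + dx**2 / var)

rng = np.random.default_rng(2)
D_true, dt, sigma, n = 0.5, 0.05, 0.03, 2000        # illustrative values
x_true = np.cumsum(rng.normal(0.0, np.sqrt(2.0 * D_true * dt), n))
x_obs = x_true + rng.normal(0.0, sigma, n)          # noisy localizations
dx = np.diff(x_obs)

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10.0),
                      args=(dx, dt, sigma), method="bounded")
print(f"estimated D = {res.x:.3f} (simulated with D = {D_true})")
```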
2109.03183 | Katharina Huber | Andrew Francis and Katharina T. Huber and Vincent Moulton and Taoyang
Wu | Encoding and ordering X-cactuses | null | null | null | null | q-bio.PE math.CO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Phylogenetic networks are a generalization of evolutionary or phylogenetic
trees that are commonly used to represent the evolution of species which cross
with one another. A special type of phylogenetic network is an {\em
$X$-cactus}, which is essentially a cactus graph in which all vertices with
degree less than three are labelled by at least one element from a set $X$ of
species. In this paper, we present a way to {\em encode} $X$-cactuses in terms
of certain collections of partitions of $X$ that naturally arise from
$X$-cactuses. Using this encoding, we also introduce a partial order on the set
of $X$-cactuses (up to isomorphism), and derive some structural properties of
the resulting partially ordered set. This includes an analysis of some
properties of its least upper and greatest lower bounds. Our results not only
extend some fundamental properties of phylogenetic trees to $X$-cactuses, but
also provide a new approach to solving topical problems in phylogenetic
network theory such as deriving consensus networks.
| [
{
"created": "Tue, 7 Sep 2021 16:28:08 GMT",
"version": "v1"
}
] | 2021-09-08 | [
[
"Francis",
"Andrew",
""
],
[
"Huber",
"Katharina T.",
""
],
[
"Moulton",
"Vincent",
""
],
[
"Wu",
"Taoyang",
""
]
] | Phylogenetic networks are a generalization of evolutionary or phylogenetic trees that are commonly used to represent the evolution of species which cross with one another. A special type of phylogenetic network is an {\em $X$-cactus}, which is essentially a cactus graph in which all vertices with degree less than three are labelled by at least one element from a set $X$ of species. In this paper, we present a way to {\em encode} $X$-cactuses in terms of certain collections of partitions of $X$ that naturally arise from $X$-cactuses. Using this encoding, we also introduce a partial order on the set of $X$-cactuses (up to isomorphism), and derive some structural properties of the resulting partially ordered set. This includes an analysis of some properties of its least upper and greatest lower bounds. Our results not only extend some fundamental properties of phylogenetic trees to $X$-cactuses, but also provides a new approach to solving topical problems in phylogenetic network theory such as deriving consensus networks. |
0908.2827 | Nicola Fameli | Nicola Fameli, Kuo-Hsing Kuo, Cornelis van Breemen | A model for the generation of localized transient Na+ elevations in
vascular smooth muscle | 16 pages, 9 figures; an abridged version submitted for publication | Biochem Biophys Res Commun 389, 461-465 (2009) | 10.1016/j.bbrc.2009.08.166 | null | q-bio.QM q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a stochastic computational model to study the mechanism of
signalling between a source and a target ionic transporter, both localized on
the plasma membrane (PM) and in intracellular nanometre-scale subplasmalemmal
signalling compartments comprising the PM, the sarcoplasmic reticulum (SR),
Ca2+ and Na+ transporters, and the intervening cytosol. We refer to these
compartments, sometimes called junctions, as cytoplasmic nanospaces or
nanodomains. In the chain of events leading to Ca2+ influx for SR reloading
during asynchronous Ca2+ waves in vascular smooth muscle (VSM), the physical
and functional link between non-selective cation channels (NSCC) and Na+/Ca2+
exchangers (NCX) needs to be elucidated in view of two recent findings: the
identification of the transient receptor potential canonical channel 6 (TRPC6)
as a crucial NSCC in VSM cells and the observation of localized cytosolic [Na+]
transients following purinergic stimulation of these cells. Having previously
helped clarify the Ca2+ signalling step between NCX and SERCA behind SR Ca2+
refilling, this quantitative approach now allows us to model the upstream
linkage of NSCC and NCX. We have implemented a random walk (RW) Monte Carlo
(MC) model with simulations mimicking a Na+ diffusion process originating at
the NSCC within PM-SR junctions. The model calculates the average [Na+] in the
nanospace and also produces [Na+] profiles as a function of distance from the
Na+ source. Our results highlight the necessity of a strategic juxtaposition of
the relevant signalling channels as well as other physical structures within
the nanospaces to permit adequate [Na+] build-up to provoke NCX reversal and
Ca2+ influx to refill the SR.
| [
{
"created": "Wed, 19 Aug 2009 22:55:49 GMT",
"version": "v1"
}
] | 2009-10-22 | [
[
"Fameli",
"Nicola",
""
],
[
"Kuo",
"Kuo-Hsing",
""
],
[
"van Breemen",
"Cornelis",
""
]
] | We present a stochastic computational model to study the mechanism of signalling between a source and a target ionic transporter, both localized on the plasma membrane (PM) and in intracellular nanometre-scale subplasmalemmal signalling compartments comprising the PM, the sarcoplasmic reticulum (SR), Ca2+ and Na+ transporters, and the intervening cytosol. We refer to these compartments, sometimes called junctions, as cytoplasmic nanospaces or nanodomains. In the chain of events leading to Ca2+ influx for SR reloading during asynchronous Ca2+ waves in vascular smooth muscle (VSM), the physical and functional link between non-selective cation channels (NSCC) and Na+/Ca2+ exchangers (NCX) needs to be elucidated in view of two recent findings: the identification of the transient receptor potential canonical channel 6 (TRPC6) as a crucial NSCC in VSM cells and the observation of localized cytosolic [Na+] transients following purinergic stimulation of these cells. Having previously helped clarify the Ca2+ signalling step between NCX and SERCA behind SR Ca2+ refilling, this quantitative approach now allows us to model the upstream linkage of NSCC and NCX. We have implemented a random walk (RW) Monte Carlo (MC) model with simulations mimicking a Na+ diffusion process originating at the NSCC within PM-SR junctions. The model calculates the average [Na+] in the nanospace and also produces [Na+] profiles as a function of distance from the Na+ source. Our results highlight the necessity of a strategic juxtaposition of the relevant signalling channels as well as other physical structures within the nanospaces to permit adequate [Na+] build-up to provoke NCX reversal and Ca2+ influx to refill the SR. |
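The random walk Monte Carlo idea used above can be sketched with NumPy: ions are released at a point source inside a slab-shaped junction and their radial distribution is tallied after a fixed diffusion time. All dimensions and the diffusivity are rough illustrative numbers, not the paper's geometry or boundary handling.

```python
import numpy as np

rng = np.random.default_rng(3)

D = 0.7e-9                        # m^2/s, rough aqueous Na+ diffusivity (illustrative)
dt, n_steps, n_ions = 1e-9, 2000, 5000
step_sd = np.sqrt(2.0 * D * dt)                    # per-axis Gaussian step size

pos = np.zeros((n_ions, 3))                        # all ions start at the source
half_gap = 10e-9                                   # 20 nm PM-SR gap (illustrative)
for _ in range(n_steps):
    pos += rng.normal(0.0, step_sd, size=pos.shape)
    pos[:, 2] = np.clip(pos[:, 2], -half_gap, half_gap)   # crude reflecting walls

r = np.linalg.norm(pos, axis=1)
counts, edges = np.histogram(r, bins=30)           # ion number vs distance from source
```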
1104.3783 | Aldo Di Biasio | Aldo Di Biasio, Elena Agliari, Adriano Barra and Raffaella Burioni | Mean-field cooperativity in chemical kinetics | 25 pages, 4 figures | Theoretical Chemistry Accounts (2012) | 10.1007/s00214-012-1104-3 | null | q-bio.QM cond-mat.stat-mech physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider cooperative reactions and we study the effects of the interaction
strength among the system components on the reaction rate, hence realizing a
connection between microscopic and macroscopic observables. Our approach is
based on statistical mechanics models and it is developed analytically via
mean-field techniques. First of all, we show that, when the coupling strength
is set positive, the model is able to consistently recover all the various
cooperative measures previously introduced, hence obtaining a single unifying
framework. Furthermore, we introduce a criterion to discriminate between weak
and strong cooperativity, based on a measure of "susceptibility". We also
properly extend the model in order to account for multiple attachments
phenomena: this is realized by incorporating within the model $p$-body
interactions, whose non-trivial cooperative capability is investigated too.
| [
{
"created": "Tue, 19 Apr 2011 15:45:35 GMT",
"version": "v1"
}
] | 2012-06-07 | [
[
"Di Biasio",
"Aldo",
""
],
[
"Agliari",
"Elena",
""
],
[
"Barra",
"Adriano",
""
],
[
"Burioni",
"Raffaella",
""
]
] | We consider cooperative reactions and we study the effects of the interaction strength among the system components on the reaction rate, hence realizing a connection between microscopic and macroscopic observables. Our approach is based on statistical mechanics models and it is developed analytically via mean-field techniques. First of all, we show that, when the coupling strength is set positive, the model is able to consistently recover all the various cooperative measures previously introduced, hence obtaining a single unifying framework. Furthermore, we introduce a criterion to discriminate between weak and strong cooperativity, based on a measure of "susceptibility". We also properly extend the model in order to account for multiple attachments phenomena: this is realized by incorporating within the model $p$-body interactions, whose non-trivial cooperative capability is investigated too. |
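A bare-bones mean-field picture of cooperative binding, in the spirit of the statistical-mechanics treatment above though not its exact model, solves a self-consistency equation in which each site feels the ligand field plus a coupling to the average occupancy; the slope of the resulting binding curve plays the role of the "susceptibility" used to grade cooperativity. The coupling value below is arbitrary.

```python
import numpy as np

def mean_field_occupancy(h, J, n_iter=10000, tol=1e-12):
    """Self-consistent occupancy theta = sigmoid(h + J*(2*theta - 1));
    J > 0 mimics positive cooperativity (illustrative sketch only)."""
    theta = 0.5
    for _ in range(n_iter):
        new = 1.0 / (1.0 + np.exp(-(h + J * (2.0 * theta - 1.0))))
        if abs(new - theta) < tol:
            break
        theta = new
    return theta

h = np.linspace(-6.0, 6.0, 241)                    # ligand 'field' (log activity)
theta = np.array([mean_field_occupancy(x, J=1.5) for x in h])
susceptibility = np.gradient(theta, h)             # steepness of the binding curve
```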
2011.08902 | Marisol Salgado-Albarran | Marisol Salgado-Albarran, Erick I. Navarro-Delgado, Aylin Del
Moral-Morales, Nicolas Alcaraz, Jan Baumbach, Rodrigo Gonzalez-Barrios,
Ernesto Soto-Reyes | Comparative transcriptome analysis reveals key epigenetic targets in
SARS-CoV-2 infection | 33 pages, 2 tables, 5 figures, 4 supplementary figures | null | 10.1038/s41540-021-00181-x | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | COVID-19 is an infection caused by SARS-CoV-2 (Severe Acute Respiratory
Syndrome coronavirus 2), which has caused a global outbreak. Current research
efforts are focused on the understanding of the molecular mechanisms involved
in SARS-CoV-2 infection in order to propose drug-based therapeutic options.
Transcriptional changes due to epigenetic regulation are key host cell
responses to viral infection and have been studied in SARS-CoV and MERS-CoV;
however, such changes are not fully described for SARS-CoV-2. In this study, we
analyzed multiple transcriptomes obtained from cell lines infected with
MERS-CoV, SARS-CoV and SARS-CoV-2, and from COVID-19 patient-derived samples.
Using integrative analyses of gene co-expression networks and de-novo pathway
enrichment, we characterize different gene modules and protein pathways
enriched with Transcription Factors or Epifactors relevant for SARS-CoV-2
infection. We identified EP300, MOV10, RELA and TRIM25 as top candidates, and
more than 60 additional proteins involved in the epigenetic response during
viral infection that have therapeutic potential. Our results show that
targeting the epigenetic machinery could be a feasible alternative to treat
COVID-19.
| [
{
"created": "Tue, 17 Nov 2020 19:40:19 GMT",
"version": "v1"
}
] | 2021-05-27 | [
[
"Salgado-Albarran",
"Marisol",
""
],
[
"Navarro-Delgado",
"Erick I.",
""
],
[
"Del Moral-Morales",
"Aylin",
""
],
[
"Alcaraz",
"Nicolas",
""
],
[
"Baumbach",
"Jan",
""
],
[
"Gonzalez-Barrios",
"Rodrigo",
""
],
[
"Soto-Reyes",
"Ernesto",
""
]
] | COVID-19 is an infection caused by SARS-CoV-2 (Severe Acute Respiratory Syndrome coronavirus 2), which has caused a global outbreak. Current research efforts are focused on the understanding of the molecular mechanisms involved in SARS-CoV-2 infection in order to propose drug-based therapeutic options. Transcriptional changes due to epigenetic regulation are key host cell responses to viral infection and have been studied in SARS-CoV and MERS-CoV; however, such changes are not fully described for SARS-CoV-2. In this study, we analyzed multiple transcriptomes obtained from cell lines infected with MERS-CoV, SARS-CoV and SARS-CoV-2, and from COVID-19 patient-derived samples. Using integrative analyses of gene co-expression networks and de-novo pathway enrichment, we characterize different gene modules and protein pathways enriched with Transcription Factors or Epifactors relevant for SARS-CoV-2 infection. We identified EP300, MOV10, RELA and TRIM25 as top candidates, and more than 60 additional proteins involved in the epigenetic response during viral infection that have therapeutic potential. Our results show that targeting the epigenetic machinery could be a feasible alternative to treat COVID-19. |
1604.06713 | Luca Ferretti | Luca Ferretti, Alexander Klassmann, Emanuele Raineri, Thomas Wiehe,
Sebastian E. Ramos-Onsins, Guillaume Achaz | The expected neutral frequency spectrum of linked sites | 26 pages, 5 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an exact, closed expression for the expected neutral Site
Frequency Spectrum for two neutral sites, 2-SFS, without recombination. This
spectrum is the immediate extension of the well known single site $\theta/f$
neutral SFS. Similar formulae are also provided for the case of the expected
SFS of sites that are linked to a focal neutral mutation of known frequency.
Formulae for finite samples are obtained by coalescent methods and remarkably
simple expressions are derived for the SFS of a large population, which are
also solutions of the multi-allelic Kolmogorov equations. Besides the general
interest of these new spectra, they relate to interesting biological cases such
as structural variants and introgressions. As an example, we present the
expected neutral frequency spectrum of regions with a chromosomal inversion.
| [
{
"created": "Fri, 22 Apr 2016 15:28:33 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Jan 2017 19:22:48 GMT",
"version": "v2"
}
] | 2017-01-11 | [
[
"Ferretti",
"Luca",
""
],
[
"Klassmann",
"Alexander",
""
],
[
"Raineri",
"Emanuele",
""
],
[
"Wiehe",
"Thomas",
""
],
[
"Ramos-Onsins",
"Sebastian E.",
""
],
[
"Achaz",
"Guillaume",
""
]
] | We present an exact, closed expression for the expected neutral Site Frequency Spectrum for two neutral sites, 2-SFS, without recombination. This spectrum is the immediate extension of the well known single site $\theta/f$ neutral SFS. Similar formulae are also provided for the case of the expected SFS of sites that are linked to a focal neutral mutation of known frequency. Formulae for finite samples are obtained by coalescent methods and remarkably simple expressions are derived for the SFS of a large population, which are also solutions of the multi-allelic Kolmogorov equations. Besides the general interest of these new spectra, they relate to interesting biological cases such as structural variants and introgressions. As an example, we present the expected neutral frequency spectrum of regions with a chromosomal inversion. |
0912.3057 | Maurizio De Pitta' | Maurizio De Pitta` (1), Mati Goldberg (1), Vladislav Volman (2 and 3),
Hugues Berry (4) and Eshel Ben-Jacob (1 and 2) ((1) School of Physics and
Astronomy, Tel Aviv University, Israel, (2) Center for Theoretical Biological
Physics, UCSD, La Jolla, CA, USA, (3) Computational Neurobiology Lab, The
Salk Institute, La Jolla, CA, USA, (4) Project-Team Alchemy, INRIA Saclay,
Orsay, France) | Glutamate regulation of calcium and IP3 oscillating and pulsating
dynamics in astrocytes | 42 pages, 16 figures, 1 table. Figure filenames mirror figure order
in the paper. Ending "S" in figure filenames stands for "Supplementary
Figure". This article was selected by the Faculty of 1000 Biology: "Genevieve
Dupont: Faculty of 1000 Biology, 4 Sep 2009" at
http://www.f1000biology.com/article/id/1163674/evaluation | J. Biol. Phys. 35(4) (2009) 383-411 | 10.1007/s10867-009-9155-y | null | q-bio.NC q-bio.MN q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years have witnessed an increasing interest in neuron-glia
communication. This interest stems from the realization that glia participates
in cognitive functions and information processing and is involved in many brain
disorders and neurodegenerative diseases. An important process in neuron-glia
communications is astrocyte encoding of synaptic information transfer: the
modulation of intracellular calcium dynamics in astrocytes in response to
synaptic activity. Here, we derive and investigate a concise mathematical model
for glutamate-induced astrocytic intracellular Ca2+ dynamics that captures the
essential biochemical features of the regulatory pathway of inositol
1,4,5-trisphosphate (IP3). Starting from the well-known two-state Li-Rinzel
model for calcium-induced-calcium release, we incorporate the regulation of the
IP3 production and phosphorylation. Doing so, we extended it to a three-state
model (referred to as the G-ChI model) that can account for Ca2+ oscillations
triggered by endogenous IP3 metabolism as well as by IP3 production induced by
external glutamate signals. Compared to previous similar models, our three-state
model includes a more realistic description of the IP3 production and degradation
pathways, lumping together their essential nonlinearities within a concise
formulation. Using bifurcation analysis and time simulations, we demonstrate
the existence of new putative dynamical features. The cross-couplings between
IP3 and Ca2+ pathways endow the system with self-consistent oscillator
properties and favor mixed frequency-amplitude encoding modes over pure
amplitude modulation ones. These and additional results of our model are in
general agreement with available experimental data and may have important
implications on the role of astrocytes in the synaptic transfer of information.
| [
{
"created": "Wed, 16 Dec 2009 05:51:55 GMT",
"version": "v1"
}
] | 2009-12-17 | [
[
"De Pitta`",
"Maurizio",
"",
"2 and 3"
],
[
"Goldberg",
"Mati",
"",
"2 and 3"
],
[
"Volman",
"Vladislav",
"",
"2 and 3"
],
[
"Berry",
"Hugues",
"",
"1 and 2"
],
[
"Ben-Jacob",
"Eshel",
"",
"1 and 2"
]
] | Recent years have witnessed an increasing interest in neuron-glia communication. This interest stems from the realization that glia participates in cognitive functions and information processing and is involved in many brain disorders and neurodegenerative diseases. An important process in neuron-glia communications is astrocyte encoding of synaptic information transfer: the modulation of intracellular calcium dynamics in astrocytes in response to synaptic activity. Here, we derive and investigate a concise mathematical model for glutamate-induced astrocytic intracellular Ca2+ dynamics that captures the essential biochemical features of the regulatory pathway of inositol 1,4,5-trisphosphate (IP3). Starting from the well-known two-state Li-Rinzel model for calcium-induced-calcium release, we incorporate the regulation of the IP3 production and phosphorylation. Doing so we extended it to a three-state model (referred as the G-ChI model), that could account for Ca2+ oscillations triggered by endogenous IP3 metabolism as well as by IP3 production by external glutamate signals. Compared to previous similar models, our three-state models include a more realistic description of the IP3 production and degradation pathways, lumping together their essential nonlinearities within a concise formulation. Using bifurcation analysis and time simulations, we demonstrate the existence of new putative dynamical features. The cross-couplings between IP3 and Ca2+ pathways endows the system with self-consistent oscillator properties and favor mixed frequency-amplitude encoding modes over pure amplitude modulation ones. These and additional results of our model are in general agreement with available experimental data and may have important implications on the role of astrocytes in the synaptic transfer of information. |
2406.01648 | Craig McKenzie | Craig I. McKenzie | Consciousness defined: requirements for biological and artificial
general intelligence | 16 pages, 1 figure, 2 tables, 74 references | null | null | null | q-bio.NC cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Consciousness is notoriously hard to define with objective terms. An
objective definition of consciousness is critically needed so that we might
accurately understand how consciousness and resultant choice behaviour may
arise in biological or artificial systems. Many theories have integrated
neurobiological and psychological research to explain how consciousness might
arise, but few, if any, outline what is fundamentally required to generate
consciousness. To identify such requirements, I examine current theories of
consciousness and corresponding scientific research to generate a new
definition of consciousness from first principles. Critically, consciousness is
the apparatus that provides the ability to make decisions, but it is not
defined by the decision itself. As such, a definition of consciousness does not
require choice behaviour or an explicit awareness of temporality despite both
being well-characterised outcomes of conscious thought. Rather, requirements
for consciousness include: at least some capability for perception, a memory
for the storage of such perceptual information, which in turn provides a
framework for an imagination with which a sense of self is capable of
making decisions based on possible and desired futures. Thought experiments and
observable neurological phenomena demonstrate that these components are
fundamentally required of consciousness, whereby the loss of any one component
removes the capability for conscious thought. Identifying these requirements
provides a new definition for consciousness by which we can objectively
determine consciousness in any conceivable agent, such as non-human animals and
artificially intelligent systems.
| [
{
"created": "Mon, 3 Jun 2024 14:20:56 GMT",
"version": "v1"
}
] | 2024-06-05 | [
[
"McKenzie",
"Craig I.",
""
]
] | Consciousness is notoriously hard to define with objective terms. An objective definition of consciousness is critically needed so that we might accurately understand how consciousness and resultant choice behaviour may arise in biological or artificial systems. Many theories have integrated neurobiological and psychological research to explain how consciousness might arise, but few, if any, outline what is fundamentally required to generate consciousness. To identify such requirements, I examine current theories of consciousness and corresponding scientific research to generate a new definition of consciousness from first principles. Critically, consciousness is the apparatus that provides the ability to make decisions, but it is not defined by the decision itself. As such, a definition of consciousness does not require choice behaviour or an explicit awareness of temporality despite both being well-characterised outcomes of conscious thought. Rather, requirements for consciousness include: at least some capability for perception, a memory for the storage of such perceptual information which in turn provides a framework for an imagination with which a sense of self can be capable of making decisions based on possible and desired futures. Thought experiments and observable neurological phenomena demonstrate that these components are fundamentally required of consciousness, whereby the loss of any one component removes the capability for conscious thought. Identifying these requirements provides a new definition for consciousness by which we can objectively determine consciousness in any conceivable agent, such as non-human animals and artificially intelligent systems. |
2102.04296 | Jacob Bradley | Jacob R. Bradley and Timothy I. Cannings | Data-driven design of targeted gene panels for estimating immunotherapy
biomarkers | 21 pages, 10 figures | null | null | null | q-bio.GN stat.AP | http://creativecommons.org/licenses/by/4.0/ | We introduce a novel data-driven framework for the design of targeted gene
panels for estimating exome-wide biomarkers in cancer immunotherapy. Our first
goal is to develop a generative model for the profile of mutation across the
exome, which allows for gene- and variant type-dependent mutation rates. Based
on this model, we then propose a new procedure for estimating biomarkers such
as tumour mutation burden and tumour indel burden. Our approach allows the
practitioner to select a targeted gene panel of a prespecified size, and then
construct an estimator that only depends on the selected genes. Alternatively,
the practitioner may apply our method to make predictions based on an existing
gene panel, or to augment a gene panel to a given size. We demonstrate the
excellent performance of our proposal using data from three non-small cell lung
cancer studies, as well as data from six other cancer types.
| [
{
"created": "Mon, 8 Feb 2021 16:03:05 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Feb 2021 13:48:47 GMT",
"version": "v2"
},
{
"created": "Thu, 3 Feb 2022 18:50:48 GMT",
"version": "v3"
}
] | 2022-02-04 | [
[
"Bradley",
"Jacob R.",
""
],
[
"Cannings",
"Timothy I.",
""
]
] | We introduce a novel data-driven framework for the design of targeted gene panels for estimating exome-wide biomarkers in cancer immunotherapy. Our first goal is to develop a generative model for the profile of mutation across the exome, which allows for gene- and variant type-dependent mutation rates. Based on this model, we then propose a new procedure for estimating biomarkers such as tumour mutation burden and tumour indel nurden. Our approach allows the practitioner to select a targeted gene panel of a prespecified size, and then construct an estimator that only depends on the selected genes. Alternatively, the practitioner may apply our method to make predictions based on an existing gene panel, or to augment a gene panel to a given size. We demonstrate the excellent performance of our proposal using data from three non-small cell lung cancer studies, as well as data from six other cancer types. |
2212.05064 | Jianing Xi | Jianing Xi, Zhen Deng, Yang Liu, Qian Wang, Wen Shi | Integrating multi-type aberrations from DNA and RNA through dynamic
mapping gene space for subtype-specific breast cancer driver discovery | 14 pages, 5 figures, 1 table | null | 10.7717/peerj.14843 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Driver event discovery is a crucial demand for breast cancer diagnosis and
therapy. In particular, discovering the subtype-specificity of drivers can
prompt personalized biomarker discovery and precision treatment of cancer
patients. Still, most of the existing computational driver discovery studies mainly
exploit the information from DNA aberrations and gene interactions. Notably,
cancer driver events can occur due not only to DNA aberrations but also to RNA
alterations, but integrating multi-type aberrations from both DNA and RNA is
still a challenging task for breast cancer drivers. On the one hand, the data
formats of different aberration types also differ from each other, known as
data format incompatibility. On the other hand, different types of aberrations
demonstrate distinct patterns across samples, known as aberration type
heterogeneity. To promote the integrated analysis of subtype-specific breast
cancer drivers, we design a "splicing-and-fusing" framework to address the
issues of data format incompatibility and aberration type heterogeneity
respectively. To overcome the data format incompatibility, the "splicing-step"
employs a knowledge graph structure to connect multi-type aberrations from the
DNA and RNA data into a unified formation. To tackle the aberration type
heterogeneity, the "fusing-step" adopts a dynamic mapping gene space
integration approach to represent the multi-type information by vectorized
profiles. The experiments also demonstrate the advantages of our approach in
both the integration of multi-type aberrations from DNA and RNA and the
discovery of subtype-specific breast cancer drivers. In summary, our
"splicing-and-fusing" framework with knowledge graph connection and dynamic
mapping gene space fusion of multi-type aberrations data from DNA and RNA can
successfully discover potential breast cancer drivers with subtype-specificity
indication.
| [
{
"created": "Fri, 9 Dec 2022 16:53:46 GMT",
"version": "v1"
}
] | 2023-02-06 | [
[
"Xi",
"Jianing",
""
],
[
"Deng",
"Zhen",
""
],
[
"Liu",
"Yang",
""
],
[
"Wang",
"Qian",
""
],
[
"Shi",
"Wen",
""
]
] | Driver event discovery is a crucial demand for breast cancer diagnosis and therapy. Especially, discovering subtype-specificity of drivers can prompt the personalized biomarker discovery and precision treatment of cancer patients. still, most of the existing computational driver discovery studies mainly exploit the information from DNA aberrations and gene interactions. Notably, cancer driver events would occur due to not only DNA aberrations but also RNA alternations, but integrating multi-type aberrations from both DNA and RNA is still a challenging task for breast cancer drivers. On the one hand, the data formats of different aberration types also differ from each other, known as data format incompatibility. One the other hand, different types of aberrations demonstrate distinct patterns across samples, known as aberration type heterogeneity. To promote the integrated analysis of subtype-specific breast cancer drivers, we design a "splicing-and-fusing" framework to address the issues of data format incompatibility and aberration type heterogeneity respectively. To overcome the data format incompatibility, the "splicing-step" employs a knowledge graph structure to connect multi-type aberrations from the DNA and RNA data into a unified formation. To tackle the aberration type heterogeneity, the "fusing-step" adopts a dynamic mapping gene space integration approach to represent the multi-type information by vectorized profiles. The experiments also demonstrate the advantages of our approach in both the integration of multi-type aberrations from DNA and RNA and the discovery of subtype-specific breast cancer drivers. In summary, our "splicing-and-fusing" framework with knowledge graph connection and dynamic mapping gene space fusion of multi-type aberrations data from DNA and RNA can successfully discover potential breast cancer drivers with subtype-specificity indication. |
0906.2504 | Chiu Fan Lee | Chiu Fan Lee | Isotropic-nematic phase transition in amyloid fibrilization | Typos corrected | Physical Review E 80, 031902 (2009) | 10.1103/PhysRevE.80.031902 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We carry out a theoretical study on the isotropic-nematic phase transition
and phase separation in amyloid fibril solutions. Borrowing the thermodynamic
model employed in the study of cylindrical micelles, we investigate the
variations in the fibril length distribution and phase behavior with respect to
changes in the protein concentration, fibril's rigidity, and binding energy. We
then relate our theoretical findings to the nematic ordering observed in Hen
Lysozyme fibril solution.
| [
{
"created": "Sat, 13 Jun 2009 21:36:05 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jun 2009 19:12:03 GMT",
"version": "v2"
},
{
"created": "Tue, 11 Aug 2009 14:10:35 GMT",
"version": "v3"
}
] | 2009-09-10 | [
[
"Lee",
"Chiu Fan",
""
]
] | We carry out a theoretical study on the isotropic-nematic phase transition and phase separation in amyloid fibril solutions. Borrowing the thermodynamic model employed in the study of cylindrical micelles, we investigate the variations in the fibril length distribution and phase behavior with respect to changes in the protein concentration, fibril's rigidity, and binding energy. We then relate our theoretical findings to the nematic ordering observed in Hen Lysozyme fibril solution. |
q-bio/0411018 | Christof Aegerter | Tinri Aegerter-Wilmsen, Christof M. Aegerter and Ton Bisseling | Model for the robust establishment of precise proportions in the early
Drosophila embryo | 20 pages, 2 figures, accepted for publication in J. theor. Biol | null | null | null | q-bio.CB | null | During embryonic development, a spatial pattern is formed in which
proportions are established precisely. As an early pattern formation step in
Drosophila embryos, an anterior-posterior gradient of Bicoid (Bcd) induces
hunchback (hb) expression (Driever et al. 1989; Tautz et al. 1988). In contrast
to the Bcd gradient, the Hb profile includes information about the scale of the
embryo. Furthermore, the resulting hb expression pattern shows a much lower
embryo-to-embryo variability than the Bcd gradient (Houchmandzadeh et al.
2002). An additional graded posterior repressing activity could theoretically
account for the observed scaling. However, we show that such a model cannot
produce the observed precision in the Hb boundary, such that a fundamentally
different mechanism must be at work. We describe and simulate a model that can
account for the observed precise generation of the scaled Hb profile in a
highly robust manner. The proposed mechanism includes Staufen (Stau), an RNA
binding protein that appears essential to precision scaling (Houchmandzadeh et
al. 2002). In the model, Stau is released from both ends of the embryo and
relocalises hb RNA by increasing its mobility. This leads to an effective
transport of hb away from the respective Stau sources. The balance between
these opposing effects then gives rise to scaling and precision. Considering
the biological importance of robust precision scaling and the simplicity of the
model, the same principle may be employed more often during development.
| [
{
"created": "Thu, 4 Nov 2004 12:52:55 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Aegerter-Wilmsen",
"Tinri",
""
],
[
"Aegerter",
"Christof M.",
""
],
[
"Bisseling",
"Ton",
""
]
] | During embryonic development, a spatial pattern is formed in which proportions are established precisely. As an early pattern formation step in Drosophila embryos, an anterior-posterior gradient of Bicoid (Bcd) induces hunchback (hb) expression (Driever et al. 1989; Tautz et al. 1988). In contrast to the Bcd gradient, the Hb profile includes information about the scale of the embryo. Furthermore, the resulting hb expression pattern shows a much lower embryo-to-embryo variability than the Bcd gradient (Houchmandzadeh et al. 2002). An additional graded posterior repressing activity could theoretically account for the observed scaling. However, we show that such a model cannot produce the observed precision in the Hb boundary, such that a fundamentally different mechanism must be at work. We describe and simulate a model that can account for the observed precise generation of the scaled Hb profile in a highly robust manner. The proposed mechanism includes Staufen (Stau), an RNA binding protein that appears essential to precision scaling (Houchmandzadeh et al. 2002). In the model, Stau is released from both ends of the embryo and relocalises hb RNA by increasing its mobility. This leads to an effective transport of hb away from the respective Stau sources. The balance between these opposing effects then gives rise to scaling and precision. Considering the biological importance of robust precision scaling and the simplicity of the model, the same principle may be employed more often during development. |
1011.0241 | Christopher Hillar | Guy Isely, Christopher J. Hillar, Friedrich T. Sommer | Deciphering subsampled data: adaptive compressive sampling as a
principle of brain communication | 9 pages, NIPS 2010 | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new algorithm is proposed for a) unsupervised learning of sparse
representations from subsampled measurements and b) estimating the parameters
required for linearly reconstructing signals from the sparse codes. We verify
that the new algorithm performs efficient data compression on par with the
recent method of compressive sampling. Further, we demonstrate that the
algorithm performs robustly when stacked in several stages or when applied in
undercomplete or overcomplete situations. The new algorithm can explain how
neural populations in the brain that receive subsampled input through fiber
bottlenecks are able to form coherent response properties.
| [
{
"created": "Mon, 1 Nov 2010 03:23:52 GMT",
"version": "v1"
}
] | 2010-11-02 | [
[
"Isely",
"Guy",
""
],
[
"Hillar",
"Christopher J.",
""
],
[
"Sommer",
"Friedrich T.",
""
]
] | A new algorithm is proposed for a) unsupervised learning of sparse representations from subsampled measurements and b) estimating the parameters required for linearly reconstructing signals from the sparse codes. We verify that the new algorithm performs efficient data compression on par with the recent method of compressive sampling. Further, we demonstrate that the algorithm performs robustly when stacked in several stages or when applied in undercomplete or overcomplete situations. The new algorithm can explain how neural populations in the brain that receive subsampled input through fiber bottlenecks are able to form coherent response properties. |
1702.03474 | Cheng Ly | Andrea K. Barreiro, Cheng Ly | Practical Approximation Method for Firing Rate Models of Coupled Neural
Networks with Correlated Inputs | 15 pages, 7 figures | Phys. Rev. E 96, 022413 (2017) | 10.1103/PhysRevE.96.022413 | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rapid experimental advances now enable simultaneous electrophysiological
recording of neural activity at single-cell resolution across large regions of
the nervous system. Models of this neural network activity will necessarily
increase in size and complexity, thus increasing the computational cost of
simulating them and the challenge of analyzing them. Here we present a novel
method to approximate the activity and firing statistics of a general firing
rate network model (of Wilson-Cowan type) subject to noisy correlated
background inputs. The method requires solving a system of transcendental
equations and is fast compared to Monte Carlo simulations of coupled stochastic
differential equations. We implement the method with several examples of
coupled neural networks and show that the results are quantitatively accurate
even with moderate coupling strengths and an appreciable amount of
heterogeneity in many parameters. This work should be useful for investigating
how various neural attributes qualitatively affect the spiking statistics of
coupled neural networks. Matlab code implementing the method is freely
available at GitHub (\url{http://github.com/chengly70/FiringRateModReduction}).
| [
{
"created": "Sun, 12 Feb 2017 00:52:12 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Jun 2017 12:07:20 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Aug 2017 16:22:00 GMT",
"version": "v3"
},
{
"created": "Fri, 29 Sep 2017 11:57:42 GMT",
"version": "v4"
}
] | 2017-10-02 | [
[
"Barreiro",
"Andrea K.",
""
],
[
"Ly",
"Cheng",
""
]
] | Rapid experimental advances now enable simultaneous electrophysiological recording of neural activity at single-cell resolution across large regions of the nervous system. Models of this neural network activity will necessarily increase in size and complexity, thus increasing the computational cost of simulating them and the challenge of analyzing them. Here we present a novel method to approximate the activity and firing statistics of a general firing rate network model (of Wilson-Cowan type) subject to noisy correlated background inputs. The method requires solving a system of transcendental equations and is fast compared to Monte Carlo simulations of coupled stochastic differential equations. We implement the method with several examples of coupled neural networks and show that the results are quantitatively accurate even with moderate coupling strengths and an appreciable amount of heterogeneity in many parameters. This work should be useful for investigating how various neural attributes qualitatively effect the spiking statistics of coupled neural networks. Matlab code implementing the method is freely available at GitHub (\url{http://github.com/chengly70/FiringRateModReduction}). |
1603.01880 | Jannis Schuecker | Jannis Schuecker, Sven Goedeke and Moritz Helias | Optimal sequence memory in driven random networks | null | Phys. Rev. X 8, 041029 (2018) | 10.1103/PhysRevX.8.041029 | null | q-bio.NC nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous randomly coupled neural networks display a transition to chaos at
a critical coupling strength. We here investigate the effect of a time-varying
input on the onset of chaos and the resulting consequences for information
processing. Dynamic mean-field theory yields the statistics of the activity,
the maximum Lyapunov exponent, and the memory capacity of the network. We find
an exact condition that determines the transition from stable to chaotic
dynamics and the sequential memory capacity in closed form. The input
suppresses chaos by a dynamic mechanism, shifting the transition to
significantly larger coupling strengths than predicted by local stability
analysis. Beyond linear stability, a regime of coexistent locally expansive,
but non-chaotic dynamics emerges that optimizes the capacity of the network to
store sequential input.
| [
{
"created": "Sun, 6 Mar 2016 21:21:03 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Jun 2016 16:29:48 GMT",
"version": "v2"
},
{
"created": "Fri, 22 Sep 2017 17:22:28 GMT",
"version": "v3"
}
] | 2018-11-21 | [
[
"Schuecker",
"Jannis",
""
],
[
"Goedeke",
"Sven",
""
],
[
"Helias",
"Moritz",
""
]
] | Autonomous randomly coupled neural networks display a transition to chaos at a critical coupling strength. We here investigate the effect of a time-varying input on the onset of chaos and the resulting consequences for information processing. Dynamic mean-field theory yields the statistics of the activity, the maximum Lyapunov exponent, and the memory capacity of the network. We find an exact condition that determines the transition from stable to chaotic dynamics and the sequential memory capacity in closed form. The input suppresses chaos by a dynamic mechanism, shifting the transition to significantly larger coupling strengths than predicted by local stability analysis. Beyond linear stability, a regime of coexistent locally expansive, but non-chaotic dynamics emerges that optimizes the capacity of the network to store sequential input. |
2311.12917 | Ethan Kulman | E. Kulman, R. Kuang, Q. Morris | Orchard: building large cancer phylogenies using stochastic
combinatorial search | null | null | null | null | q-bio.PE cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Phylogenies depicting the evolutionary history of genetically heterogeneous
subpopulations of cells from the same cancer, i.e., cancer phylogenies, offer
valuable insights about cancer development and guide treatment strategies. Many
methods exist that reconstruct cancer phylogenies using point mutations
detected with bulk DNA sequencing. However, these methods become inaccurate
when reconstructing phylogenies with more than 30 mutations, or, in some cases,
fail to recover a phylogeny altogether. Here, we introduce Orchard, a cancer
phylogeny reconstruction algorithm that is fast and accurate using up to 1000
mutations. Orchard samples without replacement from a factorized approximation
of the posterior distribution over phylogenies, a novel result derived in this
paper. Each factor in this approximate posterior corresponds to a conditional
distribution for adding a new mutation to a partially built phylogeny. Orchard
optimizes each factor sequentially, generating a sequence of incrementally
larger phylogenies that ultimately culminate in a complete tree containing all
mutations. Our evaluations demonstrate that Orchard outperforms
state-of-the-art cancer phylogeny reconstruction methods in reconstructing more
plausible phylogenies across 90 simulated cancers and 14 B-progenitor acute
lymphoblastic leukemias (B-ALLs). Remarkably, Orchard accurately reconstructs
cancer phylogenies using up to 1,000 mutations. Additionally, we demonstrate
that the large and accurate phylogenies reconstructed by Orchard are useful for
identifying patterns of somatic mutations and genetic variations among distinct
cancer cell subpopulations.
| [
{
"created": "Tue, 21 Nov 2023 18:25:23 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Jul 2024 00:32:49 GMT",
"version": "v2"
}
] | 2024-07-11 | [
[
"Kulman",
"E.",
""
],
[
"Kuang",
"R.",
""
],
[
"Morris",
"Q.",
""
]
] | Phylogenies depicting the evolutionary history of genetically heterogeneous subpopulations of cells from the same cancer, i.e., cancer phylogenies, offer valuable insights about cancer development and guide treatment strategies. Many methods exist that reconstruct cancer phylogenies using point mutations detected with bulk DNA sequencing. However, these methods become inaccurate when reconstructing phylogenies with more than 30 mutations, or, in some cases, fail to recover a phylogeny altogether. Here, we introduce Orchard, a cancer phylogeny reconstruction algorithm that is fast and accurate using up to 1000 mutations. Orchard samples without replacement from a factorized approximation of the posterior distribution over phylogenies, a novel result derived in this paper. Each factor in this approximate posterior corresponds to a conditional distribution for adding a new mutation to a partially built phylogeny. Orchard optimizes each factor sequentially, generating a sequence of incrementally larger phylogenies that ultimately culminate in a complete tree containing all mutations. Our evaluations demonstrate that Orchard outperforms state-of-the-art cancer phylogeny reconstruction methods in reconstructing more plausible phylogenies across 90 simulated cancers and 14 B-progenitor acute lymphoblastic leukemias (B-ALLs). Remarkably, Orchard accurately reconstructs cancer phylogenies using up to 1,000 mutations. Additionally, we demonstrate that the large and accurate phylogenies reconstructed by Orchard are useful for identifying patterns of somatic mutations and genetic variations among distinct cancer cell subpopulations. |
2010.07417 | Adriaan-Alexander Ludl | Adriaan-Alexander Ludl and Tom Michoel | Comparison between instrumental variable and mediation-based methods for
reconstructing causal gene networks in yeast | null | Molecular Omics, 2021,17, 241-251 | 10.1039/D0MO00140F | null | q-bio.MN q-bio.GN stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Causal gene networks model the flow of information within a cell, but
reconstructing them from omics data is challenging because correlation does not
imply causation. Combining genomics and transcriptomics data from a segregating
population allows one to orient the direction of causality between gene expression
traits using genomic variants. Instrumental-variable methods (IV) use a local
expression quantitative trait locus (eQTL) as a randomized instrument for a
gene's expression level, and assign target genes based on distal eQTL
associations. Mediation-based methods (ME) additionally require that distal
eQTL associations are mediated by the source gene. Here we used Findr, a
software providing uniform implementations of IV, ME, and coexpression-based
methods, a recent dataset of 1,012 segregants from a cross between two budding
yeast strains, and the YEASTRACT database of known transcriptional interactions
to compare causal gene network inference methods. We found that causal
inference methods result in a significant overlap with the ground-truth,
whereas coexpression did not perform better than random. A subsampling analysis
revealed that the performance of ME decreases at large sample sizes, due to a
loss of sensitivity when residual correlations become significant. IV methods
contain false positive predictions, due to genomic linkage between eQTL
instruments. IV and ME methods also have complementary roles for identifying
causal genes underlying transcriptional hotspots. IV methods correctly
predicted STB5 targets for a hotspot centred on the transcription factor STB5,
whereas ME failed due to Stb5p auto-regulating its own expression. ME suggests
a new candidate gene, DNM1, for a hotspot on Chr XII, where IV methods could
not distinguish between multiple genes located within the hotspot.
| [
{
"created": "Wed, 14 Oct 2020 22:02:23 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Nov 2020 22:21:57 GMT",
"version": "v2"
}
] | 2021-10-29 | [
[
"Ludl",
"Adriaan-Alexander",
""
],
[
"Michoel",
"Tom",
""
]
] | Causal gene networks model the flow of information within a cell, but reconstructing them from omics data is challenging because correlation does not imply causation. Combining genomics and transcriptomics data from a segregating population allows to orient the direction of causality between gene expression traits using genomic variants. Instrumental-variable methods (IV) use a local expression quantitative trait locus (eQTL) as a randomized instrument for a gene's expression level, and assign target genes based on distal eQTL associations. Mediation-based methods (ME) additionally require that distal eQTL associations are mediated by the source gene. Here we used Findr, a software providing uniform implementations of IV, ME, and coexpression-based methods, a recent dataset of 1,012 segregants from a cross between two budding yeast strains, and the YEASTRACT database of known transcriptional interactions to compare causal gene network inference methods. We found that causal inference methods result in a significant overlap with the ground-truth, whereas coexpression did not perform better than random. A subsampling analysis revealed that the performance of ME decreases at large sample sizes, due to a loss of sensitivity when residual correlations become significant. IV methods contain false positive predictions, due to genomic linkage between eQTL instruments. IV and ME methods also have complementary roles for identifying causal genes underlying transcriptional hotspots. IV methods correctly predicted STB5 targets for a hotspot centred on the transcription factor STB5, whereas ME failed due to Stb5p auto-regulating its own expression. ME suggests a new candidate gene, DNM1, for a hotspot on Chr XII, where IV methods could not distinguish between multiple genes located within the hotspot. |
1807.04788 | Ethan Romero-Severson | Dmitry Gromov, Ingo Bulla, Ethan Obie Romero-Severson | Systematic evaluation of the population-level effects of alternative
treatment strategies on the basic reproduction number | null | null | 10.1016/j.jtbi.2018.11.029 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | An approach to estimate the influence of the treatment-type controls on the
basic reproduction number, $R_0$, is proposed and elaborated. The presented
approach allows one to estimate the effect of a given treatment strategy or to
compare a number of different treatment strategies on the basic reproduction
number. All our results are valid for sufficiently small values of the control.
However, in many cases it is possible to extend this analysis to larger values
of the control, as illustrated by the examples.
| [
{
"created": "Thu, 12 Jul 2018 18:54:32 GMT",
"version": "v1"
}
] | 2023-05-30 | [
[
"Gromov",
"Dmitry",
""
],
[
"Bulla",
"Ingo",
""
],
[
"Romero-Severson",
"Ethan Obie",
""
]
] | An approach to estimate the influence of the treatment-type controls on the basic reproduction number, R 0 , is proposed and elaborated. The presented approach allows one to estimate the effect of a given treatment strategy or to compare a number of different treatment strategies on the basic reproduction number. All our results are valid for sufficiently small values of the control. However, in many cases it is possible to extend this analysis to larger values of the control as was illustrated by examples. |
2010.09483 | Jeremy Georges-Filteau | Jeremy Georges-Filteau, Richard C. Hamelin and Mathieu Blanchette | Mycorrhiza: Genotype Assignment using Phylogenetic Networks | null | Bioinformatics, Volume 36, Issue 1, 1 January 2020 | 10.1093/bioinformatics/btz476 | null | q-bio.QM cs.LG q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: The genotype assignment problem consists of predicting, from the
genotype of an individual, which of a known set of populations it originated
from. The problem arises in a variety of contexts, including wildlife
forensics, invasive species detection and biodiversity monitoring. Existing
approaches perform well under ideal conditions but are sensitive to a variety
of common violations of the assumptions they rely on. Results: In this article,
we introduce Mycorrhiza, a machine learning approach for the genotype
assignment problem. Our algorithm makes use of phylogenetic networks to
engineer features that encode the evolutionary relationships among samples.
Those features are then used as input to a Random Forests classifier. The
classification accuracy was assessed on multiple published empirical SNP,
microsatellite or consensus sequence datasets with wide ranges of size,
geographical distribution and population structure and on simulated datasets.
It compared favorably against widely used assignment tests or mixture analysis
methods such as STRUCTURE and Admixture, and against another machine-learning
based approach using principal component analysis for dimensionality reduction.
Mycorrhiza yields particularly significant gains on datasets with a large
average fixation index (FST) or deviation from the Hardy-Weinberg equilibrium.
Moreover, the phylogenetic network approach estimates mixture proportions with
good accuracy.
| [
{
"created": "Wed, 14 Oct 2020 02:36:27 GMT",
"version": "v1"
}
] | 2020-10-20 | [
[
"Georges-Filteau",
"Jeremy",
""
],
[
"Hamelin",
"Richard C.",
""
],
[
"Blanchette",
"Mathieu",
""
]
] | Motivation The genotype assignment problem consists of predicting, from the genotype of an individual, which of a known set of populations it originated from. The problem arises in a variety of contexts, including wildlife forensics, invasive species detection and biodiversity monitoring. Existing approaches perform well under ideal conditions but are sensitive to a variety of common violations of the assumptions they rely on. Results In this article, we introduce Mycorrhiza, a machine learning approach for the genotype assignment problem. Our algorithm makes use of phylogenetic networks to engineer features that encode the evolutionary relationships among samples. Those features are then used as input to a Random Forests classifier. The classification accuracy was assessed on multiple published empirical SNP, microsatellite or consensus sequence datasets with wide ranges of size, geographical distribution and population structure and on simulated datasets. It compared favorably against widely used assessment tests or mixture analysis methods such as STRUCTURE and Admixture, and against another machine-learning based approach using principal component analysis for dimensionality reduction. Mycorrhiza yields particularly significant gains on datasets with a large average fixation index (FST) or deviation from the Hardy-Weinberg equilibrium. Moreover, the phylogenetic network approach estimates mixture proportions with good accuracy. |
1206.4386 | Liane Gabora | Liane Gabora | An Evolutionary Framework for Culture: Selectionism versus Communal
Exchange | 18 pages; 2 tables and 11 figures embedded in text | Physics of Life Reviews, 10(2), 117-145 (2013) | 10.1016/j.plrev.2013.03.006 | null | q-bio.PE nlin.AO q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dawkins' replicator-based conception of evolution has led to widespread
mis-application of selectionism across the social sciences because it does not
address the paradox that inspired the theory of natural selection in the first
place: how do organisms accumulate change when traits acquired over their
lifetime are obliterated? This is addressed by von Neumann's concept of a
self-replicating automaton (SRA). An SRA consists of a self-assembly code that
is used in two distinct ways: (1) actively deciphered during development to
construct a self-similar replicant, and (2) passively copied to the replicant
to ensure that it can reproduce. Information that is acquired over a lifetime
is not transmitted to offspring, whereas information that is inherited during
copying is transmitted. In cultural evolution there is no mechanism for
discarding acquired change. Acquired change can accumulate orders of magnitude
faster than, and quickly overwhelm, inherited change due to differential
replication of variants in response to selection. This prohibits a selectionist
but not an evolutionary framework for culture. Recent work on the origin of
life suggests that early life evolved through a non-Darwinian process referred
to as communal exchange that does not involve a self-assembly code, and that
natural selection emerged from this more haphazard, ancestral evolutionary
process. It is proposed that communal exchange provides a more appropriate
evolutionary framework for culture than selectionism. This is supported by a
computational model of cultural evolution and a network-based program for
documenting material cultural history, and it is consistent with high levels of
human cooperation.
| [
{
"created": "Wed, 20 Jun 2012 05:38:06 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Aug 2013 17:06:46 GMT",
"version": "v2"
},
{
"created": "Sun, 30 Jun 2019 01:51:37 GMT",
"version": "v3"
}
] | 2019-07-02 | [
[
"Gabora",
"Liane",
""
]
] | Dawkins' replicator-based conception of evolution has led to widespread mis-application selectionism across the social sciences because it does not address the paradox that inspired the theory of natural selection in the first place: how do organisms accumulate change when traits acquired over their lifetime are obliterated? This is addressed by von Neumann's concept of a self-replicating automaton (SRA). A SRA consists of a self-assembly code that is used in two distinct ways: (1) actively deciphered during development to construct a self-similar replicant, and (2) passively copied to the replicant to ensure that it can reproduce. Information that is acquired over a lifetime is not transmitted to offspring, whereas information that is inherited during copying is transmitted. In cultural evolution there is no mechanism for discarding acquired change. Acquired change can accumulate orders of magnitude faster than, and quickly overwhelm, inherited change due to differential replication of variants in response to selection. This prohibits a selectionist but not an evolutionary framework for culture. Recent work on the origin of life suggests that early life evolved through a non-Darwinian process referred to as communal exchange that does not involve a self-assembly code, and that natural selection emerged from this more haphazard, ancestral evolutionary process. It is proposed that communal exchange provides a more appropriate evolutionary framework for culture than selectionism. This is supported by a computational model of cultural evolution and a network-based program for documenting material cultural history, and it is consistent with high levels of human cooperation. |
1607.05386 | Jin Xu | Dong-Ho Park, Taegeun Song, Danh-Tai Hoang, Jin Xu, Junghyo Jo | A Local Counter-Regulatory Motif Modulates the Global Phase of Hormonal
Oscillations | null | null | 10.1038/s41598-017-01806-0 | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Counter-regulatory elements maintain dynamic equilibrium ubiquitously in
living systems. The most prominent example, which is critical to mammalian
survival, is that of pancreatic {\alpha} and {\beta} cells producing glucagon
and insulin for glucose homeostasis. These cells are not found in a single
gland but are dispersed in multiple micro-organs known as the islets of
Langerhans. Within an islet, these two reciprocal cell types interact with each
other and with an additional cell type: the {\delta} cell. By testing all
possible motifs governing the interactions of these three cell types, we found
that a unique set of positive/negative intra-islet interactions between
different islet cell types functions not only to reduce the superficially
wasteful zero-sum action of glucagon and insulin but also to enhance/suppress
the synchronization of hormone secretions between islets under high/normal
glucose conditions. This anti-symmetric interaction motif confers effective
controllability for network (de)synchronization.
| [
{
"created": "Tue, 19 Jul 2016 02:58:20 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Feb 2024 01:10:55 GMT",
"version": "v2"
}
] | 2024-02-14 | [
[
"Park",
"Dong-Ho",
""
],
[
"Song",
"Taegeun",
""
],
[
"Hoang",
"Danh-Tai",
""
],
[
"Xu",
"Jin",
""
],
[
"Jo",
"Junghyo",
""
]
] | Counter-regulatory elements maintain dynamic equilibrium ubiquitously in living systems. The most prominent example, which is critical to mammalian survival, is that of pancreatic {\alpha} and {\beta} cells producing glucagon and insulin for glucose homeostasis. These cells are not found in a single gland but are dispersed in multiple micro-organs known as the islets of Langerhans. Within an islet, these two reciprocal cell types interact with each other and with an additional cell type: the {\delta} cell. By testing all possible motifs governing the interactions of these three cell types, we found that a unique set of positive/negative intra-islet interactions between different islet cell types functions not only to reduce the superficially wasteful zero-sum action of glucagon and insulin but also to enhance/suppress the synchronization of hormone secretions between islets under high/normal glucose conditions. This anti-symmetric interaction motif confers effective controllability for network (de)synchronization. |
2310.04463 | Siyuan Guo | Siyuan Guo and Jihong Guan and Shuigeng Zhou | Diffusing on Two Levels and Optimizing for Multiple Properties: A Novel
Approach to Generating Molecules with Desirable Properties | null | null | null | null | q-bio.BM cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the past decade, Artificial Intelligence driven drug design and discovery
has been a hot research topic, where an important branch is molecule generation
by generative models, from GAN-based models and VAE-based models to the latest
diffusion-based models. However, most existing models pursue only basic
properties such as the validity and uniqueness of the generated molecules, and
only a few go further to explicitly optimize a single important molecular
property (e.g. QED or PlogP), which leaves most generated molecules of little
use in practice. In this paper, we present a novel approach to generating molecules
with desirable properties, which expands the diffusion model framework with
multiple innovative designs. The novelty is two-fold. On the one hand,
considering that the structures of molecules are complex and diverse, and
molecular properties are usually determined by some substructures (e.g.
pharmacophores), we propose to perform diffusion on two structural levels:
molecules and molecular fragments respectively, with which a mixed Gaussian
distribution is obtained for the reverse diffusion process. To get desirable
molecular fragments, we develop a novel electronic effect based fragmentation
method. On the other hand, we introduce two ways to explicitly optimize
multiple molecular properties under the diffusion model framework. First, as
potential drug molecules must be chemically valid, we optimize molecular
validity by an energy-guidance function. Second, since potential drug molecules
should be desirable in various properties, we employ a multi-objective
mechanism to optimize multiple molecular properties simultaneously. Extensive
experiments with two benchmark datasets QM9 and ZINC250k show that the
molecules generated by our proposed method have better validity, uniqueness,
novelty, Fr\'echet ChemNet Distance (FCD), QED, and PlogP than those generated
by current SOTA models.
| [
{
"created": "Thu, 5 Oct 2023 11:43:21 GMT",
"version": "v1"
}
] | 2023-10-10 | [
[
"Guo",
"Siyuan",
""
],
[
"Guan",
"Jihong",
""
],
[
"Zhou",
"Shuigeng",
""
]
] | In the past decade, Artificial Intelligence driven drug design and discovery has been a hot research topic, where an important branch is molecule generation by generative models, from GAN-based models and VAE-based models to the latest diffusion-based models. However, most existing models pursue only the basic properties like validity and uniqueness of the generated molecules, a few go further to explicitly optimize one single important molecular property (e.g. QED or PlogP), which makes most generated molecules little usefulness in practice. In this paper, we present a novel approach to generating molecules with desirable properties, which expands the diffusion model framework with multiple innovative designs. The novelty is two-fold. On the one hand, considering that the structures of molecules are complex and diverse, and molecular properties are usually determined by some substructures (e.g. pharmacophores), we propose to perform diffusion on two structural levels: molecules and molecular fragments respectively, with which a mixed Gaussian distribution is obtained for the reverse diffusion process. To get desirable molecular fragments, we develop a novel electronic effect based fragmentation method. On the other hand, we introduce two ways to explicitly optimize multiple molecular properties under the diffusion model framework. First, as potential drug molecules must be chemically valid, we optimize molecular validity by an energy-guidance function. Second, since potential drug molecules should be desirable in various properties, we employ a multi-objective mechanism to optimize multiple molecular properties simultaneously. Extensive experiments with two benchmark datasets QM9 and ZINC250k show that the molecules generated by our proposed method have better validity, uniqueness, novelty, Fr\'echet ChemNet Distance (FCD), QED, and PlogP than those generated by current SOTA models. |
1105.4387 | Kazuhiro Takemoto | Kazuhiro Takemoto | Global architecture of metabolite distributions across species and its
formation mechanisms | 14 pages, 5 figures | Biosystems 100, 8 (2010) | 10.1016/j.biosystems.2009.12.002 | null | q-bio.PE physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Living organisms produce metabolites of many types via their metabolisms.
Flavonoids, a class of secondary metabolites produced by plant species, are
particularly interesting examples. Since plant species are believed to have
specific flavonoids with respect to diverse environments, elucidation of the design
principles of metabolite distributions across plant species is important to
understand metabolite diversity and plant evolution. In the previous work, we
found heterogeneous connectivity in metabolite distributions, and proposed a
simple model to explain a possible origin of heterogeneous connectivity. In
this paper, we show further structural properties in the metabolite
distribution among families inspired by analogy with plant-animal mutualistic
networks: nested structure and modular structure. An earlier model represents
that these structural properties in bipartite relationships are determined
based on traits of elements and external factors. However, we find that the
architecture of metabolite distributions is described by simple evolution
processes without trait-based mechanisms by comparison between our model and
the earlier model. Our model can better predict nested structure and modular
structure in addition to heterogeneous connectivity both qualitatively and
quantitatively. This finding implies an alternative possible origin of these
structural properties, and suggests simpler formation mechanisms of metabolite
distributions across plant species than expected.
| [
{
"created": "Mon, 23 May 2011 02:16:23 GMT",
"version": "v1"
}
] | 2015-03-19 | [
[
"Takemoto",
"Kazuhiro",
""
]
] | Living organisms produce metabolites of many types via their metabolisms. Especially, flavonoids, a kind of secondary metabolites, of plant species are interesting examples. Since plant species are believed to have specific flavonoids with respect to diverse environment, elucidation of design principles of metabolite distributions across plant species is important to understand metabolite diversity and plant evolution. In the previous work, we found heterogeneous connectivity in metabolite distributions, and proposed a simple model to explain a possible origin of heterogeneous connectivity. In this paper, we show further structural properties in the metabolite distribution among families inspired by analogy with plant-animal mutualistic networks: nested structure and modular structure. An earlier model represents that these structural properties in bipartite relationships are determined based on traits of elements and external factors. However, we find that the architecture of metabolite distributions is described by simple evolution processes without trait-based mechanisms by comparison between our model and the earlier model. Our model can better predict nested structure and modular structure in addition to heterogeneous connectivity both qualitatively and quantitatively. This finding implies an alternative possible origin of these structural properties, and suggests simpler formation mechanisms of metabolite distributions across plant species than expected. |
1301.4298 | Shuji Ishihara | S. Ishihara, K. Sugimura, S.J. Cox, I. Bonnet, Y. Bellaiche, and F.
Graner | Comparative study of non-invasive force and stress inference methods in
tissue | 12 pages, 8 figures, EPJ E: Topical issue on "Physical constraints on
morphogenesis and evolution" | null | null | null | q-bio.QM cond-mat.soft q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the course of animal development, the shape of tissue emerges in part from
mechanical and biochemical interactions between cells. Measuring stress in
tissue is essential for studying morphogenesis and its physical constraints.
Experimental measurements of stress reported thus far have been invasive,
indirect, or local. One theoretical approach is force inference from cell
shapes and connectivity, which is non-invasive, can provide a space-time map of
stress and relies on prefactors. Here, to validate force-inference methods, we
performed a comparative study of them. Three force-inference methods, which
differ in their approach of treating indefiniteness in an inverse problem
between cell shapes and forces, were tested by using two artificial and two
experimental data sets. Our results using different datasets consistently
indicate that our Bayesian force inference, by which cell-junction tensions and
cell pressures are simultaneously estimated, performs best in terms of accuracy
and robustness. Moreover, by measuring the stress anisotropy and relaxation, we
cross-validated the force inference and the global annular ablation of tissue,
each of which relies on different prefactors. A practical choice of
force-inference methods in distinct systems of interest is discussed.
| [
{
"created": "Fri, 18 Jan 2013 05:12:51 GMT",
"version": "v1"
}
] | 2013-01-21 | [
[
"Ishihara",
"S.",
""
],
[
"Sugimura",
"K.",
""
],
[
"Cox",
"S. J.",
""
],
[
"Bonnet",
"I.",
""
],
[
"Bellaiche",
"Y.",
""
],
[
"Graner",
"F.",
""
]
] | In the course of animal development, the shape of tissue emerges in part from mechanical and biochemical interactions between cells. Measuring stress in tissue is essential for studying morphogenesis and its physical constraints. Experimental measurements of stress reported thus far have been invasive, indirect, or local. One theoretical approach is force inference from cell shapes and connectivity, which is non-invasive, can provide a space-time map of stress and relies on prefactors. Here, to validate force-inference methods, we performed a comparative study of them. Three force-inference methods, which differ in their approach of treating indefiniteness in an inverse problem between cell shapes and forces, were tested by using two artificial and two experimental data sets. Our results using different datasets consistently indicate that our Bayesian force inference, by which cell-junction tensions and cell pressures are simultaneously estimated, performs best in terms of accuracy and robustness. Moreover, by measuring the stress anisotropy and relaxation, we cross-validated the force inference and the global annular ablation of tissue, each of which relies on different prefactors. A practical choice of force-inference methods in distinct systems of interest is discussed.
1702.08421 | Ines Samengo Dr. | Ines Samengo | The role of the observer in goal-directed behavior | 11 pages, 5 figures. Essay submitted to FQXi | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In goal-directed behavior, a large number of possible initial states end up
in the pursued goal. The accompanying information loss implies that
goal-oriented behavior is in one-to-one correspondence with an open subsystem
whose entropy decreases in time. Yet ultimately, the laws of physics are
reversible, so systems capable of yielding goal-directed behavior must transfer
the information about initial conditions to other degrees of freedom outside
the boundaries of the agent. To operate steadily, they must consume ordered
degrees of freedom provided as input, and be dispensed of disordered outputs
that act as wastes from the point of view of the aimed objective. Here I argue
that a physical system may or may not display goal-directed behavior depending
on what exactly is defined as the agent. The borders of the agent must be
carefully tailored so as to entail the appropriate information balance sheet.
In this game, observers play the role of tailors: They design agents by setting
the limits of the system of interest. Brain-guided subjects perform this
creative observation task naturally, implying that the observation of
goal-oriented behavior is a goal-oriented behavior in itself. Minds evolved to
cut out pieces of reality and endow them with intentionality, because ascribing
intentionality is an efficient way of modeling the world, and making
predictions. One most remarkable agent of whom we have indisputable evidence of
its goal-pursuing attitude is the self. Notably, this agent is simultaneously
the subject and the object of observation.
| [
{
"created": "Mon, 27 Feb 2017 18:27:48 GMT",
"version": "v1"
}
] | 2017-02-28 | [
[
"Samengo",
"Ines",
""
]
] | In goal-directed behavior, a large number of possible initial states end up in the pursued goal. The accompanying information loss implies that goal-oriented behavior is in one-to-one correspondence with an open subsystem whose entropy decreases in time. Yet ultimately, the laws of physics are reversible, so systems capable of yielding goal-directed behavior must transfer the information about initial conditions to other degrees of freedom outside the boundaries of the agent. To operate steadily, they must consume ordered degrees of freedom provided as input, and be dispensed of disordered outputs that act as wastes from the point of view of the aimed objective. Here I argue that a physical system may or may not display goal-directed behavior depending on what exactly is defined as the agent. The borders of the agent must be carefully tailored so as to entail the appropriate information balance sheet. In this game, observers play the role of tailors: They design agents by setting the limits of the system of interest. Brain-guided subjects perform this creative observation task naturally, implying that the observation of goal-oriented behavior is a goal-oriented behavior in itself. Minds evolved to cut out pieces of reality and endow them with intentionality, because ascribing intentionality is an efficient way of modeling the world, and making predictions. One most remarkable agent of whom we have indisputable evidence of its goal-pursuing attitude is the self. Notably, this agent is simultaneously the subject and the object of observation. |
2408.01528 | KongFatt Wong-Lin | Abdoreza Asadpour and KongFatt Wong-Lin | Can multivariate Granger causality detect directed connectivity of a
multistable and dynamic biological decision network model? | null | null | null | null | q-bio.NC cs.LG cs.NE math.DS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Extracting causal connections can advance interpretable AI and machine
learning. Granger causality (GC) is a robust statistical method for estimating
directed influences (DC) between signals. While GC has been widely applied to
analysing neuronal signals in biological neural networks and other domains, its
application to complex, nonlinear, and multistable neural networks is less
explored. In this study, we applied time-domain multi-variate Granger causality
(MVGC) to the time series neural activity of all nodes in a trained multistable
biologically based decision neural network model with real-time decision
uncertainty monitoring. Our analysis demonstrated that challenging two-choice
decisions, where input signals could be closely matched, and the appropriate
application of fine-grained sliding time windows, could readily reveal the
original model's DC. Furthermore, the identified DC varied based on whether the
network had correct or error decisions. Integrating the identified DC from
different decision outcomes recovered most of the original model's
architecture, despite some spurious and missing connectivity. This approach
could be used as an initial exploration to enhance the interpretability and
transparency of dynamic multistable and nonlinear biological or AI systems by
revealing causal connections throughout different phases of neural network
dynamics and outcomes.
| [
{
"created": "Fri, 2 Aug 2024 18:40:15 GMT",
"version": "v1"
}
] | 2024-08-06 | [
[
"Asadpour",
"Abdoreza",
""
],
[
"Wong-Lin",
"KongFatt",
""
]
] | Extracting causal connections can advance interpretable AI and machine learning. Granger causality (GC) is a robust statistical method for estimating directed influences (DC) between signals. While GC has been widely applied to analysing neuronal signals in biological neural networks and other domains, its application to complex, nonlinear, and multistable neural networks is less explored. In this study, we applied time-domain multi-variate Granger causality (MVGC) to the time series neural activity of all nodes in a trained multistable biologically based decision neural network model with real-time decision uncertainty monitoring. Our analysis demonstrated that challenging two-choice decisions, where input signals could be closely matched, and the appropriate application of fine-grained sliding time windows, could readily reveal the original model's DC. Furthermore, the identified DC varied based on whether the network had correct or error decisions. Integrating the identified DC from different decision outcomes recovered most of the original model's architecture, despite some spurious and missing connectivity. This approach could be used as an initial exploration to enhance the interpretability and transparency of dynamic multistable and nonlinear biological or AI systems by revealing causal connections throughout different phases of neural network dynamics and outcomes. |
1407.1333 | Lennaert van Veen | Lennaert van Veen and Kevin Green | Periodic solutions to a mean-field model for electrocortical activity | 9 pages, 5 figures | null | 10.1140/epjst/e2014-02311-y | null | q-bio.NC nlin.PS physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a continuum model of electrical signals in the human cortex,
which takes the form of a system of semilinear, hyperbolic partial differential
equations for the inhibitory and excitatory membrane potentials and the
synaptic inputs. The coupling of these components is represented by sigmoidal
and quadratic nonlinearities. We consider these equations on a square domain
with periodic boundary conditions, in the vicinity of the primary transition
from a stable equilibrium to time-periodic motion through an equivariant Hopf
bifurcation. We compute part of a family of standing wave solutions, emanating
from this point.
| [
{
"created": "Fri, 4 Jul 2014 21:24:53 GMT",
"version": "v1"
}
] | 2015-06-22 | [
[
"van Veen",
"Lennaert",
""
],
[
"Green",
"Kevin",
""
]
] | We consider a continuum model of electrical signals in the human cortex, which takes the form of a system of semilinear, hyperbolic partial differential equations for the inhibitory and excitatory membrane potentials and the synaptic inputs. The coupling of these components is represented by sigmoidal and quadratic nonlinearities. We consider these equations on a square domain with periodic boundary conditions, in the vicinity of the primary transition from a stable equilibrium to time-periodic motion through an equivariant Hopf bifurcation. We compute part of a family of standing wave solutions, emanating from this point. |
2205.02365 | Joshua Stevenson | Joshua Stevenson, Barbara Holland, Michael Charleston, Jeremy Sumner | Evaluation of the relative performance of the subflattenings method for
phylogenetic inference | 21 pages, 13 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The algebraic properties of flattenings and subflattenings provide direct
methods for identifying edges in the true phylogeny -- and by extension the
complete tree -- using pattern counts from a sequence alignment. The relatively
small number of possible internal edges among a set of taxa (compared to the
number of binary trees) makes these methods attractive, however more could be
done to evaluate their effectiveness for inferring phylogenetic trees. This is
the case particularly for subflattenings, and our work makes progress in this
area. We introduce software for constructing and evaluating subflattenings for
splits, utilising a number of methods to make computing subflattenings more
tractable. We then present the results of simulations we have performed in
order to compare the effectiveness of subflattenings to that of flattenings in
terms of split score distributions, and susceptibility to possible biases. We
find that subflattenings perform similarly to flattenings in terms of the
distribution of split scores on the trees we examined, but may be less affected
by bias arising from both split size/balance and long branch attraction. These
insights are useful for developing effective algorithms to utilise these tools
for the purpose of inferring phylogenetic trees.
| [
{
"created": "Wed, 4 May 2022 23:57:04 GMT",
"version": "v1"
}
] | 2022-05-06 | [
[
"Stevenson",
"Joshua",
""
],
[
"Holland",
"Barbara",
""
],
[
"Charleston",
"Michael",
""
],
[
"Sumner",
"Jeremy",
""
]
] | The algebraic properties of flattenings and subflattenings provide direct methods for identifying edges in the true phylogeny -- and by extension the complete tree -- using pattern counts from a sequence alignment. The relatively small number of possible internal edges among a set of taxa (compared to the number of binary trees) makes these methods attractive, however more could be done to evaluate their effectiveness for inferring phylogenetic trees. This is the case particularly for subflattenings, and our work makes progress in this area. We introduce software for constructing and evaluating subflattenings for splits, utilising a number of methods to make computing subflattenings more tractable. We then present the results of simulations we have performed in order to compare the effectiveness of subflattenings to that of flattenings in terms of split score distributions, and susceptibility to possible biases. We find that subflattenings perform similarly to flattenings in terms of the distribution of split scores on the trees we examined, but may be less affected by bias arising from both split size/balance and long branch attraction. These insights are useful for developing effective algorithms to utilise these tools for the purpose of inferring phylogenetic trees. |
2403.15685 | Iheanyi Okonko Okonko | David Nwachukwu, Edith Nnenna Oketah, Chineze Helen Ugwu, Hope Chioma
Innocent-Adiele, Chisom Chimbundum Adim, Euslar Nnenna Onu, Ann Onyinyechi
Chukwu, Grace Aghaji Nwankwo, Mary Uche Igwe, Phillip O. Okerentugba and
Iheanyi Omezuruike Okonko | Semi-Quantitative Analysis and Seroepidemiological Evidence of Past
Dengue Virus Infection among HIV-infected patients in Onitsha, Anambra State,
Nigeria | null | null | null | null | q-bio.PE q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Despite its endemic nature as well as the recent outbreaks, information on
the opportunistic DENV in Anambra state has been sparse. This study thus aimed
to give seroepidemiological evidence of past dengue virus infection among
HIV-infected patients in Onitsha, Anambra State, Nigeria. Plasma from 94
HIV-infected patients who were attending Saint Charles Borromeo Hospital,
Onitsha in Anambra State, Nigeria was tested for IgG antibodies specific to the
dengue virus by IgG ELISA assay. The prevalence of past dengue virus infection
was 61.7% (n = 58/94). This study showed age group 0-15 years (77.30%), female
gender (65.1%), married (63.9%) and no formal level (100.0 %) as the highest
seropositivity among the study participants. In terms of immunological and
virological markers, greater IgG seroprevalence was observed in individuals
with a viral load of <40 copies/ml (64.0%) and a CD4 count of >350 cells/ul
(63.2%). The high IgG seropositivity of Dengue Virus (DENV) among HIV-infected
individuals in Onitsha is cause for concern.
| [
{
"created": "Sat, 23 Mar 2024 02:21:50 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Nwachukwu",
"David",
""
],
[
"Oketah",
"Edith Nnenna",
""
],
[
"Ugwu",
"Chineze Helen",
""
],
[
"Innocent-Adiele",
"Hope Chioma",
""
],
[
"Adim",
"Chisom Chimbundum",
""
],
[
"Onu",
"Euslar Nnenna",
""
],
[
"Chukwu",
"Ann Onyinyechi",
""
],
[
"Nwankwo",
"Grace Aghaji",
""
],
[
"Igwe",
"Mary Uche",
""
],
[
"Okerentugba",
"Phillip O.",
""
],
[
"Okonko",
"Iheanyi Omezuruike",
""
]
] | Despite its endemic nature as well as the recent outbreaks, information on the opportunistic DENV in Anambra state has been sparse. This study thus aimed to give seroepidemiological evidence of past dengue virus infection among HIV-infected patients in Onitsha, Anambra State, Nigeria. Plasma from 94 HIV-infected patients who were attending Saint Charles Borromeo Hospital, Onitsha in Anambra State, Nigeria was tested for IgG antibodies specific to the dengue virus by IgG ELISA assay. The prevalence of past dengue virus infection was 61.7% (n = 58/94). This study showed age group 0-15 years (77.30%), female gender (65.1%), married (63.9%) and no formal level (100.0 %) as the highest seropositivity among the study participants. In terms of immunological and virological markers, greater IgG seroprevalence was observed in individuals with a viral load of <40 copies/ml (64.0%) and a CD4 count of >350 cells/ul (63.2%). The high IgG seropositivity of Dengue Virus (DENV) among HIV-infected individuals in Onitsha is cause for concern.