id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2205.10563 | Hideaki Yamamoto PhD | Hideaki Yamamoto, F. Paul Spitzner, Taiki Takemuro, Victor Buend\'ia,
Carla Morante, Tomohiro Konno, Shigeo Sato, Ayumi Hirano-Iwata, Viola
Priesemann, Miguel A. Mu\~noz, Johannes Zierenberg, Jordi Soriano | Modular architecture facilitates noise-driven control of synchrony in
neuronal networks | 23 pages, 5 figures | Sci. Adv. 9, eade1755 (2023) | 10.1126/sciadv.ade1755 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Brain functions require both segregated processing of information in
specialized circuits and integration across circuits to perform
high-level information processing. One possible way to implement these
seemingly opposing demands is by flexibly switching between synchronous and
less synchronous states. Understanding how complex synchronization patterns are
controlled by the interaction of network architecture and external
perturbations is thus a central challenge in neuroscience, but the mechanisms
behind such interactions remain elusive. Here, we utilise precision
neuroengineering to manipulate cultured neuronal networks and show that a
modular architecture facilitates desynchronization upon asynchronous
stimulation, making external noise a control parameter of synchrony. Using
spiking neuron models, we then demonstrate that external noise can reduce the
level of available synaptic resources, which makes intermodular interactions
more stochastic and thereby facilitates the breakdown of synchrony. Finally,
the phenomenology of stochastic intermodular interactions is formulated into a
mesoscopic model that incorporates a state-dependent gating mechanism for
signal propagation. Taken together, our results demonstrate a network mechanism
by which asynchronous inputs tune the inherent dynamical state in structured
networks of excitable units.
| [
{
"created": "Sat, 21 May 2022 11:05:01 GMT",
"version": "v1"
}
] | 2023-08-29 | [
[
"Yamamoto",
"Hideaki",
""
],
[
"Spitzner",
"F. Paul",
""
],
[
"Takemuro",
"Taiki",
""
],
[
"Buendía",
"Victor",
""
],
[
"Morante",
"Carla",
""
],
[
"Konno",
"Tomohiro",
""
],
[
"Sato",
"Shigeo",
""
],
[... | Brain functions require both segregated processing of information in specialized circuits, as well as integration across circuits to perform high-level information processing. One possible way to implement these seemingly opposing demands is by flexibly switching between synchronous and less synchronous states. Understanding how complex synchronization patterns are controlled by the interaction of network architecture and external perturbations is thus a central challenge in neuroscience, but the mechanisms behind such interactions remain elusive. Here, we utilise precision neuroengineering to manipulate cultured neuronal networks and show that a modular architecture facilitates desynchronization upon asynchronous stimulation, making external noise a control parameter of synchrony. Using spiking neuron models, we then demonstrate that external noise can reduce the level of available synaptic resources, which make intermodular interactions more stochastic and thereby facilitates the breakdown of synchrony. Finally, the phenomenology of stochastic intermodular interactions is formulated into a mesoscopic model that incorporates a state-dependent gating mechanism for signal propagation. Taken together, our results demonstrate a network mechanism by which asynchronous inputs tune the inherent dynamical state in structured networks of excitable units. |
2403.12827 | Manda Riehl | Qiuyun Li, Manda Riehl | Predicting the stability of profiling signals of small RNAs | 14 pages | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Profiling is a process that finds similarities between different RNA
secondary structures by extracting signals from the Boltzmann sampling. The
reproducibility of profiling can be identified by the standard deviation of the
number of features among Boltzmann samples. We found a strong relationship
between the frequency of each helix class and its standard deviation of the
frequency upon repeated Boltzmann sampling. We developed a perturbation
technique to predict the stability of these featured helix classes without the
need for repeated Boltzmann sampling, with accuracy between 84% and 94%,
depending on the type of RNA. Our technique requires only 0.2% of the
computation time of one profiling process.
| [
{
"created": "Tue, 19 Mar 2024 15:31:54 GMT",
"version": "v1"
}
] | 2024-03-20 | [
[
"Li",
"Qiuyun",
""
],
[
"Riehl",
"Manda",
""
]
] | Profiling is a process that finds similarities between different RNA secondary structures by extracting signals from the Boltzmann sampling. The reproducibility of profiling can be identified by the standard deviation of number of features among Boltzmann samples. We found a strong relationship between the frequency of each helix class and its standard deviation of the frequency upon repeated Boltzmann sampling. We developed a perturbation technique to predict the stability of these featured helix classes without the need for repeated Boltzmann sampling, with accuracy between 84% and 94%, depending on the type of RNA. Our technique only requires 0.2% of the computation time compared to one profiling process. |
2201.12398 | Ryan Renslow | Monee Y. McGrady, Sean M. Colby, Jamie R Nu\~nez, Ryan S. Renslow,
Thomas O. Metz | AI for Chemical Space Gap Filling and Novel Compound Generation | null | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | When considering large sets of molecules, it is helpful to place them in the
context of a "chemical space" - a multidimensional space defined by a set of
descriptors that can be used to visualize and analyze compound grouping as well
as identify regions that might be void of valid structures. The chemical space
of all possible molecules in a given biological or environmental sample can be
vast and largely unexplored, mainly due to current limitations in processing of
'big data' by brute force methods (e.g., enumeration of all possible compounds
in a space). Recent advances in artificial intelligence (AI) have led to
multiple new cheminformatics tools that incorporate AI techniques to
characterize and learn the structure and properties of molecules in order to
generate plausible compounds, thereby contributing to more accessible and
explorable regions of chemical space without the need for brute force methods.
We have used one such tool, the deep-learning software DarkChem, which
learns a representation of the molecular structure of compounds by compressing
them into a latent space. With DarkChem's design, distance in this latent space
is often associated with compound similarity, making sparse regions interesting
targets for compound generation due to the possibility of generating novel
compounds. In this study, we used 1 million small molecules (less than 1000 Da)
to create a representative chemical space (defined by calculated molecular
properties) of all small molecules. We identified regions with few or no
compounds and investigated their location in DarkChem's latent space. From
these spaces, we generated 694,645 valid molecules, all of which represent
molecules not found in any chemical database to date. These molecules filled
50.8% of the probed empty spaces in molecular property space. Generated
molecules are provided in the supporting information.
| [
{
"created": "Fri, 28 Jan 2022 20:08:24 GMT",
"version": "v1"
}
] | 2022-02-01 | [
[
"McGrady",
"Monee Y.",
""
],
[
"Colby",
"Sean M.",
""
],
[
"Nuñez",
"Jamie R",
""
],
[
"Renslow",
"Ryan S.",
""
],
[
"Metz",
"Thomas O.",
""
]
] | When considering large sets of molecules, it is helpful to place them in the context of a "chemical space" - a multidimensional space defined by a set of descriptors that can be used to visualize and analyze compound grouping as well as identify regions that might be void of valid structures. The chemical space of all possible molecules in a given biological or environmental sample can be vast and largely unexplored, mainly due to current limitations in processing of 'big data' by brute force methods (e.g., enumeration of all possible compounds in a space). Recent advances in artificial intelligence (AI) have led to multiple new cheminformatics tools that incorporate AI techniques to characterize and learn the structure and properties of molecules in order to generate plausible compounds, thereby contributing to more accessible and explorable regions of chemical space without the need for brute force methods. We have used one such tool, a deep-learning software called DarkChem, which learns a representation of the molecular structure of compounds by compressing them into a latent space. With DarkChem's design, distance in this latent space is often associated with compound similarity, making sparse regions interesting targets for compound generation due to the possibility of generating novel compounds. In this study, we used 1 million small molecules (less than 1000 Da) to create a representative chemical space (defined by calculated molecular properties) of all small molecules. We identified regions with few or no compounds and investigated their location in DarkChem's latent space. From these spaces, we generated 694,645 valid molecules, all of which represent molecules not found in any chemical database to date. These molecules filled 50.8% of the probed empty spaces in molecular property space. Generated molecules are provided in the supporting information. |
0901.2159 | Nikesh Dattani | Nikesh S. Dattani | Modeling of neuron-semiconductor interactions in neuronal networks
interfaced with silicon chips | 14 Pages, 4 Figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent developments in the interfacing of neurons with silicon chips may pave
the way for progress in constructing scalable neurocomputers. The assembly of
synthetic neuronal networks with predefined synaptic connections and controlled
geometric structure has been realized experimentally within the last decade.
Furthermore, when such neuronal networks are interfaced with semiconductors,
action potentials in neurons of the network can be elicited by capacitative
stimulators, and voltage measurements can be made by transistors incorporated
into the associated silicon chip. Despite the impressive progress, such
preliminary devices have not yet demonstrated the performance of useful
computations, and constructing larger devices can be both expensive and
time-consuming. Accordingly, an appropriate modeling framework with the
capability to simulate current experimental results in such devices may be used
to make useful predictions regarding their potential computational power. A
proposed modeling framework for functional neuronal networks interfaced with
silicon chips is presented below.
| [
{
"created": "Thu, 15 Jan 2009 01:36:32 GMT",
"version": "v1"
}
] | 2009-01-16 | [
[
"Dattani",
"Nikesh S.",
""
]
] | Recent developments in the interfacing of neurons with silicon chips may pave the way for progress in constructing scalable neurocomputers. The assembly of synthetic neuronal networks with predefined synaptic connections and controlled geometric structure has been realized experimentally within the last decade. Furthermore, when such neuronal networks are interfaced with semiconductors, action potentials in neurons of the network can be elicited by capacitative stimulators, and voltage measurements can be made by transistors incorporated into the associated silicon chip. Despite the impressive progress, such preliminary devices have not yet demonstrated the performance of useful computations, and constructing larger devices can be both expensive and time-consuming. Accordingly, an appropriate modeling framework with the capability to simulate current experimental results in such devices may be used to make useful predictions regarding their potential computational power. A proposed modeling framework for functional neuronal networks interfaced with silicon chips is presented below. |
1911.08676 | Alin Voskanian-Kordi | Alin Voskanian-Kordi, Ashley Funai, Maricel G. Kann | DomainScope: A disease network based on protein domain connections | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein domains are highly conserved functional units of proteins. Because
they carry functionally significant information, the majority of the coding
disease variants are located on domains. Additionally, domains are specific
units of the proteins that can be targeted for drug delivery purposes. Here,
using information about variant sites associated with diseases, a disease
network was built, based on their sharing the same domain and domain variation
site. The result was 49,990 disease pairs linked by domain variant site and
533,687 disease pairs that share the same mutated domain. These pairs were
compared to disease pairs made using previous methods such as gene identity and
gene variant site identity, which revealed that over 8,000 of these pairs were
not only missing from the gene pairings but also not commonly found together in
the literature. The disease network was analyzed by disease subject
category, which, when compared to the gene-based disease network, revealed that
the domain method results in a higher number of connections across disease
categories than within a disease category. Further, a study into the drug
repurposing possibilities of the domain-based disease network revealed
that 16,902 of the disease pairs had a drug reported for one disease but not
the other, highlighting the drug repurposing potential of this new methodology.
| [
{
"created": "Wed, 20 Nov 2019 03:06:46 GMT",
"version": "v1"
}
] | 2019-11-21 | [
[
"Voskanian-Kordi",
"Alin",
""
],
[
"Funai",
"Ashley",
""
],
[
"Kann",
"Maricel G.",
""
]
] | Protein domains are highly conserved functional units of proteins. Because they carry functionally significant information, the majority of the coding disease variants are located on domains. Additionally, domains are specific units of the proteins that can be targeted for drug delivery purposes. Here, using information about variants sites associated with diseases, a disease network was built, based on their sharing the same domain and domain variation site. The result was 49,990 disease pairs linked by domain variant site and 533,687 disease pairs that share the same mutated domain. These pairs were compared to disease pairs made using previous methods such as gene identity and gene variant site identity, which revealed that over 8,000 of these pairs were not only missing from the gene pairings but also not found commonly together in literature. The disease network was analyzed from their disease subject categories, which when compared to the gene-based disease network revealed that the domain method results in higher number of connections across disease categories versus within a disease category. Further, a study into the drug repurposing possibilities of the disease network created using domain revealed that 16,902 of the disease pairs had a drug reported for one disease but not the other, highlighting the drug repurposing potential of this new methodology. |
1402.1553 | Mareike Fischer | Mareike Fischer and Steven Kelk | On the Maximum Parsimony distance between phylogenetic trees | 30 pages, 6 figures | null | null | null | q-bio.PE math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Within the field of phylogenetics there is great interest in distance
measures to quantify the dissimilarity of two trees. Here, based on an idea of
Bruen and Bryant, we propose and analyze a new distance measure: the Maximum
Parsimony (MP) distance. This is based on the difference of the parsimony
scores of a single character on both trees under consideration, and the goal is
to find the character which maximizes this difference. In this article we show
that this new distance is a metric and provides a lower bound to the well-known
Subtree Prune and Regraft (SPR) distance. We also show that to compute the MP
distance it is sufficient to consider only characters that are convex on one of
the trees, and prove several additional structural properties of the distance.
On the complexity side, we prove that calculating the MP distance is in general
NP-hard, and identify an interesting island of tractability in which the
distance can be calculated in polynomial time.
| [
{
"created": "Fri, 7 Feb 2014 05:21:58 GMT",
"version": "v1"
}
] | 2014-02-10 | [
[
"Fischer",
"Mareike",
""
],
[
"Kelk",
"Steven",
""
]
] | Within the field of phylogenetics there is great interest in distance measures to quantify the dissimilarity of two trees. Here, based on an idea of Bruen and Bryant, we propose and analyze a new distance measure: the Maximum Parsimony (MP) distance. This is based on the difference of the parsimony scores of a single character on both trees under consideration, and the goal is to find the character which maximizes this difference. In this article we show that this new distance is a metric and provides a lower bound to the well-known Subtree Prune and Regraft (SPR) distance. We also show that to compute the MP distance it is sufficient to consider only characters that are convex on one of the trees, and prove several additional structural properties of the distance. On the complexity side, we prove that calculating the MP distance is in general NP-hard, and identify an interesting island of tractability in which the distance can be calculated in polynomial time. |
1708.00525 | Milena Korostenskaja | Milena Korostenskaja, Christoph Kapeller, Ki H Lee, Christoph Guger,
James Baumgartner, Eduardo M. Castillo | Characterization of cortical motor function and imagery-related cortical
activity: Potential application for prehabilitation | 6 pages, 3 figures; IEEE SMC 2017: IEEE International Conference on
Systems, Man, and Cybernetics | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To minimize functional morbidity associated with brain surgery, new
preventive approaches (also referred to as "prehabilitation") using
motor-imagery-based brain-computer interfaces (MI-BCIs) can be utilized. To achieve
successful MI-BCI performance for prehabilitation purposes, the characteristics
of an electrocorticographic (ECoG) signal that is associated with overt motor
function ("real movement" - RM) versus covert motor function ("motor imagery" -
MI) need to be determined. In our current study, 5 patients with
pharmacoresistant epilepsy (2 males, average age 25 years, SD 15), undergoing
evaluation for epilepsy surgery participated in both RM and MI tasks. Although
the RM- and MI-related ECoG changes had some common features, they also
differed in a number of ways, such as location, frequency ranges, signal
synchronization and desynchronization. These similarities and differences are
discussed in view of other neuroimaging studies, including
magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI).
We emphasize the need to include a broad spectrum of frequencies in ECoG
analysis where RM- and MI-related activities are concerned.
| [
{
"created": "Tue, 1 Aug 2017 21:29:46 GMT",
"version": "v1"
}
] | 2017-08-03 | [
[
"Korostenskaja",
"Milena",
""
],
[
"Kapeller",
"Christoph",
""
],
[
"Lee",
"Ki H",
""
],
[
"Guger",
"Christoph",
""
],
[
"Baumgartner",
"James",
""
],
[
"Castillo",
"Eduardo M.",
""
]
] | To minimize functional morbidity associated with brain surgery, new preventive approaches (also referred to as "prehabilitation") by using motor-imagery-based computer interfaces (MI-BCIs) can be utilized. To achieve successful MI-BCI performance for prehabilitation purposes, the characteristics of an electrocorticographic (ECoG) signal that is associated with overt motor function ("real movement" - RM) versus covert motor function ("motor imagery" - MI) need to be determined. In our current study, 5 patients with pharmacoresistant epilepsy (2 males, average age 25 years, SD 15), undergoing evaluation for epilepsy surgery participated in both RM and MI tasks. Although the RM- and MI- related ECoG changes had some common features, they also differed in a number of ways, such as location, frequency ranges, signal synchronization and desynchronization. These similarities and differences are discussed in a view of other neuroimaging studies, including magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI). We emphasize the need for inclusion of a broad spectrum of frequencies in ECoG analysis, when RM- and MI- related activities are concerned. |
2404.18660 | Joanna Polanska | Karolina Kulis, Sarah Baatout, Kevin Tabury, Joanna Polanska, Mohammed
Abderrafi Benotmane | Diversity in the radiation-induced transcriptomic temporal response of
mouse brain tissue regions | null | null | null | null | q-bio.NC stat.AP | http://creativecommons.org/licenses/by/4.0/ | A number of studies have indicated a potential association between prenatal
exposure to radiation and late mental disabilities. This is believed to be due
to long-term developmental changes and functional impairment of the central
nervous system following radiation exposure during gestation. This study
conducted a bioinformatics analysis on transcriptomic profiles from mouse brain
tissue prenatally exposed to increasing doses of X-radiation. Gene expression
levels were assessed in different brain regions (cortex, hippocampus,
cerebellum) and collected at different time points (at 1 and 6 months after
birth) for C57BL mice exposed at embryonic day E11 to varying doses of
radiation (0, 0.1 and 1 Gy). This study aimed to elucidate the differences in
response to radiation between different brain regions at different intervals
after birth (1 and 6 months). The data was visualised using a two-dimensional
Uniform Manifold Approximation and Projection (UMAP) projection, and the
influence of the factors was investigated using analysis of variance (ANOVA).
It was observed that gene expression was influenced by each factor (tissue,
time, and dose), although to varying degrees. The gene expression trend within
doses was compared for each tissue, as well as the significant pathways between
tissues at different time intervals. Furthermore, in addition to
radiation-responsive pathways, Cytoscape's functional and network analyses
revealed changes in various pathways related to cognition, which is consistent
with previously published data [1] [2] [3], indicating late behavioural changes
in animals prenatally exposed to radiation.
| [
{
"created": "Mon, 29 Apr 2024 12:46:33 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Kulis",
"Karolina",
""
],
[
"Baatout",
"Sarah",
""
],
[
"Tabury",
"Kevin",
""
],
[
"Polanska",
"Joanna",
""
],
[
"Benotmane",
"Mohammed Abderrafi",
""
]
] | A number of studies have indicated a potential association between prenatal exposure to radiation and late mental disabilities. This is believed to be due to long-term developmental changes and functional impairment of the central nervous system following radiation exposure during gestation. This study conducted a bioinformatics analysis on transcriptomic profiles from mouse brain tissue prenatally exposed to increasing doses of X-radiation. Gene expression levels were assessed in different brain regions (cortex, hippocampus, cerebellum) and collected at different time points (at 1 and 6 months after birth) for C57BL mice exposed at embryonic day E11 to varying doses of radiation (0, 0.1 and 1 Gy). This study aimed to elucidate the differences in response to radiation between different brain regions at different intervals after birth (1 and 6 months). The data was visualised using a two-dimensional Uniform Manifold Approximation and Projection (UMAP) projection, and the influence of the factors was investigated using analysis of variance (ANOVA). It was observed that gene expression was influenced by each factor (tissue, time, and dose), although to varying degrees. The gene expression trend within doses was compared for each tissue, as well as the significant pathways between tissues at different time intervals. Furthermore, in addition to radiation-responsive pathways, Cytoscape's functional and network analyses revealed changes in various pathways related to cognition, which is consistent with previously published data [1] [2] [3], indicating late behavioural changes in animals prenatally exposed to radiation. |
2302.03471 | Jack Dekker | Jack Dekker | How a simple chloroplast psbA gene mutation changed world agriculture | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Atrazine as a weed control tactic profoundly changed world agriculture.
Long-term use revealed resistant biotypes, R, with a single base pair mutation
of the chloroplast psbA gene. The R phenotype emerged from a sequential cascade
of pleiotropic effects from the plastid to the whole plant. This reorganization
of the R biotype revealed photosynthetic regulation at different levels of
plant organization. The environment affected R plant productivity differently
than in the susceptible, S, biotype. A consistent, differential, pattern of
photosynthesis was observed between R and S over the diurnal light period.
Photosynthetic superiority of a biotype was a function of the time of day,
plant age, and temperature. Under highly favorable environmental conditions S often
had the advantage over R. Under less favorable, stressful conditions R could be
at an advantage over S. Pleiotropic reorganization revealed a sun-air-leaf
Shannon communication system, providing insights into the complex interaction
of chloroplast components in photosynthetic regulation. Altered plastid
thylakoid and stomatal function regulate how the R leaf utilizes the sun-air
environment. Movement of sun-air messages demonstrated how sunlight and air are
modified to a usable message for carbon fixation. These insights showed how
agriculture changed weed populations, and how resistant weed populations
changed agriculture. These changes changed herbicide resistance: the introduction
of herbicide-resistant crops (HRCs). The development of HRCs extended the
evolutionary reach of R weeds. R weed biotypes were naturally selected in these
introduced HRCs: an evolutionary spiral of human technology extended the
phenotypic reach of R biotypes.
| [
{
"created": "Tue, 7 Feb 2023 13:55:28 GMT",
"version": "v1"
}
] | 2023-02-08 | [
[
"Dekker",
"Jack",
""
]
] | Atrazine as a weed control tactic profoundly changed world agriculture. Long-term use revealed resistant biotypes, R, with a single base pair mutation of the chloroplast psbA gene. The R phenotype emerged from a sequential cascade of pleiotropic effects from the plastid to the whole plant. This reorganization of the R biotype revealed photosynthetic regulation at different levels of plant organization. The environment affected R plant productivity differently than in the susceptible, S, biotype. A consistent, differential, pattern of photosynthesis was observed between R and S over the diurnal light period. Photosynthetic superiority of a biotype was a function of the time of day, plant age temperature. Under highly favorable environmental conditions S often had the advantage over R. Under less favorable, stressful, conditions R can be at an advantage over S. Pleiotropic reorganization revealed a sun-air-leaf Shannon communication system, providing insights into the complex interaction of chloroplast components in photosynthetic regulation. Altered plastid thylakoid and stomatal function regulate how the R leaf utilizes the sun-air environment. Movement of sun-air messages demonstrated how sunlight and air are modified to a usable message for carbon fixation. These insights showed how agriculture changed weed populations, and how resistant weed populations changed agriculture. These changes changed herbicide resistance: introduction of herbicide resistant crops, HRC. The development of HRCs extended the evolutionary reach of R weeds. R weed biotypes were naturally selected in these introduced HRCs: an evolutionary spiral of human technology extended the phenotypic reach of R biotypes. |
2306.13769 | Haitao Lin | Haitao Lin, Yufei Huang, Odin Zhang, Lirong Wu, Siyuan Li, Zhiyuan
Chen, Stan Z. Li | Functional-Group-Based Diffusion for Pocket-Specific Molecule Generation
and Elaboration | 9 pages | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by/4.0/ | In recent years, AI-assisted drug design methods have been proposed to
generate molecules given the pockets' structures of target proteins. Most of
them are atom-level-based methods, which consider atoms as basic components and
generate atom positions and types. In this way, however, it is hard to generate
realistic fragments with complicated structures. To solve this, we propose
D3FG, a functional-group-based diffusion model for pocket-specific molecule
generation and elaboration. D3FG decomposes molecules into two categories of
components: functional groups defined as rigid bodies and linkers as mass
points. Together, the two kinds of components can form complicated fragments
that enhance ligand-protein interactions.
To be specific, in the diffusion process, D3FG diffuses the data distribution
of the positions, orientations, and types of the components into a prior
distribution; in the generative process, the noise is gradually removed from
the three variables by denoisers parameterized with designed equivariant graph
neural networks. In the experiments, our method can generate molecules with
more realistic 3D structures, competitive affinities toward the protein
targets, and better drug properties. Besides, D3FG, as a solution to a new task
of molecule elaboration, could generate molecules with high affinities based on
existing ligands and the hotspots of target proteins.
| [
{
"created": "Tue, 30 May 2023 06:41:20 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Oct 2023 13:45:06 GMT",
"version": "v2"
},
{
"created": "Mon, 18 Mar 2024 17:29:42 GMT",
"version": "v3"
}
] | 2024-03-19 | [
[
"Lin",
"Haitao",
""
],
[
"Huang",
"Yufei",
""
],
[
"Zhang",
"Odin",
""
],
[
"Wu",
"Lirong",
""
],
[
"Li",
"Siyuan",
""
],
[
"Chen",
"Zhiyuan",
""
],
[
"Li",
"Stan Z.",
""
]
] | In recent years, AI-assisted drug design methods have been proposed to generate molecules given the pockets' structures of target proteins. Most of them are atom-level-based methods, which consider atoms as basic components and generate atom positions and types. In this way, however, it is hard to generate realistic fragments with complicated structures. To solve this, we propose D3FG, a functional-group-based diffusion model for pocket-specific molecule generation and elaboration. D3FG decomposes molecules into two categories of components: functional groups defined as rigid bodies and linkers as mass points. And the two kinds of components can together form complicated fragments that enhance ligand-protein interactions. To be specific, in the diffusion process, D3FG diffuses the data distribution of the positions, orientations, and types of the components into a prior distribution; In the generative process, the noise is gradually removed from the three variables by denoisers parameterized with designed equivariant graph neural networks. In the experiments, our method can generate molecules with more realistic 3D structures, competitive affinities toward the protein targets, and better drug properties. Besides, D3FG as a solution to a new task of molecule elaboration, could generate molecules with high affinities based on existing ligands and the hotspots of target proteins. |
2010.00962 | Raphael Wittkowski | Michael te Vrugt, Jens Bickmann, Raphael Wittkowski | Containing a pandemic: Nonpharmaceutical interventions and the "second
wave" | 20 pages, 4 figures | Journal of Physics Communications 5, 055008 (2021) | 10.1088/2399-6528/abf79f | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In response to the worldwide outbreak of the coronavirus disease COVID-19, a
variety of nonpharmaceutical interventions such as face masks and social
distancing have been implemented. A careful assessment of the effects of such
containment strategies is required to avoid excessive social and economic
costs as well as a dangerous "second wave" of the pandemic. In this work, we
combine a recently developed dynamical density functional theory model and an
extended SIRD model with hysteresis to study effects of various measures and
strategies using realistic parameters. Depending on intervention thresholds, a
variety of phases with different numbers of shutdowns and deaths are found.
Spatiotemporal simulations provide further insights into the dynamics of a
second wave. Our results are of crucial importance for public health policy.
| [
{
"created": "Wed, 30 Sep 2020 20:30:54 GMT",
"version": "v1"
}
] | 2021-05-18 | [
[
"Vrugt",
"Michael te",
""
],
[
"Bickmann",
"Jens",
""
],
[
"Wittkowski",
"Raphael",
""
]
] ] | In response to the worldwide outbreak of the coronavirus disease COVID-19, a variety of nonpharmaceutical interventions such as face masks and social distancing have been implemented. A careful assessment of the effects of such containment strategies is required to avoid excessive social and economic costs as well as a dangerous "second wave" of the pandemic. In this work, we combine a recently developed dynamical density functional theory model and an extended SIRD model with hysteresis to study effects of various measures and strategies using realistic parameters. Depending on intervention thresholds, a variety of phases with different numbers of shutdowns and deaths are found. Spatiotemporal simulations provide further insights into the dynamics of a second wave. Our results are of crucial importance for public health policy.
1604.00167 | Wolfram Liebermeister | Elad Noor, Avi Flamholz, Arren Bar-Even, Dan Davidi, Ron Milo, Wolfram
Liebermeister | The protein cost of metabolic fluxes: prediction from enzymatic rate
laws and cost minimization | null | null | 10.1371/journal.pcbi.1005167 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bacterial growth depends crucially on metabolic fluxes, which are limited by
the cell's capacity to maintain metabolic enzymes. The necessary enzyme amount
per unit flux is a major determinant of metabolic strategies both in evolution
and bioengineering. It depends on enzyme parameters (such as kcat and KM
constants), but also on metabolite concentrations. Moreover, similar amounts of
different enzymes might incur different costs for the cell, depending on
enzyme-specific properties such as protein size and half-life. Here, we
developed enzyme cost minimization (ECM), a scalable method for computing
enzyme amounts that support a given metabolic flux at a minimal protein cost.
The complex interplay of enzyme and metabolite concentrations, e.g. through
thermodynamic driving forces and enzyme saturation, would make it hard to solve
this optimization problem directly. By treating enzyme cost as a function of
metabolite levels, we formulated ECM as a numerically tractable, convex
optimization problem. Its tiered approach allows for building models at
different levels of detail, depending on the amount of available data.
Validating our method with measured metabolite and protein levels in E. coli
central metabolism, we found typical prediction fold errors of 3.8 and 2.7,
respectively, for the two kinds of data. ECM can be used to predict enzyme
levels and protein cost in natural and engineered pathways, establishes a
direct connection between protein cost and thermodynamics, and provides a
physically plausible and computationally tractable way to include enzyme
kinetics into constraint-based metabolic models, where kinetics have usually
been ignored or oversimplified.
| [
{
"created": "Fri, 1 Apr 2016 08:48:32 GMT",
"version": "v1"
}
] | 2017-02-08 | [
[
"Noor",
"Elad",
""
],
[
"Flamholz",
"Avi",
""
],
[
"Bar-Even",
"Arren",
""
],
[
"Davidi",
"Dan",
""
],
[
"Milo",
"Ron",
""
],
[
"Liebermeister",
"Wolfram",
""
]
] | Bacterial growth depends crucially on metabolic fluxes, which are limited by the cell's capacity to maintain metabolic enzymes. The necessary enzyme amount per unit flux is a major determinant of metabolic strategies both in evolution and bioengineering. It depends on enzyme parameters (such as kcat and KM constants), but also on metabolite concentrations. Moreover, similar amounts of different enzymes might incur different costs for the cell, depending on enzyme-specific properties such as protein size and half-life. Here, we developed enzyme cost minimization (ECM), a scalable method for computing enzyme amounts that support a given metabolic flux at a minimal protein cost. The complex interplay of enzyme and metabolite concentrations, e.g. through thermodynamic driving forces and enzyme saturation, would make it hard to solve this optimization problem directly. By treating enzyme cost as a function of metabolite levels, we formulated ECM as a numerically tractable, convex optimization problem. Its tiered approach allows for building models at different levels of detail, depending on the amount of available data. Validating our method with measured metabolite and protein levels in E. coli central metabolism, we found typical prediction fold errors of 3.8 and 2.7, respectively, for the two kinds of data. ECM can be used to predict enzyme levels and protein cost in natural and engineered pathways, establishes a direct connection between protein cost and thermodynamics, and provides a physically plausible and computationally tractable way to include enzyme kinetics into constraint-based metabolic models, where kinetics have usually been ignored or oversimplified. |
1210.5779 | Anyou Wang | Anyou Wang | A quantitative system for discriminating induced pluripotent stem cells,
embryonic stem cells and somatic cells | null | null | 10.1371/journal.pone.0056095 | PLoS ONE 8(2): e56095 | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Embryonic stem cells (ESCs) and induced pluripotent stem cells (iPSCs)
derived from somatic cells (SCs) provide promising resources for regenerative
medicine and medical research, leading to a daily identification of new cell
lines. However, an efficient system to discriminate the cell lines is lacking.
Here, we developed a quantitative system to discriminate the three cell types,
iPSCs, ESCs and SCs. The system contains DNA-methylation biomarkers and
mathematical models, including an artificial neural network and support vector
machines. All biomarkers were unbiasedly selected by calculating an eigengene
score derived from analysis of genome-wide DNA methylations. With 30
biomarkers, or even with as few as 3 top biomarkers, this system can
discriminate SCs from ESCs and iPSCs with almost 100% accuracy, and with
approximately 100 biomarkers, the system can distinguish ESCs from iPSCs with
an accuracy of 95%. This robust system performs precisely with raw data without
normalization as well as with converted data in which the continuous
methylation levels are accounted for. Strikingly, this system can even
accurately predict new samples generated from different microarray platforms
and next-generation sequencing. The subtypes of cells, such as female and male
iPSCs and fetal and adult SCs, can also be discriminated with this system.
Thus, this quantitative system works as a novel, general, and accurate
framework for discriminating the three cell types, iPSCs, ESCs, and SCs, and
this strategy supports the notion that DNA methylation generally varies among
the three cell types.
| [
{
"created": "Sun, 21 Oct 2012 23:30:40 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Dec 2012 02:00:26 GMT",
"version": "v2"
}
] | 2013-02-15 | [
[
"Wang",
"Anyou",
""
]
] ] | Embryonic stem cells (ESCs) and induced pluripotent stem cells (iPSCs) derived from somatic cells (SCs) provide promising resources for regenerative medicine and medical research, leading to a daily identification of new cell lines. However, an efficient system to discriminate the cell lines is lacking. Here, we developed a quantitative system to discriminate the three cell types, iPSCs, ESCs and SCs. The system contains DNA-methylation biomarkers and mathematical models, including an artificial neural network and support vector machines. All biomarkers were unbiasedly selected by calculating an eigengene score derived from analysis of genome-wide DNA methylations. With 30 biomarkers, or even with as few as 3 top biomarkers, this system can discriminate SCs from ESCs and iPSCs with almost 100% accuracy, and with approximately 100 biomarkers, the system can distinguish ESCs from iPSCs with an accuracy of 95%. This robust system performs precisely with raw data without normalization as well as with converted data in which the continuous methylation levels are accounted for. Strikingly, this system can even accurately predict new samples generated from different microarray platforms and next-generation sequencing. The subtypes of cells, such as female and male iPSCs and fetal and adult SCs, can also be discriminated with this system. Thus, this quantitative system works as a novel, general, and accurate framework for discriminating the three cell types, iPSCs, ESCs, and SCs, and this strategy supports the notion that DNA methylation generally varies among the three cell types.
1912.01505 | Shu-Chuan Chen | Shu-Chuan Chen, Lung-An Li, and Jiping He | An integrated heterogeneous Poisson model for neuron functions in hand
movement during reaching and grasp | null | null | null | null | q-bio.NC stat.AP stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To understand potential encoding mechanism of motor cortical neurons for
control commands during reach-to-grasp movements, experiments to record
neuronal activities from primary motor cortical regions have been conducted in
many research laboratories (for example, (7), (17)). The most popular approach
in the neuroscience community is to fit an Analysis of Variance (ANOVA) model
using the firing rates of individual neurons. To account not only for neural
firing counts but also for temporal intervals, (5) proposed applying an
Analysis of Covariance (ANCOVA) model. Due to the nature of the data, in this
paper we propose to apply an integrated method, called the heterogeneous
Poisson regression model, to categorize different neural activities. Three
scenarios are discussed
to show that the proposed heterogeneous Poisson regression model can overcome
some disadvantages of the traditional Poisson regression model.
| [
{
"created": "Wed, 27 Nov 2019 07:30:55 GMT",
"version": "v1"
}
] | 2019-12-04 | [
[
"Chen",
"Shu-Chuan",
""
],
[
"Li",
"Lung-An",
""
],
[
"He",
"Jiping",
""
]
] ] | To understand potential encoding mechanism of motor cortical neurons for control commands during reach-to-grasp movements, experiments to record neuronal activities from primary motor cortical regions have been conducted in many research laboratories (for example, (7), (17)). The most popular approach in the neuroscience community is to fit an Analysis of Variance (ANOVA) model using the firing rates of individual neurons. To account not only for neural firing counts but also for temporal intervals, (5) proposed applying an Analysis of Covariance (ANCOVA) model. Due to the nature of the data, in this paper we propose to apply an integrated method, called the heterogeneous Poisson regression model, to categorize different neural activities. Three scenarios are discussed to show that the proposed heterogeneous Poisson regression model can overcome some disadvantages of the traditional Poisson regression model.
0907.0759 | Michal Komorowski | Michal Komorowski, Barbel Finkenstadt, Claire V. Harper, David A. Rand | Bayesian inference of biochemical kinetic parameters using the linear
noise approximation | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fluorescent and luminescent gene reporters allow us to dynamically quantify
changes in molecular species concentration over time on the single cell level.
The mathematical modeling of their interaction through multivariate dynamical
models requires the development of effective statistical methods to calibrate
such models against available data. Given the prevalence of stochasticity and
noise in biochemical systems, inference for stochastic models is of special
interest. In this paper we present a simple and computationally efficient
algorithm for the estimation of biochemical kinetic parameters from gene
reporter data. We use the linear noise approximation to model biochemical
reactions through a stochastic dynamic model which essentially approximates a
diffusion model by an ordinary differential equation model with an
appropriately defined noise process. An explicit formula for the likelihood
function can be derived allowing for computationally efficient parameter
estimation. The proposed algorithm is embedded in a Bayesian framework and
inference is performed using Markov chain Monte Carlo. The major advantage of
the method is that in contrast to the more established diffusion approximation
based methods the computationally costly methods of data augmentation are not
necessary. Our approach also allows for unobserved variables and measurement
error. The application of the method to both simulated and experimental data
shows that the proposed methodology provides a useful alternative to diffusion
approximation based methods.
| [
{
"created": "Sat, 4 Jul 2009 13:37:18 GMT",
"version": "v1"
}
] | 2009-07-07 | [
[
"Komorowski",
"Michal",
""
],
[
"Finkenstadt",
"Barbel",
""
],
[
"Harper",
"Claire V.",
""
],
[
"Rand",
"David A.",
""
]
] ] | Fluorescent and luminescent gene reporters allow us to dynamically quantify changes in molecular species concentration over time on the single cell level. The mathematical modeling of their interaction through multivariate dynamical models requires the development of effective statistical methods to calibrate such models against available data. Given the prevalence of stochasticity and noise in biochemical systems, inference for stochastic models is of special interest. In this paper we present a simple and computationally efficient algorithm for the estimation of biochemical kinetic parameters from gene reporter data. We use the linear noise approximation to model biochemical reactions through a stochastic dynamic model which essentially approximates a diffusion model by an ordinary differential equation model with an appropriately defined noise process. An explicit formula for the likelihood function can be derived allowing for computationally efficient parameter estimation. The proposed algorithm is embedded in a Bayesian framework and inference is performed using Markov chain Monte Carlo. The major advantage of the method is that in contrast to the more established diffusion approximation based methods the computationally costly methods of data augmentation are not necessary. Our approach also allows for unobserved variables and measurement error. The application of the method to both simulated and experimental data shows that the proposed methodology provides a useful alternative to diffusion approximation based methods.
0806.1685 | Vladislav Volman | Vladislav Volman and Herbert Levine | Activity-dependent stochastic resonance in recurrent neuronal networks | 4 pages, 4 figures, published in Physical Review E | null | 10.1103/PhysRevE.77.060903 | null | q-bio.NC q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use a biophysical model of a local neuronal circuit to study the
implications of synaptic plasticity for the detection of weak sensory stimuli.
Networks with fast plastic coupling show behavior consistent with stochastic
resonance. The addition of a slow coupling that accounts for the
asynchronous release of neurotransmitter results in qualitatively different
properties of signal detection, and also leads to the appearance of transient
post-stimulus bistability. Our results suggest testable hypotheses with regard
to the self-organization and dynamics of local neuronal circuits.
| [
{
"created": "Tue, 10 Jun 2008 15:04:03 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Volman",
"Vladislav",
""
],
[
"Levine",
"Herbert",
""
]
] ] | We use a biophysical model of a local neuronal circuit to study the implications of synaptic plasticity for the detection of weak sensory stimuli. Networks with fast plastic coupling show behavior consistent with stochastic resonance. The addition of a slow coupling that accounts for the asynchronous release of neurotransmitter results in qualitatively different properties of signal detection, and also leads to the appearance of transient post-stimulus bistability. Our results suggest testable hypotheses with regard to the self-organization and dynamics of local neuronal circuits.
2201.05612 | Bernard Auriol | B.M. Auriol, B. Auriol, J. B\'eard, B. Bib\'e, J.-M. Broto, D.F.
Descouens, L.J.S. Durand, J.-P. Florens, F. Garcia, C. Gillieaux, E.G.
Joiner, B. Libes, P. Pergent, R. Ruiz, C. Thalamas | Overt and covert paths for sound in the auditory system of mammals. 2 | null | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Current scientific consensus holds that sound is transmitted, solely
mechanically, from the tympanum to the cochlea via ossicles. However, this
theory does not explain the extreme quality of high-frequency hearing in
mammals. We therefore propose a bioelectronic pathway (the covert path) that
is
complementary to the overt path. We demonstrate experimentally that the
tympanum produces piezoelectric potentials isochronous to acoustic vibrations
thanks to its collagen fibers and that their amplitude increases along with the
frequency and level of the vibrations. This finding supports the existence of
an electrical pathway, specialized in transmitting high-frequency sounds,
which works in unison with the mechanical pathway. A bio-organic triode,
similar to a
field effect transistor, is the key mechanism of our hypothesized pathway. We
present evidence that any deficiency along this pathway produces hearing
impairment. By augmenting the classical theory of sound transmission, our
discovery offers new perspectives for research into both normal and
pathological audition and may contribute to an understanding of genetic and
physiological problems of hearing.
| [
{
"created": "Fri, 14 Jan 2022 15:30:24 GMT",
"version": "v1"
}
] | 2022-01-19 | [
[
"Auriol",
"B. M.",
""
],
[
"Auriol",
"B.",
""
],
[
"Béard",
"J.",
""
],
[
"Bibé",
"B.",
""
],
[
"Broto",
"J. -M.",
""
],
[
"Descouens",
"D. F.",
""
],
[
"Durand",
"L. J. S.",
""
],
[
"Florens",
... | Current scientific consensus holds that sound is transmitted, solely mechanically, from the tympanum to the cochlea via ossicles. However, this theory does not explain the extreme quality of high-frequency hearing in mammals. We therefore propose a bioelectronic pathway (the covert path) that is complementary to the overt path. We demonstrate experimentally that the tympanum produces piezoelectric potentials isochronous to acoustic vibrations thanks to its collagen fibers and that their amplitude increases along with the frequency and level of the vibrations. This finding supports the existence of an electrical pathway, specialized in transmitting high-frequency sounds, which works in unison with the mechanical pathway. A bio-organic triode, similar to a field effect transistor, is the key mechanism of our hypothesized pathway. We present evidence that any deficiency along this pathway produces hearing impairment. By augmenting the classical theory of sound transmission, our discovery offers new perspectives for research into both normal and pathological audition and may contribute to an understanding of genetic and physiological problems of hearing.
1807.04245 | Laurence Yang | Laurence Yang, Michael A. Saunders, Jean-Christophe Lachance, Bernhard
O. Palsson, Jos\'e Bento | Estimating Cellular Goals from High-Dimensional Biological Data | null | null | 10.1145/3292500.3330775 | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimization-based models have been used to predict cellular behavior for
over 25 years. The constraints in these models are derived from genome
annotations, measured macro-molecular composition of cells, and by measuring
the cell's growth rate and metabolism in different conditions. The cellular
goal (the optimization problem that the cell is trying to solve) can be
challenging to derive experimentally for many organisms, including human or
mammalian cells, which have complex metabolic capabilities and are not well
understood. Existing approaches to learning goals from data include (a)
estimating a linear objective function, or (b) estimating linear constraints
that model complex biochemical reactions and constrain the cell's operation.
The latter approach is important because often the known/observed biochemical
reactions are not enough to explain observations, and hence there is a need to
extend automatically the model complexity by learning new chemical reactions.
However, this leads to nonconvex optimization problems, and existing tools
cannot scale to realistically large metabolic models. Hence, constraint
estimation is still used sparingly despite its benefits for modeling cell
metabolism, which is important for developing novel antimicrobials against
pathogens, discovering cancer drug targets, and producing value-added
chemicals. Here, we develop the first approach to estimating constraint
reactions from data that can scale to realistically large metabolic models.
Previous tools have been used on problems having less than 75 biochemical
reactions and 60 metabolites, which limits real-life-size applications. We
perform extensive experiments using 75 large-scale metabolic network models for
different organisms (including bacteria, yeasts, and mammals) and show that our
algorithm can recover cellular constraint reactions, even when some
measurements are missing.
| [
{
"created": "Wed, 11 Jul 2018 16:57:57 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Oct 2018 20:44:09 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Feb 2019 09:00:30 GMT",
"version": "v3"
},
{
"created": "Mon, 20 May 2019 05:10:34 GMT",
"version": "v4"
}
] | 2019-05-21 | [
[
"Yang",
"Laurence",
""
],
[
"Saunders",
"Michael A.",
""
],
[
"Lachance",
"Jean-Christophe",
""
],
[
"Palsson",
"Bernhard O.",
""
],
[
"Bento",
"José",
""
]
] | Optimization-based models have been used to predict cellular behavior for over 25 years. The constraints in these models are derived from genome annotations, measured macro-molecular composition of cells, and by measuring the cell's growth rate and metabolism in different conditions. The cellular goal (the optimization problem that the cell is trying to solve) can be challenging to derive experimentally for many organisms, including human or mammalian cells, which have complex metabolic capabilities and are not well understood. Existing approaches to learning goals from data include (a) estimating a linear objective function, or (b) estimating linear constraints that model complex biochemical reactions and constrain the cell's operation. The latter approach is important because often the known/observed biochemical reactions are not enough to explain observations, and hence there is a need to extend automatically the model complexity by learning new chemical reactions. However, this leads to nonconvex optimization problems, and existing tools cannot scale to realistically large metabolic models. Hence, constraint estimation is still used sparingly despite its benefits for modeling cell metabolism, which is important for developing novel antimicrobials against pathogens, discovering cancer drug targets, and producing value-added chemicals. Here, we develop the first approach to estimating constraint reactions from data that can scale to realistically large metabolic models. Previous tools have been used on problems having less than 75 biochemical reactions and 60 metabolites, which limits real-life-size applications. We perform extensive experiments using 75 large-scale metabolic network models for different organisms (including bacteria, yeasts, and mammals) and show that our algorithm can recover cellular constraint reactions, even when some measurements are missing. |
q-bio/0312016 | Pengliang Shi | Pengliang Shi and Michael Small | Modelling of SARS for Hong Kong | 6 pages, 8 figures | null | null | null | q-bio.PE | null | A simplified susceptible-infected-recovered (SIR) epidemic model and a
small-world model are applied to analyse the spread and control of Severe Acute
Respiratory Syndrome (SARS) for Hong Kong in early 2003. From data available in
mid April 2003, we predict that SARS would be controlled by June and nearly
1700 persons would be infected based on the SIR model. This is consistent with
the known data. A simple way to evaluate the development and efficacy of
control is described and shown to provide a useful measure for the future
evolution of an epidemic. This may contribute to improving the government's
strategic response. The evaluation process here is universal and therefore
applicable to many similar homogeneous epidemic diseases within a fixed
population. A novel model consisting of map systems involving the Small-World
network principle is also described. We find that this model reproduces
qualitative features of the random disease propagation observed in the true
data. Unlike traditional deterministic models, scale-free phenomena are
observed in the epidemic network. The numerical simulations provide theoretical
support for current strategies and for achieving more efficient control of some
epidemic diseases, including SARS.
| [
{
"created": "Thu, 11 Dec 2003 21:24:25 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Shi",
"Pengliang",
""
],
[
"Small",
"Michael",
""
]
] ] | A simplified susceptible-infected-recovered (SIR) epidemic model and a small-world model are applied to analyse the spread and control of Severe Acute Respiratory Syndrome (SARS) for Hong Kong in early 2003. From data available in mid April 2003, we predict that SARS would be controlled by June and nearly 1700 persons would be infected based on the SIR model. This is consistent with the known data. A simple way to evaluate the development and efficacy of control is described and shown to provide a useful measure for the future evolution of an epidemic. This may contribute to improving the government's strategic response. The evaluation process here is universal and therefore applicable to many similar homogeneous epidemic diseases within a fixed population. A novel model consisting of map systems involving the Small-World network principle is also described. We find that this model reproduces qualitative features of the random disease propagation observed in the true data. Unlike traditional deterministic models, scale-free phenomena are observed in the epidemic network. The numerical simulations provide theoretical support for current strategies and for achieving more efficient control of some epidemic diseases, including SARS.
1509.02667 | Juliette Hell | Juliette Hell and Alan D. Rendall | Sustained oscillations in the MAP kinase cascade | null | null | null | null | q-bio.MN math.CA math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The MAP kinase cascade is a network of enzymatic reactions arranged in
layers. In each layer occurs a multiple futile cycle of phosphorylations. The
fully phosphorylated substrate then serves as an enzyme for the layer below.
This paper focusses on the existence of parameters for which Hopf bifurcations
occur and generate periodic orbits. Furthermore, it is explained how geometric
singular perturbation theory makes it possible to generalize results from
simple models to more complex ones.
| [
{
"created": "Wed, 9 Sep 2015 07:57:38 GMT",
"version": "v1"
}
] | 2015-09-10 | [
[
"Hell",
"Juliette",
""
],
[
"Rendall",
"Alan D.",
""
]
] ] | The MAP kinase cascade is a network of enzymatic reactions arranged in layers. In each layer occurs a multiple futile cycle of phosphorylations. The fully phosphorylated substrate then serves as an enzyme for the layer below. This paper focusses on the existence of parameters for which Hopf bifurcations occur and generate periodic orbits. Furthermore, it is explained how geometric singular perturbation theory makes it possible to generalize results from simple models to more complex ones.
2405.00751 | Shaoning Li | Shaoning Li, Yusong Wang, Mingyu Li, Jian Zhang, Bin Shao, Nanning
Zheng, Jian Tang | F$^3$low: Frame-to-Frame Coarse-grained Molecular Dynamics with SE(3)
Guided Flow Matching | Accepted by ICLR 2024 GEM workshop | null | null | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Molecular dynamics (MD) is a crucial technique for simulating biological
systems, enabling the exploration of their dynamic nature and fostering an
understanding of their functions and properties. To address exploration
inefficiency, emerging enhanced sampling approaches like coarse-graining (CG)
and generative models have been employed. In this work, we propose a
\underline{Frame-to-Frame} generative model with guided
\underline{Flow}-matching (F$^3$low) for enhanced sampling, which (a) extends
the domain of CG modeling to the SE(3) Riemannian manifold; (b) treats CGMD
simulations as autoregressive sampling guided by the previous frame via
flow-matching models; (c) targets the protein backbone, offering improved
insights into secondary structure formation and intricate folding pathways.
Compared to previous methods, F$^3$low allows for broader exploration of
conformational space. The ability to rapidly generate diverse conformations
via a force-free generative paradigm on SE(3) paves the way toward efficient
enhanced
sampling methods.
| [
{
"created": "Wed, 1 May 2024 04:53:14 GMT",
"version": "v1"
}
] | 2024-05-03 | [
[
"Li",
"Shaoning",
""
],
[
"Wang",
"Yusong",
""
],
[
"Li",
"Mingyu",
""
],
[
"Zhang",
"Jian",
""
],
[
"Shao",
"Bin",
""
],
[
"Zheng",
"Nanning",
""
],
[
"Tang",
"Jian",
""
]
] ] | Molecular dynamics (MD) is a crucial technique for simulating biological systems, enabling the exploration of their dynamic nature and fostering an understanding of their functions and properties. To address exploration inefficiency, emerging enhanced sampling approaches like coarse-graining (CG) and generative models have been employed. In this work, we propose a \underline{Frame-to-Frame} generative model with guided \underline{Flow}-matching (F$^3$low) for enhanced sampling, which (a) extends the domain of CG modeling to the SE(3) Riemannian manifold; (b) treats CGMD simulations as autoregressive sampling guided by the previous frame via flow-matching models; (c) targets the protein backbone, offering improved insights into secondary structure formation and intricate folding pathways. Compared to previous methods, F$^3$low allows for broader exploration of conformational space. The ability to rapidly generate diverse conformations via a force-free generative paradigm on SE(3) paves the way toward efficient enhanced sampling methods.
2202.05731 | Sitabhra Sinha | Chandrashekar Kuyyamudi, Shakti N. Menon, Sitabhra Sinha | Flags, Landscapes and Signaling: Contact-mediated inter-cellular
interactions enable plasticity in fate determination driven by positional
information | 10 pages, 6 figures | null | 10.1007/s12648-022-02348-6 | null | q-bio.TO nlin.PS physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multicellular organisms exhibit a high degree of structural organization with
specific cell types always occurring in characteristic locations. The
conventional framework for describing the emergence of such consistent spatial
patterns is provided by Wolpert's "French flag" paradigm. According to this
view, intra-cellular genetic regulatory mechanisms use positional information
provided by morphogen concentration gradients to differentially express
distinct fates, resulting in a characteristic pattern of differentiated cells.
However, recent experiments have shown that suppression of inter-cellular
interactions can alter these spatial patterns, suggesting that cell fates are
not exclusively determined by the regulation of gene expression by local
morphogen concentration. Using an explicit model where adjacent cells
communicate by Notch signaling, we provide a mechanistic description of how
contact-mediated interactions allow information from the cellular environment
to be incorporated into cell fate decisions. Viewing cellular differentiation
in terms of trajectories along an epigenetic landscape (as first enunciated by
Waddington), our results suggest that the contours of the landscape are moulded
differently in a cell position-dependent manner, not only by the global signal
provided by the morphogen but also by the local environment via cell-cell
interactions. We show that our results are robust with respect to different
choices of coupling between the inter-cellular signaling apparatus and the
intra-cellular gene regulatory dynamics. Indeed, we show that the broad
features can be observed even in abstract spin models. Our work reconciles
interaction-mediated self-organized pattern formation with boundary-organized
mechanisms involving signals that break symmetry.
| [
{
"created": "Fri, 11 Feb 2022 16:07:59 GMT",
"version": "v1"
}
] | 2022-05-11 | [
[
"Kuyyamudi",
"Chandrashekar",
""
],
[
"Menon",
"Shakti N.",
""
],
[
"Sinha",
"Sitabhra",
""
]
] | Multicellular organisms exhibit a high degree of structural organization with specific cell types always occurring in characteristic locations. The conventional framework for describing the emergence of such consistent spatial patterns is provided by Wolpert's "French flag" paradigm. According to this view, intra-cellular genetic regulatory mechanisms use positional information provided by morphogen concentration gradients to differentially express distinct fates, resulting in a characteristic pattern of differentiated cells. However, recent experiments have shown that suppression of inter-cellular interactions can alter these spatial patterns, suggesting that cell fates are not exclusively determined by the regulation of gene expression by local morphogen concentration. Using an explicit model where adjacent cells communicate by Notch signaling, we provide a mechanistic description of how contact-mediated interactions allow information from the cellular environment to be incorporated into cell fate decisions. Viewing cellular differentiation in terms of trajectories along an epigenetic landscape (as first enunciated by Waddington), our results suggest that the contours of the landscape are moulded differently in a cell position-dependent manner, not only by the global signal provided by the morphogen but also by the local environment via cell-cell interactions. We show that our results are robust with respect to different choices of coupling between the inter-cellular signaling apparatus and the intra-cellular gene regulatory dynamics. Indeed, we show that the broad features can be observed even in abstract spin models. Our work reconciles interaction-mediated self-organized pattern formation with boundary-organized mechanisms involving signals that break symmetry. |
1905.05678 | Adam Noel | Adam Noel, Shayan Monabbati, Dimitrios Makrakis, Andrew W. Eckford | Modeling Interference-Free Neuron Spikes with Optogenetic Stimulation | 12 pages, 11 figures, 7 tables. Submitted for publication. Portions
of this work appeared previously as arXiv:1710.11569, which is the conference
version of this article | null | 10.1109/TMBMC.2020.2981655 | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper predicts the ability to externally control the firing times of a
cortical neuron whose behavior follows the Izhikevich neuron model. The
Izhikevich neuron model provides an efficient and biologically plausible method
to track a cortical neuron's membrane potential and its firing times. The
external control is a simple optogenetic model represented by an illumination
source that stimulates a saturating or decaying membrane current. This paper
considers firing frequencies that are sufficiently low for the membrane
potential to return to its resting potential after it fires. The time required
for the neuron to charge and for the neuron to recover to the resting potential
are numerically fitted to functions of the Izhikevich neuron model parameters
and the peak input current. Results show that simple functions of the model
parameters and maximum input current can be used to predict the charging and
recovery times, even when there are deviations in the actual parameter values.
Furthermore, the predictions lead to lower bounds on the firing frequency that
can be achieved without significant distortion.
| [
{
"created": "Tue, 14 May 2019 15:43:27 GMT",
"version": "v1"
},
{
"created": "Sun, 29 Dec 2019 10:38:10 GMT",
"version": "v2"
}
] | 2020-04-24 | [
[
"Noel",
"Adam",
""
],
[
"Monabbati",
"Shayan",
""
],
[
"Makrakis",
"Dimitrios",
""
],
[
"Eckford",
"Andrew W.",
""
]
] | This paper predicts the ability to externally control the firing times of a cortical neuron whose behavior follows the Izhikevich neuron model. The Izhikevich neuron model provides an efficient and biologically plausible method to track a cortical neuron's membrane potential and its firing times. The external control is a simple optogenetic model represented by an illumination source that stimulates a saturating or decaying membrane current. This paper considers firing frequencies that are sufficiently low for the membrane potential to return to its resting potential after it fires. The time required for the neuron to charge and for the neuron to recover to the resting potential are numerically fitted to functions of the Izhikevich neuron model parameters and the peak input current. Results show that simple functions of the model parameters and maximum input current can be used to predict the charging and recovery times, even when there are deviations in the actual parameter values. Furthermore, the predictions lead to lower bounds on the firing frequency that can be achieved without significant distortion. |
1208.6552 | Frederick Matsen IV | David A. Nipperess and Frederick A. Matsen IV | The mean and variance of phylogenetic diversity under rarefaction | Final version to be published in Methods in Ecology and Evolution | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phylogenetic diversity (PD) depends on sampling intensity, which complicates
the comparison of PD between samples of different depth. One approach to
dealing with differing sample depth for a given diversity statistic is to
rarefy, which means to take a random subset of a given size of the original
sample. Exact analytical formulae for the mean and variance of species richness
under rarefaction have existed for some time but no such solution exists for
PD. We have derived exact formulae for the mean and variance of PD under
rarefaction. We show that these formulae are correct by comparing the
exact-solution mean and variance to those calculated by repeated random (Monte Carlo)
subsampling of a dataset of stem counts of woody shrubs of Toohey Forest,
Queensland, Australia. We also demonstrate the application of the method using
two examples: identifying hotspots of mammalian diversity in Australasian
ecoregions, and characterising the human vaginal microbiome. There is a very
high degree of correspondence between the analytical and random subsampling
methods for calculating mean and variance of PD under rarefaction, although the
Monte Carlo method requires a large number of random draws to converge on the
exact solution for the variance. Rarefaction of mammalian PD of ecoregions in
Australasia to a common standard of 25 species reveals very different rank
orderings of ecoregions, indicating quite different hotspots of diversity than
those obtained for unrarefied PD. The application of these methods to the
vaginal microbiome shows that a classical score used to quantify bacterial
vaginosis is correlated with the shape of the rarefaction curve. The analytical
formulae for the mean and variance of PD under rarefaction are both exact and
more efficient than repeated subsampling. Rarefaction of PD allows for many
applications where comparisons of samples of different depth are required.
| [
{
"created": "Fri, 31 Aug 2012 17:07:21 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Feb 2013 01:52:12 GMT",
"version": "v2"
}
] | 2015-03-20 | [
[
"Nipperess",
"David A.",
""
],
[
"Matsen",
"Frederick A.",
"IV"
]
] | Phylogenetic diversity (PD) depends on sampling intensity, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We show that these formulae are correct by comparing the exact-solution mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required. |
1903.04201 | Brian Mathias | Brian Mathias, Leona Sureth, Gesa Hartwigsen, Manuela Macedonia, Katja
M. Mayer, and Katharina von Kriegstein | A causal role of sensory cortices in behavioral benefits of 'learning by
doing' | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite a rise in the use of "learning by doing" pedagogical methods in
praxis, little is known as to how these methods improve learning outcomes. Here
we show that visual association cortex causally contributes to performance
benefits of a learning by doing method. This finding derives from transcranial
magnetic stimulation (TMS) and a gesture-enriched foreign language (L2)
vocabulary learning paradigm performed by 22 young adults. Inhibitory TMS of
visual motion cortex reduced learning outcomes for abstract and concrete
gesture-enriched words in comparison to sham stimulation. There were no TMS
effects on words learned with pictures. These results adjudicate between
opposing predictions of two neuroscientific learning theories: While
reactivation-based theories predict no functional role of visual motion cortex
in vocabulary learning outcomes, the current study supports the predictive
coding theory view that specialized sensory cortices precipitate
sensorimotor-based learning benefits.
| [
{
"created": "Mon, 11 Mar 2019 10:28:42 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Oct 2019 00:45:27 GMT",
"version": "v2"
}
] | 2019-10-17 | [
[
"Mathias",
"Brian",
""
],
[
"Sureth",
"Leona",
""
],
[
"Hartwigsen",
"Gesa",
""
],
[
"Macedonia",
"Manuela",
""
],
[
"Mayer",
"Katja M.",
""
],
[
"von Kriegstein",
"Katharina",
""
]
] | Despite a rise in the use of "learning by doing" pedagogical methods in praxis, little is known as to how these methods improve learning outcomes. Here we show that visual association cortex causally contributes to performance benefits of a learning by doing method. This finding derives from transcranial magnetic stimulation (TMS) and a gesture-enriched foreign language (L2) vocabulary learning paradigm performed by 22 young adults. Inhibitory TMS of visual motion cortex reduced learning outcomes for abstract and concrete gesture-enriched words in comparison to sham stimulation. There were no TMS effects on words learned with pictures. These results adjudicate between opposing predictions of two neuroscientific learning theories: While reactivation-based theories predict no functional role of visual motion cortex in vocabulary learning outcomes, the current study supports the predictive coding theory view that specialized sensory cortices precipitate sensorimotor-based learning benefits. |
2204.01369 | Etienne Couturier | Manon Quiros, Marie-B\'eatrice Bogeat-Triboulot, Etienne Couturier,
Evelyne Kolb | Plant root growth against a mechanical obstacle: The early growth
response of a maize root facing an axial resistance agrees with the Lockhart
model | null | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Plant root growth is dramatically reduced in compacted soils, affecting the
growth of the whole plant. Through a model experiment coupling force and
kinematics measurements, we probed the force-growth relationship of a primary
root contacting a stiff resisting obstacle that mimics the strongest soil
impedance variation encountered by a growing root. The growth of maize roots
just emerging from a corseting agarose gel and contacting a force sensor
(acting as an obstacle) was monitored by time-lapse imaging simultaneously to
the force.
The evolution of the velocity field along the root was obtained from
kinematics analysis of the root texture with a PIV-derived technique. A
triangular fit was introduced to retrieve the elemental elongation rate or
strain rate. A parameter-free model based on the Lockhart law quantitatively
predicts how the force at the obstacle modifies several features of the growth
distribution (length of the growth zone, maximal elemental elongation rate,
velocity) during the first 10 minutes. These results suggest a strong
similarity of the early growth responses elicited either by a directional
stress (contact) or by an isotropic perturbation (hyperosmotic bath).
| [
{
"created": "Mon, 4 Apr 2022 10:29:17 GMT",
"version": "v1"
},
{
"created": "Mon, 9 May 2022 14:41:03 GMT",
"version": "v2"
}
] | 2022-05-10 | [
[
"Quiros",
"Manon",
""
],
[
"Bogeat-Triboulot",
"Marie-Béatrice",
""
],
[
"Couturier",
"Etienne",
""
],
[
"Kolb",
"Evelyne",
""
]
] | Plant root growth is dramatically reduced in compacted soils, affecting the growth of the whole plant. Through a model experiment coupling force and kinematics measurements, we probed the force-growth relationship of a primary root contacting a stiff resisting obstacle that mimics the strongest soil impedance variation encountered by a growing root. The growth of maize roots just emerging from a corseting agarose gel and contacting a force sensor (acting as an obstacle) was monitored by time-lapse imaging simultaneously to the force. The evolution of the velocity field along the root was obtained from kinematics analysis of the root texture with a PIV-derived technique. A triangular fit was introduced to retrieve the elemental elongation rate or strain rate. A parameter-free model based on the Lockhart law quantitatively predicts how the force at the obstacle modifies several features of the growth distribution (length of the growth zone, maximal elemental elongation rate, velocity) during the first 10 minutes. These results suggest a strong similarity of the early growth responses elicited either by a directional stress (contact) or by an isotropic perturbation (hyperosmotic bath). |
2107.00748 | Mareike Fischer | Mareike Fischer and Andrew Francis and Kristina Wicke | Phylogenetic Diversity Rankings in the Face of Extinctions: the
Robustness of the Fair Proportion Index | null | null | null | null | q-bio.PE math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Planning for the protection of species often involves difficult choices about
which species to prioritize, given constrained resources. One way of
prioritizing species is to consider their "evolutionary distinctiveness", i.e.
their relative evolutionary isolation on a phylogenetic tree. Several
evolutionary isolation metrics or phylogenetic diversity indices have been
introduced in the literature, among them the so-called Fair Proportion index
(also known as the "evolutionary distinctiveness" score). This index apportions
the total diversity of a tree among all leaves, thereby providing a simple
prioritization criterion for conservation.
Here, we focus on the prioritization order obtained from the Fair Proportion
index and analyze the effects of species extinction on this ranking. More
precisely, we analyze the extent to which the ranking order may change when
some species go extinct and the Fair Proportion index is re-computed for the
remaining taxa. We show that for each phylogenetic tree, there are edge lengths
such that the extinction of one leaf per cherry completely reverses the
ranking. Moreover, we show that even if only the lowest ranked species goes
extinct, the ranking order may drastically change. We end by analyzing the
effects of these two extinction scenarios (extinction of the lowest ranked
species and extinction of one leaf per cherry) for a collection of empirical
and simulated trees. In both cases, we can observe significant changes in the
prioritization orders, highlighting the empirical relevance of our theoretical
findings.
| [
{
"created": "Thu, 1 Jul 2021 21:29:30 GMT",
"version": "v1"
}
] | 2021-07-05 | [
[
"Fischer",
"Mareike",
""
],
[
"Francis",
"Andrew",
""
],
[
"Wicke",
"Kristina",
""
]
] | Planning for the protection of species often involves difficult choices about which species to prioritize, given constrained resources. One way of prioritizing species is to consider their "evolutionary distinctiveness", i.e. their relative evolutionary isolation on a phylogenetic tree. Several evolutionary isolation metrics or phylogenetic diversity indices have been introduced in the literature, among them the so-called Fair Proportion index (also known as the "evolutionary distinctiveness" score). This index apportions the total diversity of a tree among all leaves, thereby providing a simple prioritization criterion for conservation. Here, we focus on the prioritization order obtained from the Fair Proportion index and analyze the effects of species extinction on this ranking. More precisely, we analyze the extent to which the ranking order may change when some species go extinct and the Fair Proportion index is re-computed for the remaining taxa. We show that for each phylogenetic tree, there are edge lengths such that the extinction of one leaf per cherry completely reverses the ranking. Moreover, we show that even if only the lowest ranked species goes extinct, the ranking order may drastically change. We end by analyzing the effects of these two extinction scenarios (extinction of the lowest ranked species and extinction of one leaf per cherry) for a collection of empirical and simulated trees. In both cases, we can observe significant changes in the prioritization orders, highlighting the empirical relevance of our theoretical findings. |
1005.0753 | Valery Mukhin | V. Mukhin, V. Klimenko | Mobilisation readiness state and the frequency structure of heart rate
variability | 14 pages, 5 figures | Ross Fiziol Zh Im I M Sechenova. 2009 Apr;95(4):367-75. | null | null | q-bio.TO q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A number of studies showed association of mental status with heart rate
variability. This work identifies a feature of the frequency structure of heart
rate variability that is associated with mental readiness. In three independent
groups of 64, 39, and 19 volunteers, factor analysis of heart rate
periodograms revealed at least two further heart rate oscillation phenomena,
beyond the well-known low-frequency oscillations and respiratory arrhythmia,
with periods of 3 and 4 heart beats. Further observation in two independent
groups of 12 and 7 showed that the amplitude of the 3-beat oscillation is
associated with the level of mental readiness. Moreover, we suggest that mental
readiness can be assessed with a mathematical model based on the heart rate
periodogram.
| [
{
"created": "Wed, 5 May 2010 13:59:53 GMT",
"version": "v1"
}
] | 2010-05-07 | [
[
"Mukhin",
"V.",
""
],
[
"Klimenko",
"V.",
""
]
] | A number of studies showed association of mental status with heart rate variability. This work identifies a feature of the frequency structure of heart rate variability that is associated with mental readiness. In three independent groups of 64, 39, and 19 volunteers, factor analysis of heart rate periodograms revealed at least two further heart rate oscillation phenomena, beyond the well-known low-frequency oscillations and respiratory arrhythmia, with periods of 3 and 4 heart beats. Further observation in two independent groups of 12 and 7 showed that the amplitude of the 3-beat oscillation is associated with the level of mental readiness. Moreover, we suggest that mental readiness can be assessed with a mathematical model based on the heart rate periodogram. |
1106.1381 | Tom Kelsey | T W Kelsey, P Wright, S M Nelson, R A Anderson and W H B Wallace | A validated model of serum anti-Mullerian hormone from conception to
menopause | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Anti-Mullerian hormone (AMH) is a product of growing ovarian
follicles. The concentration of AMH in blood may also reflect the non-growing
follicle (NGF) population, i.e. the ovarian reserve, and be of value in
predicting reproductive lifespan. A full description of AMH production up to
the menopause has not been reported. Methodology/Principal Findings: By
searching the published literature for AMH concentrations in healthy
pre-menopausal females, and using our own data (combined n = 3,260) we have
generated and robustly validated the first model of AMH concentration from
conception to menopause. This model shows that 34% of the variation in AMH is
due to age alone. We have shown that AMH peaks at age 24.5 years, followed by a
decline to the menopause. We have also shown that there is a neonatal peak and
a potential pre-pubertal peak. Our model allows us to generate normative data
at all ages. Conclusions/Significance: These data highlight key inflection
points in ovarian follicle dynamics. This first validated model of circulating
AMH in healthy females describes a transition period in early adulthood, after
which AMH reflects the progressive loss of the NGF pool. The existence of a
neonatal increase in gonadal activity is confirmed for females. An improved
understanding of the relationship between circulating AMH and age will lead to
more accurate assessment of ovarian reserve for the individual woman.
| [
{
"created": "Tue, 7 Jun 2011 16:00:04 GMT",
"version": "v1"
}
] | 2011-06-08 | [
[
"Kelsey",
"T W",
""
],
[
"Wright",
"P",
""
],
[
"Nelson",
"S M",
""
],
[
"Anderson",
"R A",
""
],
[
"Wallace",
"W H B",
""
]
] | Background: Anti-Mullerian hormone (AMH) is a product of growing ovarian follicles. The concentration of AMH in blood may also reflect the non-growing follicle (NGF) population, i.e. the ovarian reserve, and be of value in predicting reproductive lifespan. A full description of AMH production up to the menopause has not been reported. Methodology/Principal Findings: By searching the published literature for AMH concentrations in healthy pre-menopausal females, and using our own data (combined n = 3,260) we have generated and robustly validated the first model of AMH concentration from conception to menopause. This model shows that 34% of the variation in AMH is due to age alone. We have shown that AMH peaks at age 24.5 years, followed by a decline to the menopause. We have also shown that there is a neonatal peak and a potential pre-pubertal peak. Our model allows us to generate normative data at all ages. Conclusions/Significance: These data highlight key inflection points in ovarian follicle dynamics. This first validated model of circulating AMH in healthy females describes a transition period in early adulthood, after which AMH reflects the progressive loss of the NGF pool. The existence of a neonatal increase in gonadal activity is confirmed for females. An improved understanding of the relationship between circulating AMH and age will lead to more accurate assessment of ovarian reserve for the individual woman. |
2003.06188 | Larissa Terumi Arashiro | Larissa T. Arashiro, Ivet Ferrer, Diederik P.L. Rousseau, Stijn W.H.
Van Hulle, Marianna Garfi | The effect of primary treatment of wastewater in high rate algal pond
systems: biomass and bioenergy recovery | null | Bioresource Technology 280, 27-36 (2019) | 10.1016/j.biortech.2019.01.096 | null | q-bio.QM q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The aim of this study was to assess the effect of primary treatment on the
performance of two pilot-scale high rate algal ponds (HRAPs) treating urban
wastewater, considering their treatment efficiency, biomass productivity,
characteristics and biogas production potential. Results indicated that the
primary treatment did not significantly affect the wastewater treatment
efficiency (NH4+-N removal of 93 and 91% and COD removal of 62 and 65% in HRAP
with and without primary treatment, respectively). The HRAP without primary
treatment had higher biodiversity and productivity (18 vs. 16 g VSS/m2d).
Biomass from both systems presented good settling capacity. Results of
biochemical methane potential test showed that co-digesting microalgae and
primary sludge led to higher methane yields (238 - 258 mL CH4/g VS) compared
with microalgae mono-digestion (189 - 225 mL CH4/g VS). Overall, HRAPs with and
without primary treatment seem to be appropriate alternatives for combining
wastewater treatment and bioenergy recovery.
| [
{
"created": "Fri, 13 Mar 2020 10:25:01 GMT",
"version": "v1"
}
] | 2020-03-16 | [
[
"Arashiro",
"Larissa T.",
""
],
[
"Ferrer",
"Ivet",
""
],
[
"Rousseau",
"Diederik P. L.",
""
],
[
"Van Hulle",
"Stijn W. H.",
""
],
[
"Garfi",
"Marianna",
""
]
] | The aim of this study was to assess the effect of primary treatment on the performance of two pilot-scale high rate algal ponds (HRAPs) treating urban wastewater, considering their treatment efficiency, biomass productivity, characteristics and biogas production potential. Results indicated that the primary treatment did not significantly affect the wastewater treatment efficiency (NH4+-N removal of 93 and 91% and COD removal of 62 and 65% in HRAP with and without primary treatment, respectively). The HRAP without primary treatment had higher biodiversity and productivity (18 vs. 16 g VSS/m2d). Biomass from both systems presented good settling capacity. Results of biochemical methane potential test showed that co-digesting microalgae and primary sludge led to higher methane yields (238 - 258 mL CH4/g VS) compared with microalgae mono-digestion (189 - 225 mL CH4/g VS). Overall, HRAPs with and without primary treatment seem to be appropriate alternatives for combining wastewater treatment and bioenergy recovery. |
2102.10040 | Archan Mukhopadhyay | Archan Mukhopadhyay and Sagar Chakraborty | Deciphering chaos in evolutionary games | null | Chaos: An Interdisciplinary Journal of Nonlinear Science 30,
121104 (2020) | 10.1063/5.0029480 | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The discrete-time replicator map is a prototype of evolutionary selection game
dynamical models that have been very successful across disciplines in rendering
insights into the attainment of the equilibrium outcomes, like the Nash
equilibrium and the evolutionarily stable strategy. By construction, only the
fixed point solutions of the dynamics can possibly be interpreted as the
aforementioned game-theoretic solution concepts. Although more complex outcomes
like chaos are omnipresent in nature, it is not known to which
game-theoretic solutions they correspond. Here we construct a game-theoretic
solution that is realized as the chaotic outcomes in the selection monotone
game dynamic. To this end, we invoke the idea that in a population game having
two-player--two-strategy one-shot interactions, it is the product of the
fitness and the heterogeneity (the probability of finding two individuals
playing different strategies in the infinitely large population) that is
optimized over the generations of the evolutionary process.
| [
{
"created": "Wed, 17 Feb 2021 14:36:43 GMT",
"version": "v1"
}
] | 2021-02-22 | [
[
"Mukhopadhyay",
"Archan",
""
],
[
"Chakraborty",
"Sagar",
""
]
] | The discrete-time replicator map is a prototype of evolutionary selection game dynamical models that have been very successful across disciplines in rendering insights into the attainment of the equilibrium outcomes, like the Nash equilibrium and the evolutionarily stable strategy. By construction, only the fixed point solutions of the dynamics can possibly be interpreted as the aforementioned game-theoretic solution concepts. Although more complex outcomes like chaos are omnipresent in nature, it is not known to which game-theoretic solutions they correspond. Here we construct a game-theoretic solution that is realized as the chaotic outcomes in the selection monotone game dynamic. To this end, we invoke the idea that in a population game having two-player--two-strategy one-shot interactions, it is the product of the fitness and the heterogeneity (the probability of finding two individuals playing different strategies in the infinitely large population) that is optimized over the generations of the evolutionary process. |
1602.08889 | Govardhan Reddy | Hiranmay Maity and Govardhan Reddy | Folding of Protein L with implications for collapse in the denatured
state ensemble | 5 figures and Supplementary Information | null | 10.1021/jacs.5b11300 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A fundamental question in protein folding is whether the coil to globule
collapse transition occurs during the initial stages of folding (burst-phase)
or simultaneously with the protein folding transition. Single molecule
fluorescence resonance energy transfer (FRET) and small angle X-ray scattering
(SAXS) experiments disagree on whether the Protein L collapse transition occurs
during the burst-phase of folding. We study Protein L folding using a
coarse-grained model and molecular dynamics simulations. The collapse
transition in Protein L is found to be concomitant with the folding transition.
In the burst-phase of folding, we find that FRET experiments overestimate the
radius of gyration, $R_g$, of the protein due to the application of a Gaussian
polymer chain end-to-end distribution to extract $R_g$ from the FRET
efficiency. FRET experiments estimate an $\approx$ 6\AA \ decrease in $R_g$ when
the actual decrease is $\approx$ 3\AA \ on Guanidinium Chloride denaturant
dilution from 7.5M to 1M, thereby suggesting pronounced compaction in the
protein dimensions in the burst-phase. The $\approx$ 3\AA \ decrease is close
to the statistical uncertainties of the $R_g$ data measured from SAXS
experiments, which suggest no compaction, leading to a disagreement with the
FRET experiments. The transition state ensemble (TSE) structures in Protein L
folding are globular and extensive in agreement with the $\Psi$-analysis
experiments. The results support the hypothesis that the TSE of single domain
proteins depends on protein topology, and is not stabilised by local
interactions alone.
| [
{
"created": "Mon, 29 Feb 2016 10:01:54 GMT",
"version": "v1"
}
] | 2016-03-01 | [
[
"Maity",
"Hiranmay",
""
],
[
"Reddy",
"Govardhan",
""
]
] | A fundamental question in protein folding is whether the coil to globule collapse transition occurs during the initial stages of folding (burst-phase) or simultaneously with the protein folding transition. Single molecule fluorescence resonance energy transfer (FRET) and small angle X-ray scattering (SAXS) experiments disagree on whether Protein L collapse transition occurs during the burst-phase of folding. We study Protein L folding using a coarse-grained model and molecular dynamics simulations. The collapse transition in Protein L is found to be concomitant with the folding transition. In the burst-phase of folding, we find that FRET experiments overestimate the radius of gyration, $R_g$, of the protein due to the application of a Gaussian polymer chain end-to-end distribution to extract $R_g$ from the FRET efficiency. FRET experiments estimate an $\approx$ 6\AA \ decrease in $R_g$ when the actual decrease is $\approx$ 3\AA \ on Guanidinium Chloride denaturant dilution from 7.5M to 1M, thereby suggesting pronounced compaction in the protein dimensions in the burst-phase. The $\approx$ 3\AA \ decrease is close to the statistical uncertainties of the $R_g$ data measured from SAXS experiments, which suggest no compaction, leading to a disagreement with the FRET experiments. The transition state ensemble (TSE) structures in Protein L folding are globular and extensive in agreement with the $\Psi$-analysis experiments. The results support the hypothesis that the TSE of single domain proteins depends on protein topology, and is not stabilised by local interactions alone. |
2309.04602 | Michael Kuczynski | Michael T. Kuczynski, Nathan J. Neeteson, Kathryn S. Stok, Andrew J.
Burghardt, Michelle A. Espinosa Hernandez, Jared Vicory, Justin J. Tse,
Pholpat Durongbhan, Serena Bonaretti, Andy Kin On Wong, Steven K. Boyd, Sarah
L. Manske | ORMIR_XCT: A Python package for high resolution peripheral quantitative
computed tomography image processing | null | Journal of Open Source Software, 9(97), 6084 (2024) | 10.21105/joss.06084 | null | q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | High resolution peripheral quantitative computed tomography (HR-pQCT) is an
imaging technique capable of imaging trabecular bone in vivo. HR-pQCT has a
wide range of applications, primarily focused on bone to improve our
understanding of musculoskeletal diseases, assess epidemiological associations,
and evaluate the effects of pharmaceutical interventions. Processing HR-pQCT
images has largely been supported using the scanner manufacturer's scripting
language (Image Processing Language, IPL, Scanco Medical). However, by
expanding image processing workflows outside of the scanner manufacturer's
software environment, users have the flexibility to apply more advanced
mathematical techniques and leverage modern software packages to improve image
processing. The ORMIR_XCT Python package was developed to reimplement some
existing IPL workflows and provide an open and reproducible package allowing
for the development of advanced HR-pQCT data processing workflows.
| [
{
"created": "Fri, 8 Sep 2023 21:27:11 GMT",
"version": "v1"
}
] | 2024-06-21 | [
[
"Kuczynski",
"Michael T.",
""
],
[
"Neeteson",
"Nathan J.",
""
],
[
"Stok",
"Kathryn S.",
""
],
[
"Burghardt",
"Andrew J.",
""
],
[
"Hernandez",
"Michelle A. Espinosa",
""
],
[
"Vicory",
"Jared",
""
],
[
"Tse",
... | High resolution peripheral quantitative computed tomography (HR-pQCT) is an imaging technique capable of imaging trabecular bone in vivo. HR-pQCT has a wide range of applications, primarily focused on bone to improve our understanding of musculoskeletal diseases, assess epidemiological associations, and evaluate the effects of pharmaceutical interventions. Processing HR-pQCT images has largely been supported using the scanner manufacturer's scripting language (Image Processing Language, IPL, Scanco Medical). However, by expanding image processing workflows outside of the scanner manufacturer's software environment, users have the flexibility to apply more advanced mathematical techniques and leverage modern software packages to improve image processing. The ORMIR_XCT Python package was developed to reimplement some existing IPL workflows and provide an open and reproducible package allowing for the development of advanced HR-pQCT data processing workflows. |
1308.2007 | Wei Zhang | Wei Zhang, Tong Zhou, Shwu-Fan Ma, Robert F. Machado, Sangeeta M.
Bhorade, Joe G.N. Garcia | MicroRNAs Implicated in Dysregulation of Gene Expression Following Human
Lung Transplantation | null | Transl Respir Med. 2013; 1: 12 | 10.1186/2213-0802-1-12 | null | q-bio.GN | http://creativecommons.org/licenses/by/3.0/ | Lung transplantation remains the only viable treatment option for the
majority of patients with advanced lung diseases. However, 5-year
post-transplant survival rates remain low primarily secondary to chronic
rejection. Novel insights from global gene expression profiles may provide
molecular phenotypes and therapeutic targets to improve outcomes after lung
transplantation. We showed the presence of a significant number of dysregulated
genes, particularly those genes involved in pathways and biological processes
such as immune response and defense, in the PBMCs derived from a cohort of
patients after lung transplantation. The contribution of miRNAs in regulating
these differential genes was also demonstrated.
| [
{
"created": "Fri, 9 Aug 2013 01:08:52 GMT",
"version": "v1"
}
] | 2013-08-12 | [
[
"Zhang",
"Wei",
""
],
[
"Zhou",
"Tong",
""
],
[
"Ma",
"Shwu-Fan",
""
],
[
"Machado",
"Robert F.",
""
],
[
"Bhorade",
"Sangeeta M.",
""
],
[
"Garcia",
"Joe G. N.",
""
]
] | Lung transplantation remains the only viable treatment option for the majority of patients with advanced lung diseases. However, 5-year post-transplant survival rates remain low primarily secondary to chronic rejection. Novel insights from global gene expression profiles may provide molecular phenotypes and therapeutic targets to improve outcomes after lung transplantation. We showed the presence of a significant number of dysregulated genes, particularly those genes involved in pathways and biological processes such as immune response and defense, in the PBMCs derived from a cohort of patients after lung transplantation. The contribution of miRNAs in regulating these differential genes was also demonstrated. |
1008.0335 | Jose A Capitan | Jose A. Capitan and Jose A. Cuesta | Catastrophic regime shifts in model ecological communities are true
phase transitions | 19 pages, 11 figures, revised version | Journal of Statistical Mechanics: Theory and Experiment 10, P10003
(2010) | 10.1088/1742-5468/2010/10/P10003 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ecosystems often undergo abrupt regime shifts in response to gradual external
changes. These shifts are theoretically understood as a regime switch between
alternative stable states of the ecosystem dynamical response to smooth changes
in external conditions. Usual models introduce nonlinearities in the
macroscopic dynamics of the ecosystem that lead to different stable attractors
among which the shift takes place. Here we propose an alternative explanation
of catastrophic regime shifts based on a recent model that pictures ecological
communities as systems in continuous fluctuation, according to certain
transition probabilities, between different micro-states in the phase space of
viable communities. We introduce a spontaneous extinction rate that accounts
for gradual changes in external conditions, and upon variations on this control
parameter the system undergoes a regime shift with similar features to those
previously reported. Under our microscopic viewpoint we recover the main
results obtained in previous theoretical and empirical work (anomalous
variance, hysteresis cycles, trophic cascades). The model predicts a gradual
loss of species in trophic levels from bottom to top near the transition. But
more importantly, the spectral analysis of the transition probability matrix
allows us to rigorously establish that we are observing the fingerprints, in a
finite size system, of a true phase transition driven by background
extinctions.
| [
{
"created": "Mon, 2 Aug 2010 16:28:33 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Sep 2010 09:49:21 GMT",
"version": "v2"
}
] | 2015-02-18 | [
[
"Capitan",
"Jose A.",
""
],
[
"Cuesta",
"Jose A.",
""
]
] | Ecosystems often undergo abrupt regime shifts in response to gradual external changes. These shifts are theoretically understood as a regime switch between alternative stable states of the ecosystem dynamical response to smooth changes in external conditions. Usual models introduce nonlinearities in the macroscopic dynamics of the ecosystem that lead to different stable attractors among which the shift takes place. Here we propose an alternative explanation of catastrophic regime shifts based on a recent model that pictures ecological communities as systems in continuous fluctuation, according to certain transition probabilities, between different micro-states in the phase space of viable communities. We introduce a spontaneous extinction rate that accounts for gradual changes in external conditions, and upon variations on this control parameter the system undergoes a regime shift with similar features to those previously reported. Under our microscopic viewpoint we recover the main results obtained in previous theoretical and empirical work (anomalous variance, hysteresis cycles, trophic cascades). The model predicts a gradual loss of species in trophic levels from bottom to top near the transition. But more importantly, the spectral analysis of the transition probability matrix allows us to rigorously establish that we are observing the fingerprints, in a finite size system, of a true phase transition driven by background extinctions. |
0802.2967 | Kingsley Cox | Kingsley J.A. Cox, Paul R. Adams | Hebbian Crosstalk Prevents Nonlinear Unsupervised Learning | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning is thought to occur by localized, experience-induced changes in the
strength of synaptic connections between neurons. Recent work has shown that
activity-dependent changes at one connection can affect changes at others
(crosstalk). We studied the role of such crosstalk in nonlinear Hebbian
learning using a neural network implementation of Independent Components
Analysis (ICA). We find that there is a sudden qualitative change in the
performance of the network at a critical crosstalk level and discuss the
implications of this for nonlinear learning from higher-order correlations in
the neocortex.
| [
{
"created": "Thu, 21 Feb 2008 00:49:05 GMT",
"version": "v1"
}
] | 2008-02-22 | [
[
"Cox",
"Kingsley J. A.",
""
],
[
"Adams",
"Paul R.",
""
]
] | Learning is thought to occur by localized, experience-induced changes in the strength of synaptic connections between neurons. Recent work has shown that activity-dependent changes at one connection can affect changes at others (crosstalk). We studied the role of such crosstalk in nonlinear Hebbian learning using a neural network implementation of Independent Components Analysis (ICA). We find that there is a sudden qualitative change in the performance of the network at a critical crosstalk level and discuss the implications of this for nonlinear learning from higher-order correlations in the neocortex. |
1208.2666 | Christoph Adami | Christoph Adami and Arend Hintze | Evolutionary instability of Zero Determinant strategies demonstrates
that winning isn't everything | 14 pages, 4 figures. Change in title (again!) to comply with Nature
Communications requirements. To appear in Nature Communications | Nature Communications 4 (2013) 2193 | 10.1038/ncomms3193 | null | q-bio.PE nlin.AO q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Zero Determinant (ZD) strategies are a new class of probabilistic and
conditional strategies that are able to unilaterally set the expected payoff of
an opponent in iterated plays of the Prisoner's Dilemma irrespective of the
opponent's strategy, or else to set the ratio between a ZD player's and their
opponent's expected payoff. Here we show that while ZD strategies are weakly
dominant, they are not evolutionarily stable and will instead evolve into less
coercive strategies. We show that ZD strategies with an informational advantage
over other players that allows them to recognize other ZD strategies can be
evolutionarily stable (and able to exploit other players). However, such an
advantage is bound to be short-lived as opposing strategies evolve to
counteract the recognition.
| [
{
"created": "Mon, 13 Aug 2012 19:00:24 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Oct 2012 17:59:44 GMT",
"version": "v2"
},
{
"created": "Fri, 21 Jun 2013 14:01:18 GMT",
"version": "v3"
},
{
"created": "Tue, 25 Jun 2013 13:16:31 GMT",
"version": "v4"
}
] | 2013-08-07 | [
[
"Adami",
"Christoph",
""
],
[
"Hintze",
"Arend",
""
]
] | Zero Determinant (ZD) strategies are a new class of probabilistic and conditional strategies that are able to unilaterally set the expected payoff of an opponent in iterated plays of the Prisoner's Dilemma irrespective of the opponent's strategy, or else to set the ratio between a ZD player's and their opponent's expected payoff. Here we show that while ZD strategies are weakly dominant, they are not evolutionarily stable and will instead evolve into less coercive strategies. We show that ZD strategies with an informational advantage over other players that allows them to recognize other ZD strategies can be evolutionarily stable (and able to exploit other players). However, such an advantage is bound to be short-lived as opposing strategies evolve to counteract the recognition. |
1509.01206 | Raimondo D'Ambrosio | Raimondo D'Ambrosio, Clifford L. Eastman, John W. Miller | Inadequate experimental methods and erroneous epilepsy diagnostic
criteria result in confounding acquired focal epilepsy with genetic absence
epilepsy | Supplementary document, 10 pages | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Here we provide a thorough discussion of the study conducted by Rodgers et
al. (J Neurosci. 2015; 35(24):9194-204. doi: 10.1523/JNEUROSCI.0919-15.2015) to
investigate focal seizures and acquired epileptogenesis induced by head injury
in the rat. This manuscript serves as a supplementary document for our letter to
the Editor to appear in the Journal of Neuroscience. We find that the subject
article suffers from poor experimental design, very selective consideration of
antecedent literature, and application of inappropriate epilepsy diagnostic
criteria which, together, lead to unwarranted conclusions.
| [
{
"created": "Thu, 3 Sep 2015 18:48:46 GMT",
"version": "v1"
}
] | 2015-09-04 | [
[
"D'Ambrosio",
"Raimondo",
""
],
[
"Eastman",
"Clifford L.",
""
],
[
"Miller",
"John W.",
""
]
] | Here we provide a thorough discussion of the study conducted by Rodgers et al. (J Neurosci. 2015; 35(24):9194-204. doi: 10.1523/JNEUROSCI.0919-15.2015) to investigate focal seizures and acquired epileptogenesis induced by head injury in the rat. This manuscript serves as a supplementary document for our letter to the Editor to appear in the Journal of Neuroscience. We find that the subject article suffers from poor experimental design, very selective consideration of antecedent literature, and application of inappropriate epilepsy diagnostic criteria which, together, lead to unwarranted conclusions. |
2211.05220 | Christiaan Swanepoel | Christiaan Swanepoel, Mathieu Fourment, Xiang Ji, Hassan Nasif, Marc A
Suchard, Frederick A Matsen IV, Alexei Drummond | TreeFlow: probabilistic programming and automatic differentiation for
phylogenetics | 34 pages, 8 figures | null | null | null | q-bio.PE stat.CO | http://creativecommons.org/licenses/by/4.0/ | Probabilistic programming frameworks are powerful tools for statistical
modelling and inference. They are not immediately generalisable to phylogenetic
problems due to the particular computational properties of the phylogenetic
tree object. TreeFlow is a software library for probabilistic programming and
automatic differentiation with phylogenetic trees. It implements inference
algorithms for phylogenetic tree times and model parameters given a tree
topology. We demonstrate how TreeFlow can be used to quickly implement and
assess new models. We also show that it provides reasonable performance for
gradient-based inference algorithms compared to specialized computational
libraries for phylogenetics.
| [
{
"created": "Wed, 9 Nov 2022 22:04:50 GMT",
"version": "v1"
}
] | 2022-11-11 | [
[
"Swanepoel",
"Christiaan",
""
],
[
"Fourment",
"Mathieu",
""
],
[
"Ji",
"Xiang",
""
],
[
"Nasif",
"Hassan",
""
],
[
"Suchard",
"Marc A",
""
],
[
"Matsen",
"Frederick A",
"IV"
],
[
"Drummond",
"Alexei",
""
... | Probabilistic programming frameworks are powerful tools for statistical modelling and inference. They are not immediately generalisable to phylogenetic problems due to the particular computational properties of the phylogenetic tree object. TreeFlow is a software library for probabilistic programming and automatic differentiation with phylogenetic trees. It implements inference algorithms for phylogenetic tree times and model parameters given a tree topology. We demonstrate how TreeFlow can be used to quickly implement and assess new models. We also show that it provides reasonable performance for gradient-based inference algorithms compared to specialized computational libraries for phylogenetics. |
1510.06794 | Akira Kinjo | Akira R. Kinjo | Liquid-theory analogy of direct-coupling analysis of multiple-sequence
alignment and its implications for protein structure prediction | 3 pages, 1 figure | Biophysics and Physicobiology Vol. 12 (2015) pp. 117-119 | 10.2142/biophysico.12.0_117 | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | The direct-coupling analysis is a powerful method for protein contact
prediction, and enables us to extract "direct" correlations between distant
sites that are latent in "indirect" correlations observed in a protein
multiple-sequence alignment. I show that the direct correlation can be obtained
by using a formulation analogous to the Ornstein-Zernike integral equation in
liquid theory. This formulation intuitively illustrates how the indirect or
apparent correlation arises from an infinite series of direct correlations, and
provides interesting insights into protein structure prediction.
| [
{
"created": "Fri, 23 Oct 2015 00:44:50 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Nov 2015 06:18:26 GMT",
"version": "v2"
}
] | 2015-12-15 | [
[
"Kinjo",
"Akira R.",
""
]
] | The direct-coupling analysis is a powerful method for protein contact prediction, and enables us to extract "direct" correlations between distant sites that are latent in "indirect" correlations observed in a protein multiple-sequence alignment. I show that the direct correlation can be obtained by using a formulation analogous to the Ornstein-Zernike integral equation in liquid theory. This formulation intuitively illustrates how the indirect or apparent correlation arises from an infinite series of direct correlations, and provides interesting insights into protein structure prediction. |
1311.2757 | Michael Baudis MD | Haoyang Cai, Nitin Kumar, Ni Ai, Saumya Gupta, Prisni Rath and Michael
Baudis | Progenetix: 12 years of oncogenomic data curation | Accepted at Nucleic Acid Research (NAR 2014 database issue) | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DNA copy number aberrations (CNAs) can be found in the majority of cancer
genomes, and are crucial for understanding the potential mechanisms underlying
tumor initiation and progression. Since the first release in 2001, the
Progenetix project (http://www.progenetix.org) has provided a reference
resource dedicated to providing the most comprehensive collection of genome-wide
CNA profiles. Reflecting the application of comparative genomic hybridization
(CGH) techniques to tens of thousands of cancer genomes, over the past 12 years
our data curation efforts have resulted in a more than 60-fold increase in the
number of cancer samples presented through Progenetix. In addition, new data
exploration tools and visualization options have been added. In particular, the
gene-specific CNA frequency analysis should facilitate the assignment of cancer
genes to related cancer types. Additionally, the new user file processing
interface allows users to take advantage of the online tools, including various
data representation options for proprietary data pre-publication. In this
update article, we report recent improvements of the database in terms of
content, user interface and online tools.
| [
{
"created": "Tue, 12 Nov 2013 12:40:00 GMT",
"version": "v1"
}
] | 2013-11-13 | [
[
"Cai",
"Haoyang",
""
],
[
"Kumar",
"Nitin",
""
],
[
"Ai",
"Ni",
""
],
[
"Gupta",
"Saumya",
""
],
[
"Rath",
"Prisni",
""
],
[
"Baudis",
"Michael",
""
]
] | DNA copy number aberrations (CNAs) can be found in the majority of cancer genomes, and are crucial for understanding the potential mechanisms underlying tumor initiation and progression. Since the first release in 2001, the Progenetix project (http://www.progenetix.org) has provided a reference resource dedicated to providing the most comprehensive collection of genome-wide CNA profiles. Reflecting the application of comparative genomic hybridization (CGH) techniques to tens of thousands of cancer genomes, over the past 12 years our data curation efforts have resulted in a more than 60-fold increase in the number of cancer samples presented through Progenetix. In addition, new data exploration tools and visualization options have been added. In particular, the gene-specific CNA frequency analysis should facilitate the assignment of cancer genes to related cancer types. Additionally, the new user file processing interface allows users to take advantage of the online tools, including various data representation options for proprietary data pre-publication. In this update article, we report recent improvements of the database in terms of content, user interface and online tools. |
1905.04493 | Thierry Mora | Jacopo Marchi, Ezequiel A. Galpern, Rocio Espada, Diego U. Ferreiro,
Aleksandra M. Walczak, Thierry Mora | Size and structure of the sequence space of repeat proteins | null | PLoS Comput Biol 15(8): e1007282 (2019) | 10.1371/journal.pcbi.1007282 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The coding space of protein sequences is shaped by evolutionary constraints
set by requirements of function and stability. We show that the coding space of
a given protein family--the total number of sequences in that family--can be
estimated using models of maximum entropy trained on multiple sequence
alignments of naturally occurring amino acid sequences. We analyzed and
calculated the size of three abundant repeat protein families, whose members
are large proteins made of many repetitions of conserved portions of ~ 30 amino
acids. While amino acid conservation at each position of the alignment explains
most of the reduction of diversity relative to completely random sequences, we
found that correlations between amino acid usage at different positions
significantly impact that diversity. We quantified the impact of different
types of correlations, functional and evolutionary, on sequence diversity.
Analysis of the detailed structure of the coding space of the families revealed
a rugged landscape, with many local energy minima of varying sizes with a
hierarchical structure, reminiscent of frustrated energy landscapes of spin
glasses in physics. This clustered structure indicates a multiplicity of subtypes
within each family, and suggests new strategies for protein design.
| [
{
"created": "Sat, 11 May 2019 10:23:20 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jul 2019 19:26:05 GMT",
"version": "v2"
}
] | 2020-11-20 | [
[
"Marchi",
"Jacopo",
""
],
[
"Galpern",
"Ezequiel A.",
""
],
[
"Espada",
"Rocio",
""
],
[
"Ferreiro",
"Diego U.",
""
],
[
"Walczak",
"Aleksandra M.",
""
],
[
"Mora",
"Thierry",
""
]
] | The coding space of protein sequences is shaped by evolutionary constraints set by requirements of function and stability. We show that the coding space of a given protein family--the total number of sequences in that family--can be estimated using models of maximum entropy trained on multiple sequence alignments of naturally occurring amino acid sequences. We analyzed and calculated the size of three abundant repeat protein families, whose members are large proteins made of many repetitions of conserved portions of ~ 30 amino acids. While amino acid conservation at each position of the alignment explains most of the reduction of diversity relative to completely random sequences, we found that correlations between amino acid usage at different positions significantly impact that diversity. We quantified the impact of different types of correlations, functional and evolutionary, on sequence diversity. Analysis of the detailed structure of the coding space of the families revealed a rugged landscape, with many local energy minima of varying sizes with a hierarchical structure, reminiscent of frustrated energy landscapes of spin glasses in physics. This clustered structure indicates a multiplicity of subtypes within each family, and suggests new strategies for protein design. |
2005.01804 | Ivan Viola | Ngan Nguyen, Ondrej Strnad, Tobias Klein, Deng Luo, Ruwayda Alharbi,
Peter Wonka, Martina Maritan, Peter Mindek, Ludovic Autin, David S. Goodsell,
Ivan Viola | Modeling in the Time of COVID-19: Statistical and Rule-based Mesoscale
Models | null | null | 10.1109/TVCG.2020.3030415 | null | q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | We present a new technique for rapid modeling and construction of
scientifically accurate mesoscale biological models. Resulting 3D models are
based on a few 2D microscopy scans and the latest knowledge about the biological
entity represented as a set of geometric relationships. Our new technique is
based on statistical and rule-based modeling approaches that are rapid to
author, fast to construct, and easy to revise. From a few 2D microscopy scans,
we learn statistical properties of various structural aspects, such as the
outer membrane shape, spatial properties and distribution characteristics of
the macromolecular elements on the membrane. This information is utilized in 3D
model construction. Once all imaging evidence is incorporated in the model,
additional information can be incorporated by interactively defining rules that
spatially characterize the rest of the biological entity, such as mutual
interactions among macromolecules, their distances and orientations to other
structures. These rules are defined through an intuitive 3D interactive
visualization and modeling feedback loop. We demonstrate the utility of our
approach on a use case of the modeling procedure of the SARS-CoV-2 virus
particle ultrastructure. Its first complete atomistic model, which we present
here, can steer biological research to new promising directions in fighting
the spread of the virus.
| [
{
"created": "Fri, 1 May 2020 15:55:18 GMT",
"version": "v1"
}
] | 2020-10-15 | [
[
"Nguyen",
"Ngan",
""
],
[
"Strnad",
"Ondrej",
""
],
[
"Klein",
"Tobias",
""
],
[
"Luo",
"Deng",
""
],
[
"Alharbi",
"Ruwayda",
""
],
[
"Wonka",
"Peter",
""
],
[
"Maritan",
"Martina",
""
],
[
"Mindek"... | We present a new technique for rapid modeling and construction of scientifically accurate mesoscale biological models. Resulting 3D models are based on few 2D microscopy scans and the latest knowledge about the biological entity represented as a set of geometric relationships. Our new technique is based on statistical and rule-based modeling approaches that are rapid to author, fast to construct, and easy to revise. From a few 2D microscopy scans, we learn statistical properties of various structural aspects, such as the outer membrane shape, spatial properties and distribution characteristics of the macromolecular elements on the membrane. This information is utilized in 3D model construction. Once all imaging evidence is incorporated in the model, additional information can be incorporated by interactively defining rules that spatially characterize the rest of the biological entity, such as mutual interactions among macromolecules, their distances and orientations to other structures. These rules are defined through an intuitive 3D interactive visualization and modeling feedback loop. We demonstrate the utility of our approach on a use case of the modeling procedure of the SARS-CoV-2 virus particle ultrastructure. Its first complete atomistic model, which we present here, can steer biological research to new promising directions in fighting spread of the virus. |
2105.03951 | Genki Ichinose | Azumi Mamiya, Daiki Miyagawa, Genki Ichinose | Conditions for the existence of zero-determinant strategies under
observation errors in repeated games | 25 pages, 4 figures | Journal of Theoretical Biology 526, 110810 (2021) | 10.1016/j.jtbi.2021.110810 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Repeated games are useful models to analyze long term interactions of living
species and complex social phenomena. Zero-determinant (ZD) strategies in
repeated games discovered by Press and Dyson in 2012 enforce a linear payoff
relationship between a focal player and the opponent. This linear relationship
can be set arbitrarily by a ZD player. Hence, a subclass of ZD strategies can
fix the opponent's expected payoff and another subclass of the strategies can
exceed the opponent for the expected payoff. Since this discovery, theories for
ZD strategies have been extended to cope with various natural situations. It is
especially important to consider the theory of ZD strategies for repeated games
with a discount factor and observation errors because it allows the theory to
be applicable in the real world. Recent studies revealed the existence of ZD
strategies even in repeated games with both factors. However, the conditions
for their existence have not been sufficiently analyzed. Here, we mathematically
analyzed the conditions in repeated games with both factors. First, we derived
the thresholds of a discount factor and observation errors which ensure the
existence of Equalizer and positively correlated ZD (pcZD) strategies, which
are well-known subclasses of ZD strategies. We found that ZD strategies exist
only when a discount factor remains high as the error rates increase. Next, we
derived the conditions for the expected payoff of the opponent enforced by
Equalizer as well as the conditions for the slope and baseline payoff of
linear lines enforced by pcZD. As a result, we found that, as error rates
increase or a discount factor decreases, the conditions for the linear line
that Equalizer or pcZD can enforce become stricter.
| [
{
"created": "Sun, 9 May 2021 14:37:19 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Jun 2021 04:47:34 GMT",
"version": "v2"
}
] | 2021-06-29 | [
[
"Mamiya",
"Azumi",
""
],
[
"Miyagawa",
"Daiki",
""
],
[
"Ichinose",
"Genki",
""
]
] | Repeated games are useful models to analyze long term interactions of living species and complex social phenomena. Zero-determinant (ZD) strategies in repeated games discovered by Press and Dyson in 2012 enforce a linear payoff relationship between a focal player and the opponent. This linear relationship can be set arbitrarily by a ZD player. Hence, a subclass of ZD strategies can fix the opponent's expected payoff and another subclass of the strategies can exceed the opponent's expected payoff. Since this discovery, theories of ZD strategies have been extended to cope with various natural situations. It is especially important to consider the theory of ZD strategies for repeated games with a discount factor and observation errors because it allows the theory to be applicable in the real world. Recent studies revealed the existence of ZD strategies even in repeated games with both factors. However, the conditions for their existence have not been sufficiently analyzed. Here, we mathematically analyzed the conditions in repeated games with both factors. First, we derived the thresholds of a discount factor and observation errors which ensure the existence of Equalizer and positively correlated ZD (pcZD) strategies, which are well-known subclasses of ZD strategies. We found that ZD strategies exist only when a discount factor remains high as the error rates increase. Next, we derived the conditions for the expected payoff of the opponent enforced by Equalizer as well as the conditions for the slope and baseline payoff of linear lines enforced by pcZD. As a result, we found that, as error rates increase or a discount factor decreases, the conditions for the linear line that Equalizer or pcZD can enforce become stricter. |
1911.00526 | Homayoun Valafar | P. Shealy, R. Mukhopadhyay, S. Smith, and H. Valafar | Automated Assignment of Backbone Resonances Using Residual Dipolar
Couplings Acquired from a Protein with Known Structure | BioComp 2008, 7 pages | null | null | null | q-bio.BM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Resonance assignment is a critical first step in the investigation of protein
structures using NMR spectroscopy. The development of assignment methods that
require less experimental data is possible with prior knowledge of the
macromolecular structure. Automated methods of performing the task of resonance
assignment can significantly reduce the financial cost and time requirement for
protein structure determination. Such methods can also be beneficial in
validating a protein's solution state structure. Here we present a new approach
to the assignment problem. Our approach uses only RDC data to assign backbone
resonances. It provides simultaneous order tensor estimation and assignment.
Our approach compares independent order tensor estimates to determine when the
correct order tensor has been found. We demonstrate the algorithm's viability
using simulated data from the protein domain 1A1Z.
| [
{
"created": "Fri, 1 Nov 2019 18:01:52 GMT",
"version": "v1"
}
] | 2019-11-05 | [
[
"Shealy",
"P.",
""
],
[
"Mukhopadhyay",
"R.",
""
],
[
"Smith",
"S.",
""
],
[
"Valafar",
"H.",
""
]
] | Resonance assignment is a critical first step in the investigation of protein structures using NMR spectroscopy. The development of assignment methods that require less experimental data is possible with prior knowledge of the macromolecular structure. Automated methods of performing the task of resonance assignment can significantly reduce the financial cost and time requirement for protein structure determination. Such methods can also be beneficial in validating a protein's solution state structure. Here we present a new approach to the assignment problem. Our approach uses only RDC data to assign backbone resonances. It provides simultaneous order tensor estimation and assignment. Our approach compares independent order tensor estimates to determine when the correct order tensor has been found. We demonstrate the algorithm's viability using simulated data from the protein domain 1A1Z. |
2112.03202 | Soumendranath Bhakat | Soumendranath Bhakat | Collective variable discovery in the age of machine learning: reality,
hype and everything in between | full length review written to submit in a peer review journal | null | null | null | q-bio.BM physics.chem-ph stat.ML | http://creativecommons.org/licenses/by/4.0/ | Understanding the kinetics and thermodynamics profiles of biomolecules is
necessary to understand their functional roles, which has a major impact on
mechanism-driven drug discovery. Molecular dynamics simulation has been
routinely used to understand conformational dynamics and molecular recognition
in biomolecules. Statistical analysis of high-dimensional spatiotemporal data
generated from molecular dynamics simulation requires the identification of a
few low-dimensional variables which can describe the essential dynamics of a
system without significant loss of information. In physical chemistry, these
low-dimensional variables are often called collective variables. Collective
variables are used to generate reduced representations of the free energy
surface and to calculate transition probabilities between different metastable
basins. However, the choice of collective variables is not trivial for complex
systems. Collective variables range from geometric criteria such as distances
and dihedral angles to abstract ones such as weighted linear combinations of
multiple geometric variables. The advent of machine learning algorithms has
led to the increasing use of abstract collective variables to represent
biomolecular dynamics. In this review, I will highlight several nuances of
commonly used collective variables, ranging from geometric to abstract ones.
Further, I will put forward some cases where machine learning based collective
variables were used to describe simple systems which in principle could have
been described by geometric ones. Finally, I will put forward my thoughts on
artificial general intelligence and how it can be used to discover and predict
collective variables from spatiotemporal data generated by molecular dynamics
simulations.
| [
{
"created": "Mon, 6 Dec 2021 17:58:53 GMT",
"version": "v1"
}
] | 2021-12-07 | [
[
"Bhakat",
"Soumendranath",
""
]
] | Understanding the kinetics and thermodynamics profiles of biomolecules is necessary to understand their functional roles, which has a major impact on mechanism-driven drug discovery. Molecular dynamics simulation has been routinely used to understand conformational dynamics and molecular recognition in biomolecules. Statistical analysis of high-dimensional spatiotemporal data generated from molecular dynamics simulation requires the identification of a few low-dimensional variables which can describe the essential dynamics of a system without significant loss of information. In physical chemistry, these low-dimensional variables are often called collective variables. Collective variables are used to generate reduced representations of the free energy surface and to calculate transition probabilities between different metastable basins. However, the choice of collective variables is not trivial for complex systems. Collective variables range from geometric criteria such as distances and dihedral angles to abstract ones such as weighted linear combinations of multiple geometric variables. The advent of machine learning algorithms has led to the increasing use of abstract collective variables to represent biomolecular dynamics. In this review, I will highlight several nuances of commonly used collective variables, ranging from geometric to abstract ones. Further, I will put forward some cases where machine learning based collective variables were used to describe simple systems which in principle could have been described by geometric ones. Finally, I will put forward my thoughts on artificial general intelligence and how it can be used to discover and predict collective variables from spatiotemporal data generated by molecular dynamics simulations. |
1807.03541 | Amit Kumar Bedaka | Amit Kumar Bedaka and Ponnusamy Pandithevan | A CT image based finite element modelling to predict the mechanical
behaviour of human arm | International Conference on Biomedical Systems, Signals and Images,
Chennai, India, 2016 | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the present work, complex irregular bones and joints of the complete human
arm were developed in a computer-aided design environment. Finite element
analysis of an actual human arm was done to identify the distribution of stress
using von-Mises stress and maximum principal stress measures. The results
obtained from the present study revealed the region where maximum stress was
developed for different loading and boundary conditions with different joint
rotations as obtained in the actual human arm. This subject-specific analysis
helps to analyse the region of the arm in which the risk is greatest.
| [
{
"created": "Tue, 10 Jul 2018 09:15:56 GMT",
"version": "v1"
}
] | 2018-07-11 | [
[
"Bedaka",
"Amit Kumar",
""
],
[
"Pandithevan",
"Ponnusamy",
""
]
] | In the present work, complex irregular bones and joints of the complete human arm were developed in a computer-aided design environment. Finite element analysis of an actual human arm was done to identify the distribution of stress using von-Mises stress and maximum principal stress measures. The results obtained from the present study revealed the region where maximum stress was developed for different loading and boundary conditions with different joint rotations as obtained in the actual human arm. This subject-specific analysis helps to analyse the region of the arm in which the risk is greatest. |
1510.01070 | Stefania Melillo | Andrea Cavagna, Chiara Creato, Lorenzo Del Castello, Irene Giardina,
Stefania Melillo, Leonardo Parisi and Massimiliano Viale | Error control in the set-up of stereo camera systems for 3d animal
tracking | 14 pages, 9 figures | null | 10.1140/epjst/e2015-50102-3 | null | q-bio.QM cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Three-dimensional tracking of animal systems is the key to the comprehension
of collective behavior. Experimental data collected via a stereo camera system
allow the reconstruction of the 3d trajectories of each individual in the
group. Trajectories can then be used to compute some quantities of interest to
better understand collective motion, such as velocities, distances between
individuals and correlation functions. The reliability of the retrieved
trajectories is strictly related to the accuracy of the 3d reconstruction. In
this paper, we perform a careful analysis of the most significant errors
affecting 3d reconstruction, showing how the accuracy depends on the camera
system set-up and on the precision of the calibration parameters.
| [
{
"created": "Mon, 5 Oct 2015 09:16:50 GMT",
"version": "v1"
}
] | 2016-01-20 | [
[
"Cavagna",
"Andrea",
""
],
[
"Creato",
"Chiara",
""
],
[
"Del Castello",
"Lorenzo",
""
],
[
"Giardina",
"Irene",
""
],
[
"Melillo",
"Stefania",
""
],
[
"Parisi",
"Leonardo",
""
],
[
"Viale",
"Massimiliano",
... | Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters. |
2210.01767 | Long Le | Long Le, Yao Li | Supervised Parameter Estimation of Neuron Populations from Multiple
Firing Events | 31 pages | null | null | null | q-bio.NC cs.LG cs.NE | http://creativecommons.org/licenses/by/4.0/ | The firing dynamics of biological neurons in mathematical models is often
determined by the model's parameters, representing the neurons' underlying
properties. The parameter estimation problem seeks to recover those parameters
of a single neuron or a neuron population from their responses to external
stimuli and interactions between themselves. Most common methods for tackling
this problem in the literature use some mechanistic models in conjunction with
either a simulation-based or solution-based optimization scheme. In this paper,
we study an automatic approach of learning the parameters of neuron populations
from a training set consisting of pairs of spiking series and parameter labels
via supervised learning. Unlike previous work, this automatic learning does not
require additional simulations at inference time nor expert knowledge in
deriving an analytical solution or in constructing some approximate models. We
simulate many neuronal populations with different parameter settings using a
stochastic neuron model. Using that data, we train a variety of supervised
machine learning models, including convolutional and deep neural networks,
random forest, and support vector regression. We then compare their performance
against classical approaches including a genetic search, Bayesian sequential
estimation, and a random walk approximate model. The supervised models almost
always outperform the classical methods in parameter estimation and spike
reconstruction errors, as well as computational expense. The convolutional
neural network, in particular, is the best among all models across all
metrics. The supervised
models can also generalize to out-of-distribution data to a certain extent.
| [
{
"created": "Sun, 2 Oct 2022 03:17:05 GMT",
"version": "v1"
}
] | 2022-10-05 | [
[
"Le",
"Long",
""
],
[
"Li",
"Yao",
""
]
] | The firing dynamics of biological neurons in mathematical models is often determined by the model's parameters, representing the neurons' underlying properties. The parameter estimation problem seeks to recover those parameters of a single neuron or a neuron population from their responses to external stimuli and interactions between themselves. Most common methods for tackling this problem in the literature use some mechanistic models in conjunction with either a simulation-based or solution-based optimization scheme. In this paper, we study an automatic approach of learning the parameters of neuron populations from a training set consisting of pairs of spiking series and parameter labels via supervised learning. Unlike previous work, this automatic learning does not require additional simulations at inference time nor expert knowledge in deriving an analytical solution or in constructing some approximate models. We simulate many neuronal populations with different parameter settings using a stochastic neuron model. Using that data, we train a variety of supervised machine learning models, including convolutional and deep neural networks, random forest, and support vector regression. We then compare their performance against classical approaches including a genetic search, Bayesian sequential estimation, and a random walk approximate model. The supervised models almost always outperform the classical methods in parameter estimation and spike reconstruction errors, as well as computational expense. The convolutional neural network, in particular, is the best among all models across all metrics. The supervised models can also generalize to out-of-distribution data to a certain extent. |
1705.04725 | Andrew J. Rominger | A. J. Rominger, I. Overcast, H. Krehenwinkel, R. G. Gillespie, J.
Harte, M. J. Hickerson | Linking evolutionary and ecological theory illuminates non-equilibrium
biodiversity | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Whether or not biodiversity dynamics tend toward stable equilibria remains an
unsolved question in ecology and evolution with important implications for our
understanding of diversity and its conservation. Phylo/population genetic
models and macroecological theory represent two primary lenses through which we
view biodiversity. While phylo/population genetics provide an averaged view of
changes in demography and diversity over timescales of generations to
geological epochs, macroecology provides an ahistorical description of
commonness and rarity across contemporary co-occurring species. Our goal is to
combine these two approaches to gain novel insights into the non-equilibrium
nature of biodiversity. We help guide near future research with a call for
bioinformatic advances and an outline of quantitative predictions made possible
by our approach.
| [
{
"created": "Fri, 12 May 2017 19:19:48 GMT",
"version": "v1"
}
] | 2017-05-16 | [
[
"Rominger",
"A. J.",
""
],
[
"Overcast",
"I.",
""
],
[
"Krehenwinkel",
"H.",
""
],
[
"Gillespie",
"R. G.",
""
],
[
"Harte",
"J.",
""
],
[
"Hickerson",
"M. J.",
""
]
] | Whether or not biodiversity dynamics tend toward stable equilibria remains an unsolved question in ecology and evolution with important implications for our understanding of diversity and its conservation. Phylo/population genetic models and macroecological theory represent two primary lenses through which we view biodiversity. While phylo/population genetics provide an averaged view of changes in demography and diversity over timescales of generations to geological epochs, macroecology provides an ahistorical description of commonness and rarity across contemporary co-occurring species. Our goal is to combine these two approaches to gain novel insights into the non-equilibrium nature of biodiversity. We help guide near future research with a call for bioinformatic advances and an outline of quantitative predictions made possible by our approach. |
1902.10353 | Chong Yu | Chong Yu, Qiong Liu, Cong Chen and Jin Wang | A physical mechanism of heterogeneity in stem cell, cancer and cancer
stem cell | 7 pages, 2 figures | null | 10.1063/5.0078196 | null | q-bio.MN physics.bio-ph q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Heterogeneity is ubiquitous in stem cells (SC), cancer cells (CS), and cancer
stem cells (CSC). SC and CSC heterogeneity is manifested as diverse
sub-populations with self-renewing and unique regeneration capacity. Moreover,
the CSC progeny possesses multiple plasticity and cancerous characteristics.
Many studies have demonstrated that cancer heterogeneity is one of the greatest
obstacles to therapy. This leads to incomplete anti-cancer therapies and
transitory efficacy. Furthermore, numerous micro-metastases lead to the wide
spread of tumor cells across the body, which is the beginning of metastasis.
The epigenetic processes (DNA methylation or histone modification, etc.) can
provide a source for certain heterogeneity. In this study, we develop a
mathematical model to quantify the heterogeneity of SC, CSC and cancer taking
both genetic and epigenetic effects into consideration. We uncovered the roles
and physical mechanisms of heterogeneity from the three aspects (SC, CSC and
cancer). In the adiabatic regime (relatively fast regulatory binding and
effective coupling among genes), seven native states (SC, CSC, Cancer,
Premalignant, Normal, Lesion and Hyperplasia) emerge. In the non-adiabatic
regime (relatively slow regulatory binding and effective weak coupling among
genes), multiple meta-stable SC, CS, CSC and differentiated states emerge,
which can
explain the origin of heterogeneity. In other words, the slow regulatory
binding mimicking the epigenetics can give rise to heterogeneity. Elucidating
the origin of heterogeneity and dynamical interrelationship between
intra-tumoral cells has clear clinical significance in helping to understand
the cellular basis of treatment response, therapeutic resistance, and tumor
relapse.
| [
{
"created": "Wed, 27 Feb 2019 06:35:23 GMT",
"version": "v1"
}
] | 2024-06-19 | [
[
"Yu",
"Chong",
""
],
[
"Liu",
"Qiong",
""
],
[
"Chen",
"Cong",
""
],
[
"Wang",
"Jin",
""
]
] | Heterogeneity is ubiquitous in stem cells (SC), cancer cells (CS), and cancer stem cells (CSC). SC and CSC heterogeneity is manifested as diverse sub-populations with self-renewing and unique regeneration capacity. Moreover, the CSC progeny possesses multiple plasticity and cancerous characteristics. Many studies have demonstrated that cancer heterogeneity is one of the greatest obstacles to therapy. This leads to incomplete anti-cancer therapies and transitory efficacy. Furthermore, numerous micro-metastases lead to the wide spread of tumor cells across the body, which is the beginning of metastasis. The epigenetic processes (DNA methylation or histone modification, etc.) can provide a source for certain heterogeneity. In this study, we develop a mathematical model to quantify the heterogeneity of SC, CSC and cancer taking both genetic and epigenetic effects into consideration. We uncovered the roles and physical mechanisms of heterogeneity from the three aspects (SC, CSC and cancer). In the adiabatic regime (relatively fast regulatory binding and effective coupling among genes), seven native states (SC, CSC, Cancer, Premalignant, Normal, Lesion and Hyperplasia) emerge. In the non-adiabatic regime (relatively slow regulatory binding and effective weak coupling among genes), multiple meta-stable SC, CS, CSC and differentiated states emerge, which can explain the origin of heterogeneity. In other words, the slow regulatory binding mimicking the epigenetics can give rise to heterogeneity. Elucidating the origin of heterogeneity and dynamical interrelationship between intra-tumoral cells has clear clinical significance in helping to understand the cellular basis of treatment response, therapeutic resistance, and tumor relapse. |
1505.06021 | Karan Pattni | Karan Pattni, Mark Broom, Jan Rychtar, Lara J. Silvers | Evolutionary graph theory revisited: general dynamics and the Moran
process | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evolution in finite populations is often modelled using the classical Moran
process. Over the last ten years this methodology has been extended to
structured populations using evolutionary graph theory. An important question
in any such population is whether a rare mutant has a higher or lower chance
of fixating (the fixation probability) than the Moran probability, i.e. that
from the original Moran model, which represents an unstructured population. As
evolutionary graph theory has developed, different ways of considering the
interactions between individuals through a graph and an associated matrix of
weights have been considered, as have a number of important dynamics. In this
paper we revisit the original paper on evolutionary graph theory in light of
these extensions to consider these developments in an integrated way. In
particular we find general criteria for when an evolutionary graph with general
weights satisfies the Moran probability for the set of six common evolutionary
dynamics.
| [
{
"created": "Fri, 22 May 2015 10:40:21 GMT",
"version": "v1"
}
] | 2015-05-25 | [
[
"Pattni",
"Karan",
""
],
[
"Broom",
"Mark",
""
],
[
"Rychtar",
"Jan",
""
],
[
"Silvers",
"Lara J.",
""
]
] | Evolution in finite populations is often modelled using the classical Moran process. Over the last ten years this methodology has been extended to structured populations using evolutionary graph theory. An important question in any such population is whether a rare mutant has a higher or lower chance of fixating (the fixation probability) than the Moran probability, i.e. that from the original Moran model, which represents an unstructured population. As evolutionary graph theory has developed, different ways of considering the interactions between individuals through a graph and an associated matrix of weights have been considered, as have a number of important dynamics. In this paper we revisit the original paper on evolutionary graph theory in light of these extensions to consider these developments in an integrated way. In particular we find general criteria for when an evolutionary graph with general weights satisfies the Moran probability for the set of six common evolutionary dynamics. |
1903.04907 | Masoud Farahmand | Masoud Farahmand, Minoo N. Kavarana, Ethan O. Kung | Risks and Benefits of Using a Commercially Available Ventricular Assist
Device for Failing Fontan Cavopulmonary Support: A Modeling Investigation | null | null | 10.1109/TBME.2019.2911470 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fontan patients often develop circulatory failure and are in desperate need
of a therapeutic solution. A blood pump surgically placed in the cavopulmonary
pathway can substitute the function of the absent sub-pulmonary ventricle by
generating a mild pressure boost. However, there is currently no commercially
available device designed for the cavopulmonary application; and the risks and
benefits of implanting a ventricular assist device (VAD) originally designed
for the left ventricular application on the right circulation of failing Fontan
patients are not yet clear. Moreover, further research is needed to compare the
hemodynamics between the two clinically-considered surgical configurations
(Full Support and IVC Support) for cavopulmonary assist, with Full and IVC
Support corresponding to the entire venous return or only the inferior venous
return, respectively, being routed through the VAD. In this study, we used a
numerical model of the failing Fontan physiology to evaluate the Fontan
hemodynamic response to a left VAD during the IVC and Full supports. We
observed that during the Full support the VAD improved the cardiac output while
maintaining blood pressures within safe ranges, and lowered the IVC pressure to
<15mmHg; however, we found a potential risk of lung damage at higher pump
speeds due to the excessive pulmonary pressure elevation. IVC Support, on the
other hand, did not benefit the hemodynamics of the example failing Fontan patients,
resulting in the superior vena cava pressure increasing to an unsafe level of
>20 mmHg. The findings in this study may be helpful to surgeons for recognizing
the risks of a cavopulmonary VAD and developing coherent clinical strategies
for the implementation of cavopulmonary support.
| [
{
"created": "Sat, 9 Mar 2019 19:28:34 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Apr 2019 16:50:39 GMT",
"version": "v2"
}
] | 2019-04-19 | [
[
"Farahmand",
"Masoud",
""
],
[
"Kavarana",
"Minoo N.",
""
],
[
"Kung",
"Ethan O.",
""
]
] | Fontan patients often develop circulatory failure and are in desperate need of a therapeutic solution. A blood pump surgically placed in the cavopulmonary pathway can substitute the function of the absent sub-pulmonary ventricle by generating a mild pressure boost. However, there is currently no commercially available device designed for the cavopulmonary application; and the risks and benefits of implanting a ventricular assist device (VAD) originally designed for the left ventricular application on the right circulation of failing Fontan patients are not yet clear. Moreover, further research is needed to compare the hemodynamics between the two clinically-considered surgical configurations (Full Support and IVC Support) for cavopulmonary assist, with Full and IVC Support corresponding to the entire venous return or only the inferior venous return, respectively, being routed through the VAD. In this study, we used a numerical model of the failing Fontan physiology to evaluate the Fontan hemodynamic response to a left VAD during the IVC and Full supports. We observed that during the Full support the VAD improved the cardiac output while maintaining blood pressures within safe ranges, and lowered the IVC pressure to <15mmHg; however, we found a potential risk of lung damage at higher pump speeds due to the excessive pulmonary pressure elevation. IVC Support, on the other hand, did not benefit the hemodynamics of the example failing Fontan patients, resulting in the superior vena cava pressure increasing to an unsafe level of >20 mmHg. The findings in this study may be helpful to surgeons for recognizing the risks of a cavopulmonary VAD and developing coherent clinical strategies for the implementation of cavopulmonary support. |
2004.14883 | Feng Fu | Elizabeth A. Tripp and Feng Fu and Scott D. Pauls | Evolutionary Kuramoto Dynamics | 38 pages, 2 figures. Comments are welcome | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Common models of synchronizable oscillatory systems consist of a collection
of coupled oscillators governed by a collection of differential equations. The
ubiquitous Kuramoto models rely on an {\em a priori} fixed connectivity pattern
that facilitates mutual communication and influence between oscillators. In
biological synchronizable systems, like the mammalian suprachiasmatic nucleus,
enabling communication comes at a cost -- the organism expends energy creating
and maintaining the system -- linking their development to evolutionary
selection. Here, we introduce and analyze a new evolutionary game theoretic
framework modeling the behavior and evolution of systems of coupled
oscillators. Each oscillator in our model is characterized by a pair of dynamic
behavioral traits: an oscillatory phase and whether they connect and
communicate to other oscillators or not. Evolution of the system occurs along
these dimensions, allowing oscillators to change their phases and/or their
communication strategies. We measure success of mutations by comparing the
benefit of phase synchronization to the organism balanced against the cost of
creating and maintaining connections between the oscillators. Despite such a
simple setup, this system exhibits a wealth of nontrivial behaviors, mimicking
different classical games -- the Prisoner's Dilemma, the snowdrift game, and
coordination games -- as the landscape of the oscillators changes over time.
Despite such complexity, we find a surprisingly simple characterization of
synchronization through connectivity and communication: if the benefit of
synchronization $B(0)$ is greater than twice the cost $c$, $B(0) > 2c$, the
organism will evolve towards complete communication and phase synchronization.
Taken together, our model demonstrates possible evolutionary constraints on
both the existence of a synchronized oscillatory system and its overall
connectivity.
| [
{
"created": "Thu, 30 Apr 2020 15:37:12 GMT",
"version": "v1"
}
] | 2020-05-01 | [
[
"Tripp",
"Elizabeth A.",
""
],
[
"Fu",
"Feng",
""
],
[
"Pauls",
"Scott D.",
""
]
] | Common models of synchronizable oscillatory systems consist of a collection of coupled oscillators governed by a collection of differential equations. The ubiquitous Kuramoto models rely on an {\em a priori} fixed connectivity pattern that facilitates mutual communication and influence between oscillators. In biological synchronizable systems, like the mammalian suprachiasmatic nucleus, enabling communication comes at a cost -- the organism expends energy creating and maintaining the system -- linking their development to evolutionary selection. Here, we introduce and analyze a new evolutionary game theoretic framework modeling the behavior and evolution of systems of coupled oscillators. Each oscillator in our model is characterized by a pair of dynamic behavioral traits: an oscillatory phase and whether they connect and communicate to other oscillators or not. Evolution of the system occurs along these dimensions, allowing oscillators to change their phases and/or their communication strategies. We measure success of mutations by comparing the benefit of phase synchronization to the organism balanced against the cost of creating and maintaining connections between the oscillators. Despite such a simple setup, this system exhibits a wealth of nontrivial behaviors, mimicking different classical games -- the Prisoner's Dilemma, the snowdrift game, and coordination games -- as the landscape of the oscillators changes over time. Despite such complexity, we find a surprisingly simple characterization of synchronization through connectivity and communication: if the benefit of synchronization $B(0)$ is greater than twice the cost $c$, $B(0) > 2c$, the organism will evolve towards complete communication and phase synchronization. Taken together, our model demonstrates possible evolutionary constraints on both the existence of a synchronized oscillatory system and its overall connectivity. |
2202.00861 | Weijiu Liu | Weijiu Liu | An Age-dependent Feedback Control Model for Calcium and Reactive Oxygen
Species in Yeast Cells | null | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | Calcium and reactive oxygen species (ROS) interact with each other and play
an important role in cell signaling networks. Based on the existing
mathematical models, we develop an age-dependent feedback control model to
simulate the interaction. The model consists of three subsystems: cytosolic
calcium dynamics, ROS generation from the respiratory chain in mitochondria,
and mitochondrial energy metabolism. In the model, we hypothesized that ROS
induces calcium release from the yeast endoplasmic reticulum, Golgi apparatus,
and vacuoles, and that ROS damages calmodulin and calcineurin by oxidizing
them. The dependence of calcium uptake by Vcx1p on ATP is incorporated into the
model. The model can approximately reproduce the log phase calcium dynamics.
The simulated interaction between the cytosolic calcium and mitochondrial ROS
shows that an increase in calcium results in a decrease in ROS initially (in
log phase), but the increase-decrease relation is changed to an
increase-increase relation when the cell is getting old. This could accord with
the experimental observation that calcium diminishes ROS from complexes I and
III of the respiratory chain under normal conditions, but enhances ROS when the
complex formations are inhibited. The model predicts that the subsystem of the
calcium regulators Pmc1p, Pmr1p, and Vcx1p is stable, controllable, and
observable. These structural properties of the dynamical system could
mathematically confirm that cells have evolved delicate feedback control
mechanisms to maintain their calcium homeostasis.
| [
{
"created": "Wed, 2 Feb 2022 03:02:43 GMT",
"version": "v1"
}
] | 2022-02-03 | [
[
"Liu",
"Weijiu",
""
]
] | Calcium and reactive oxygen species (ROS) interact with each other and play an important role in cell signaling networks. Based on the existing mathematical models, we develop an age-dependent feedback control model to simulate the interaction. The model consists of three subsystems: cytosolic calcium dynamics, ROS generation from the respiratory chain in mitochondria, and mitochondrial energy metabolism. In the model, we hypothesized that ROS induces calcium release from the yeast endoplasmic reticulum, Golgi apparatus, and vacuoles, and that ROS damages calmodulin and calcineurin by oxidizing them. The dependence of calcium uptake by Vcx1p on ATP is incorporated into the model. The model can approximately reproduce the log phase calcium dynamics. The simulated interaction between the cytosolic calcium and mitochondrial ROS shows that an increase in calcium results in a decrease in ROS initially (in log phase), but the increase-decrease relation is changed to an increase-increase relation when the cell is getting old. This could accord with the experimental observation that calcium diminishes ROS from complexes I and III of the respiratory chain under normal conditions, but enhances ROS when the complex formations are inhibited. The model predicts that the subsystem of the calcium regulators Pmc1p, Pmr1p, and Vcx1p is stable, controllable, and observable. These structural properties of the dynamical system could mathematically confirm that cells have evolved delicate feedback control mechanisms to maintain their calcium homeostasis. |
2303.07683 | Javier Diaz | Javier D\'iaz, Hiroyasu Ando, GoEun Han, Olga Malyshevskaya, Xifang
Hayashi, Juan-Carlos Letelier, Masashi Yanagisawa, Kaspar E. Vogt | Recovering Arrhythmic EEG Transients from Their Stochastic Interference | Original research manuscript in PDF format, 46 pages long, with 13
figures and one table | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Traditionally, the neuronal dynamics underlying electroencephalograms (EEG)
have been understood as arising from \textit{rhythmic oscillators with varying
degrees of synchronization}. This dominant metaphor employs frequency domain
EEG analysis to identify the most prominent populations of neuronal current
sources in terms of their frequency and spectral power. However, emerging
perspectives on EEG highlight its arrhythmic nature, which is primarily
inferred from broadband EEG properties like the ubiquitous $1/f$ spectrum. In
the present study, we use an \textit{arrhythmic superposition of pulses} as a
metaphor to explain the origin of EEG. This conceptualization has a fundamental
problem because the interference produced by the superpositions of pulses
generates colored Gaussian noise, masking the temporal profile of the
generating pulse. We solved this problem by developing a mathematical method
involving the derivative of the autocovariance function to recover excellent
approximations of the underlying pulses, significantly extending the analysis
of this type of stochastic process. When the method is applied to spontaneous
mouse EEG sampled at $5$ kHz during the sleep-wake cycle, specific patterns --
called $\Psi$-patterns -- characterizing NREM sleep, REM sleep, and wakefulness
are revealed. $\Psi$-patterns can be understood theoretically as \textit{power
density in the time domain} and correspond to combinations of generating pulses
at different time scales. Remarkably, we report the first EEG
wakefulness-specific feature, which corresponds to an ultra-fast ($\sim 1$ ms)
transient component of the observed patterns. By shifting the paradigm of EEG
genesis from oscillators to random pulse generators, our theoretical framework
pushes the boundaries of traditional Fourier-based EEG analysis, paving the way
for new insights into the arrhythmic components of neural dynamics.
| [
{
"created": "Tue, 14 Mar 2023 07:53:28 GMT",
"version": "v1"
}
] | 2023-03-15 | [
[
"Díaz",
"Javier",
""
],
[
"Ando",
"Hiroyasu",
""
],
[
"Han",
"GoEun",
""
],
[
"Malyshevskaya",
"Olga",
""
],
[
"Hayashi",
"Xifang",
""
],
[
"Letelier",
"Juan-Carlos",
""
],
[
"Yanagisawa",
"Masashi",
""
]... | Traditionally, the neuronal dynamics underlying electroencephalograms (EEG) have been understood as arising from \textit{rhythmic oscillators with varying degrees of synchronization}. This dominant metaphor employs frequency domain EEG analysis to identify the most prominent populations of neuronal current sources in terms of their frequency and spectral power. However, emerging perspectives on EEG highlight its arrhythmic nature, which is primarily inferred from broadband EEG properties like the ubiquitous $1/f$ spectrum. In the present study, we use an \textit{arrhythmic superposition of pulses} as a metaphor to explain the origin of EEG. This conceptualization has a fundamental problem because the interference produced by the superpositions of pulses generates colored Gaussian noise, masking the temporal profile of the generating pulse. We solved this problem by developing a mathematical method involving the derivative of the autocovariance function to recover excellent approximations of the underlying pulses, significantly extending the analysis of this type of stochastic processes. When the method is applied to spontaneous mouse EEG sampled at $5$ kHz during the sleep-wake cycle, specific patterns -- called $\Psi$-patterns -- characterizing NREM sleep, REM sleep, and wakefulness are revealed. $\Psi$-patterns can be understood theoretically as \textit{power density in the time domain} and correspond to combinations of generating pulses at different time scales. Remarkably, we report the first EEG wakefulness-specific feature, which corresponds to an ultra-fast ($\sim 1$ ms) transient component of the observed patterns. By shifting the paradigm of EEG genesis from oscillators to random pulse generators, our theoretical framework pushes the boundaries of traditional Fourier-based EEG analysis, paving the way for new insights into the arrhythmic components of neural dynamics. |
2105.05520 | Leonardo Trujillo | Leonardo Trujillo, Paul Banse, Guillaume Beslon | Simulating short- and long-term evolutionary dynamics on rugged
landscapes | 9 pages, 6 figures, The ALIFE conference 2021 | null | null | null | q-bio.PE cond-mat.dis-nn nlin.AO | http://creativecommons.org/licenses/by/4.0/ | We propose a minimal model to simulate long waiting times followed by
evolutionary bursts on rugged landscapes. It combines point and inversion-like
mutations as sources of genetic variation. The inversions are intended to
simulate one of the main chromosomal rearrangements. Using the well-known
family of NK fitness landscapes, we simulate random adaptive walks, i.e.
successive mutational events constrained to incremental fitness selection. We
report the emergence of different time scales: a short-term dynamics mainly
driven by point mutations, followed by a long-term (stasis-like) waiting period
until a new mutation arises. This new mutation is an inversion which can
trigger a burst of successive point mutations, and then drives the system to a
new short-term increasing-fitness period. We analyse the effect of epistatic
interactions among genes on the evolutionary time scales. We suggest that the
present model mimics the process of evolutionary innovation and punctuated
equilibrium.
| [
{
"created": "Wed, 12 May 2021 08:56:01 GMT",
"version": "v1"
}
] | 2021-05-13 | [
[
"Trujillo",
"Leonardo",
""
],
[
"Banse",
"Paul",
""
],
[
"Beslon",
"Guillaume",
""
]
] | We propose a minimal model to simulate long waiting times followed by evolutionary bursts on rugged landscapes. It combines point and inversion-like mutations as sources of genetic variation. The inversions are intended to simulate one of the main chromosomal rearrangements. Using the well-known family of NK fitness landscapes, we simulate random adaptive walks, i.e. successive mutational events constrained to incremental fitness selection. We report the emergence of different time scales: a short-term dynamics mainly driven by point mutations, followed by a long-term (stasis-like) waiting period until a new mutation arises. This new mutation is an inversion which can trigger a burst of successive point mutations, and then drives the system to a new short-term increasing-fitness period. We analyse the effect of epistatic interactions among genes on the evolutionary time scales. We suggest that the present model mimics the process of evolutionary innovation and punctuated equilibrium. |
1811.07623 | Samir Suweis Dr. | Niccolo Anceschi, Jorge Hidalgo, Tommaso Bellini, Amos Maritan and
Samir Suweis | How neutral and niche forces contribute to speciation processes? | 8 pages, 5 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The evolutionary and ecological processes behind the origin of species are
among the most fundamental problems in biology. In fact, many theoretical
hypotheses on different types of speciation have been proposed. In particular,
models of sympatric speciation, leading to the formation of new species without
geographical isolation, are based on the niche hypothesis: the diversification
of the population is induced by the competition for a limited set of the
available resources. On the other hand, neutral models of evolution have shown
that stochastic forces are sufficient to generate coexistence of different
species. In this work, we bring this dichotomy to the context of species
formation, and we study how neutral and niche forces contribute to sympatric
speciation in a model ecosystem. In particular, we study the evolution of a
population of individuals with asexual reproduction whose inherited characters
or phenotypes are specified by both niche-based and neutral traits. We analyse
the stationary state of the dynamics, and study the distribution of individuals
in the whole space of possible phenotypes. We show, both by numerical
simulations and analytics, that there is a non-trivial coupling between neutral
and niche forces induced by stochastic effects in the evolution of the
population that allows the formation of clusters (i.e., species) in the
phenotypic space. Our framework can also be generalised to sexual reproduction
or other types of population dynamics.
| [
{
"created": "Mon, 19 Nov 2018 11:32:24 GMT",
"version": "v1"
}
] | 2018-11-20 | [
[
"Anceschi",
"Niccolo",
""
],
[
"Hidalgo",
"Jorge",
""
],
[
"Bellini",
"Tommaso",
""
],
[
"Maritan",
"Amos",
""
],
[
"Suweis",
"Samir",
""
]
] | The evolutionary and ecological processes behind the origin of species are among the most fundamental problems in biology. In fact, many theoretical hypotheses on different types of speciation have been proposed. In particular, models of sympatric speciation, leading to the formation of new species without geographical isolation, are based on the niche hypothesis: the diversification of the population is induced by the competition for a limited set of the available resources. On the other hand, neutral models of evolution have shown that stochastic forces are sufficient to generate coexistence of different species. In this work, we bring this dichotomy to the context of species formation, and we study how neutral and niche forces contribute to sympatric speciation in a model ecosystem. In particular, we study the evolution of a population of individuals with asexual reproduction whose inherited characters or phenotypes are specified by both niche-based and neutral traits. We analyse the stationary state of the dynamics, and study the distribution of individuals in the whole space of possible phenotypes. We show, both by numerical simulations and analytics, that there is a non-trivial coupling between neutral and niche forces induced by stochastic effects in the evolution of the population that allows the formation of clusters (i.e., species) in the phenotypic space. Our framework can also be generalised to sexual reproduction or other types of population dynamics. |
2204.06939 | Chen Wang | Chen Wang, Sida Chen, Liang Huang and Lianchun Yu | Prediction and Control of Focal Seizure Spread: Random Walk with Restart
on Heterogeneous Brain Networks | null | null | 10.1103/PhysRevE.105.064412 | null | q-bio.NC physics.bio-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Whole-brain models offer a promising method of predicting seizure spread,
which is critical for successful surgical treatment of focal epilepsy. Existing
methods are largely based on the structural connectome, which ignores the effects
of heterogeneity in regional excitability of brains. In this study, we used a
whole-brain model to show that heterogeneity in nodal excitability had a
significant impact on seizure propagation in the networks, and compromised the
prediction accuracy with structural connections. We then addressed this problem
with an algorithm based on random walk with restart on graphs. We demonstrated
that by establishing a relationship between the restarting probability and the
excitability for each node, this algorithm could significantly improve the
seizure spread prediction accuracy in heterogeneous networks, and was more
robust against the extent of heterogeneity. We also strategized surgical
seizure control as a process to identify and remove the key nodes (connections)
responsible for the early spread of seizures from the focal region. Compared to
strategies based on structural connections, virtual surgery with a strategy
based on mRWER generated outcomes with a high success rate while maintaining
low damage to the brain by removing fewer anatomical connections. These
findings may have potential applications in developing personalized surgery
strategies for epilepsy.
| [
{
"created": "Thu, 14 Apr 2022 13:13:32 GMT",
"version": "v1"
}
] | 2022-07-13 | [
[
"Wang",
"Chen",
""
],
[
"Chen",
"Sida",
""
],
[
"Huang",
"Liang",
""
],
[
"Yu",
"Lianchun",
""
]
] | Whole-brain models offer a promising method of predicting seizure spread, which is critical for successful surgical treatment of focal epilepsy. Existing methods are largely based on the structural connectome, which ignores the effects of heterogeneity in regional excitability of brains. In this study, we used a whole-brain model to show that heterogeneity in nodal excitability had a significant impact on seizure propagation in the networks, and compromised the prediction accuracy with structural connections. We then addressed this problem with an algorithm based on random walk with restart on graphs. We demonstrated that by establishing a relationship between the restarting probability and the excitability for each node, this algorithm could significantly improve the seizure spread prediction accuracy in heterogeneous networks, and was more robust against the extent of heterogeneity. We also strategized surgical seizure control as a process to identify and remove the key nodes (connections) responsible for the early spread of seizures from the focal region. Compared to strategies based on structural connections, virtual surgery with a strategy based on mRWER generated outcomes with a high success rate while maintaining low damage to the brain by removing fewer anatomical connections. These findings may have potential applications in developing personalized surgery strategies for epilepsy. |
1804.03951 | Lorenzo Contento | Lorenzo Contento, Masayasu Mimura | Complex pattern formation driven by the interaction of stable fronts in
a competition-diffusion system | null | null | null | null | q-bio.PE nlin.PS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ecological invasion problem in which a weaker exotic species invades an
ecosystem inhabited by two strongly competing native species is modelled by a
three-species competition-diffusion system. It is known that for a certain
range of parameter values competitor-mediated coexistence occurs and complex
spatio-temporal patterns are observed in two spatial dimensions. In this paper
we uncover the mechanism which generates such patterns. Under some assumptions
on the parameters the three-species competition-diffusion system admits two
planarly stable travelling waves. Their interaction in one spatial dimension
may result in either reflection or merging into a single homoclinic wave,
depending on the strength of the invading species. This transition can be
understood by studying the bifurcation structure of the homoclinic wave. In
particular, a time-periodic homoclinic wave (breathing wave) is born from a
Hopf bifurcation and its unstable branch acts as a separator between the
reflection and merging regimes. The same transition occurs in two spatial
dimensions: the stable regular spiral associated to the homoclinic wave
destabilizes, giving rise first to an oscillating breathing spiral and then
breaking up producing a dynamic pattern characterized by many spiral cores. We
find that these complex patterns are generated by the interaction of two
planarly stable travelling waves, in contrast with many other well known cases
of pattern formation where planar instability plays a central role.
| [
{
"created": "Wed, 11 Apr 2018 12:10:50 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Oct 2018 13:34:09 GMT",
"version": "v2"
}
] | 2018-11-01 | [
[
"Contento",
"Lorenzo",
""
],
[
"Mimura",
"Masayasu",
""
]
] | The ecological invasion problem in which a weaker exotic species invades an ecosystem inhabited by two strongly competing native species is modelled by a three-species competition-diffusion system. It is known that for a certain range of parameter values competitor-mediated coexistence occurs and complex spatio-temporal patterns are observed in two spatial dimensions. In this paper we uncover the mechanism which generates such patterns. Under some assumptions on the parameters the three-species competition-diffusion system admits two planarly stable travelling waves. Their interaction in one spatial dimension may result in either reflection or merging into a single homoclinic wave, depending on the strength of the invading species. This transition can be understood by studying the bifurcation structure of the homoclinic wave. In particular, a time-periodic homoclinic wave (breathing wave) is born from a Hopf bifurcation and its unstable branch acts as a separator between the reflection and merging regimes. The same transition occurs in two spatial dimensions: the stable regular spiral associated to the homoclinic wave destabilizes, giving rise first to an oscillating breathing spiral and then breaking up producing a dynamic pattern characterized by many spiral cores. We find that these complex patterns are generated by the interaction of two planarly stable travelling waves, in contrast with many other well known cases of pattern formation where planar instability plays a central role. |
1602.03214 | Julia Walk | Julia C. Walk, Bruce P. Ayati, Sarah A. Holstein | Modeling the Effects of Multiple Myeloma on Kidney Function | Included version of model without tumor with steady-state analysis,
corrected equations for free light chains and renal fibroblasts in model with
tumor to reflect steady-state analysis, updated abstract, updated and added
references | null | 10.1038/s41598-018-38129-7 | null | q-bio.TO q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple myeloma (MM), a plasma cell cancer, is associated with many health
challenges, including damage to the kidney by tubulointerstitial fibrosis. We
develop a mathematical model which captures the qualitative behavior of the
cell and protein populations involved. Specifically, we model the interaction
between cells in the proximal tubule of the kidney, free light chains, renal
fibroblasts, and myeloma cells. We analyze the model for steady-state solutions
to find a mathematically and biologically relevant stable steady-state
solution. This foundational model provides a representation of dynamics between
key populations in tubulointerstitial fibrosis that demonstrates how these
populations interact to affect patient prognosis in patients with MM and renal
impairment.
| [
{
"created": "Tue, 9 Feb 2016 22:33:19 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Jul 2018 16:12:48 GMT",
"version": "v2"
}
] | 2023-02-14 | [
[
"Walk",
"Julia C.",
""
],
[
"Ayati",
"Bruce P.",
""
],
[
"Holstein",
"Sarah A.",
""
]
] | Multiple myeloma (MM), a plasma cell cancer, is associated with many health challenges, including damage to the kidney by tubulointerstitial fibrosis. We develop a mathematical model which captures the qualitative behavior of the cell and protein populations involved. Specifically, we model the interaction between cells in the proximal tubule of the kidney, free light chains, renal fibroblasts, and myeloma cells. We analyze the model for steady-state solutions to find a mathematically and biologically relevant stable steady-state solution. This foundational model provides a representation of dynamics between key populations in tubulointerstitial fibrosis that demonstrates how these populations interact to affect patient prognosis in patients with MM and renal impairment. |
2104.02957 | Ines Pereira | In\^es Pereira, Stefan Fr\"assle, Jakob Heinzle, Dario Sch\"obi, Cao
Tri Do, Moritz Gruber, Klaas E. Stephan | Conductance-based Dynamic Causal Modeling: A mathematical review of its
application to cross-power spectral densities | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic Causal Modeling (DCM) is a Bayesian framework for inferring on hidden
(latent) neuronal states, based on measurements of brain activity. Since its
introduction in 2003 for functional magnetic resonance imaging data, DCM has
been extended to electrophysiological data, and several variants have been
developed. Their biophysically motivated formulations make these models
promising candidates for providing a mechanistic understanding of human brain
dynamics, both in health and disease. However, due to their complexity and
reliance on concepts from several fields, fully understanding the mathematical
and conceptual basis behind certain variants of DCM can be challenging. At the
same time, a solid theoretical knowledge of the models is crucial to avoid
pitfalls in the application of these models and interpretation of their
results. In this paper, we focus on one of the most advanced formulations of
DCM, i.e. conductance-based DCM for cross-spectral densities, whose components
are described across multiple technical papers. The aim of the present article
is to provide an accessible exposition of the mathematical background, together
with an illustration of the model's behavior. To this end, we include
step-by-step derivations of the model equations, point to important aspects in
the software implementation of those models, and use simulations to provide an
intuitive understanding of the type of responses that can be generated and the
role that specific parameters play in the model. Furthermore, all code utilized
for our simulations is made publicly available alongside the manuscript to
allow readers an easy hands-on experience with conductance-based DCM.
| [
{
"created": "Wed, 7 Apr 2021 07:17:00 GMT",
"version": "v1"
}
] | 2021-04-08 | [
[
"Pereira",
"Inês",
""
],
[
"Frässle",
"Stefan",
""
],
[
"Heinzle",
"Jakob",
""
],
[
"Schöbi",
"Dario",
""
],
[
"Do",
"Cao Tri",
""
],
[
"Gruber",
"Moritz",
""
],
[
"Stephan",
"Klaas E.",
""
]
] | Dynamic Causal Modeling (DCM) is a Bayesian framework for inferring on hidden (latent) neuronal states, based on measurements of brain activity. Since its introduction in 2003 for functional magnetic resonance imaging data, DCM has been extended to electrophysiological data, and several variants have been developed. Their biophysically motivated formulations make these models promising candidates for providing a mechanistic understanding of human brain dynamics, both in health and disease. However, due to their complexity and reliance on concepts from several fields, fully understanding the mathematical and conceptual basis behind certain variants of DCM can be challenging. At the same time, a solid theoretical knowledge of the models is crucial to avoid pitfalls in the application of these models and interpretation of their results. In this paper, we focus on one of the most advanced formulations of DCM, i.e. conductance-based DCM for cross-spectral densities, whose components are described across multiple technical papers. The aim of the present article is to provide an accessible exposition of the mathematical background, together with an illustration of the model's behavior. To this end, we include step-by-step derivations of the model equations, point to important aspects in the software implementation of those models, and use simulations to provide an intuitive understanding of the type of responses that can be generated and the role that specific parameters play in the model. Furthermore, all code utilized for our simulations is made publicly available alongside the manuscript to allow readers an easy hands-on experience with conductance-based DCM. |
0810.5198 | Chandrasekar Kuppusamy | Jane H. Sheeba, Aneta Stefanovska and Peter V. E. McClintock | Neuronal synchrony during anaesthesia - A thalamocortical model | 18 pages, 3 figures | Biophys. J., 95(6), 2722-2727, 2008 | 10.1529/biophysj.108.134635 | null | q-bio.NC q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is growing evidence in favour of the temporal-coding hypothesis that
temporal correlation of neuronal discharges may serve to bind distributed
neuronal activity into unique representations and, in particular, that $\theta$
(3.5-7.5 Hz) and $\delta$ (0.5-3.5 Hz) oscillations facilitate information
coding. The $\theta$ and $\delta$ rhythms are shown to be involved in various
sleep stages, and during an{\ae}sthesia, and they undergo changes with the
depth of an{\ae}sthesia. We introduce a thalamocortical model of interacting
neuronal ensembles to describe phase relationships between $\theta$ and
$\delta$ oscillations, especially during deep and light an{\ae}sthesia.
Asymmetric and long range interactions among the thalamocortical neuronal
oscillators are taken into account. The model results are compared with the
experimental observations of Musizza et al. {\it J. Physiol. (London)} 2007
580:315-326. The $\delta$ and $\theta$ activities are found to be separately
generated and are governed by the thalamus and cortex respectively. Changes in
the degree of intra--ensemble and inter--ensemble synchrony imply that the
neuronal ensembles inhibit information coding during deep an{\ae}sthesia and
facilitate it during light an{\ae}sthesia.
| [
{
"created": "Wed, 29 Oct 2008 05:59:07 GMT",
"version": "v1"
}
] | 2008-10-30 | [
[
"Sheeba",
"Jane H.",
""
],
[
"Stefanovska",
"Aneta",
""
],
[
"McClintock",
"Peter V. E.",
""
]
] | There is growing evidence in favour of the temporal-coding hypothesis that temporal correlation of neuronal discharges may serve to bind distributed neuronal activity into unique representations and, in particular, that $\theta$ (3.5-7.5 Hz) and $\delta$ (0.5-3.5 Hz) oscillations facilitate information coding. The $\theta$ and $\delta$ rhythms are shown to be involved in various sleep stages, and during an{\ae}sthesia, and they undergo changes with the depth of an{\ae}sthesia. We introduce a thalamocortical model of interacting neuronal ensembles to describe phase relationships between $\theta$ and $\delta$ oscillations, especially during deep and light an{\ae}sthesia. Asymmetric and long range interactions among the thalamocortical neuronal oscillators are taken into account. The model results are compared with the experimental observations of Musizza et al. {\it J. Physiol. (London)} 2007 580:315-326. The $\delta$ and $\theta$ activities are found to be separately generated and are governed by the thalamus and cortex respectively. Changes in the degree of intra--ensemble and inter--ensemble synchrony imply that the neuronal ensembles inhibit information coding during deep an{\ae}sthesia and facilitate it during light an{\ae}sthesia. |
1203.3929 | Tatiana Tatarinova | Eran Elhaik and Tatiana Tatarinova | GC3 Biology in Eukaryotes and Prokaryotes | A chapter of DNA Methylation
(http://cdn.intechopen.com/pdfs/32799/InTech-Gc3_biology_in_eukaryotes_and_prokaryotes.pdf) | DNA Methylation - From Genomics to Technology, 2012, pp 55-68 | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe the distribution of Guanine and Cytosine (GC) content in the
third codon position (GC3) in different species, analyze
evolutionary trends and discuss differences between genes and organisms with
distinct GC3 levels. We scrutinize previously published theoretical frameworks
and construct a unified view of GC3 biology in eukaryotes and prokaryotes.
| [
{
"created": "Sun, 18 Mar 2012 07:55:36 GMT",
"version": "v1"
}
] | 2012-03-20 | [
[
"Elhaik",
"Eran",
""
],
[
"Tatarinova",
"Tatiana",
""
]
] | We describe the distribution of Guanine and Cytosine (GC) content in the third codon position (GC3) in different species, analyze evolutionary trends and discuss differences between genes and organisms with distinct GC3 levels. We scrutinize previously published theoretical frameworks and construct a unified view of GC3 biology in eukaryotes and prokaryotes. |
2405.14536 | Li Kun | Kun Li, Xiuwen Gong, Shirui Pan, Jia Wu, Bo Du, Wenbin Hu | Regressor-free Molecule Generation to Support Drug Response Prediction | 22 pages, 7 figures, 9 tables, | null | null | null | q-bio.MN cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Drug response prediction (DRP) is a crucial phase in drug discovery, and the
most important metric for its evaluation is the IC50 score. DRP results are
heavily dependent on the quality of the generated molecules. Existing molecule
generation methods typically employ classifier-based guidance, enabling
sampling within the IC50 classification range. However, these methods fail to
ensure the sampling space range's effectiveness, generating numerous
ineffective molecules. Through experimental and theoretical study, we
hypothesize that conditional generation based on the target IC50 score can
obtain a more effective sampling space. As a result, we introduce
regressor-free guidance molecule generation to ensure sampling within a more
effective space and support DRP. Regressor-free guidance combines a diffusion
model's score estimation with a regression controller model's gradient based on
number labels. To effectively map regression labels between drugs and cell
lines, we design a common-sense numerical knowledge graph that constrains the
order of text representations. Experimental results on the real-world dataset
for the DRP task demonstrate our method's effectiveness in drug discovery. The
code is available at: https://anonymous.4open.science/r/RMCD-DBD1.
| [
{
"created": "Thu, 23 May 2024 13:22:17 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Li",
"Kun",
""
],
[
"Gong",
"Xiuwen",
""
],
[
"Pan",
"Shirui",
""
],
[
"Wu",
"Jia",
""
],
[
"Du",
"Bo",
""
],
[
"Hu",
"Wenbin",
""
]
] | Drug response prediction (DRP) is a crucial phase in drug discovery, and the most important metric for its evaluation is the IC50 score. DRP results are heavily dependent on the quality of the generated molecules. Existing molecule generation methods typically employ classifier-based guidance, enabling sampling within the IC50 classification range. However, these methods fail to ensure the sampling space range's effectiveness, generating numerous ineffective molecules. Through experimental and theoretical study, we hypothesize that conditional generation based on the target IC50 score can obtain a more effective sampling space. As a result, we introduce regressor-free guidance molecule generation to ensure sampling within a more effective space and support DRP. Regressor-free guidance combines a diffusion model's score estimation with a regression controller model's gradient based on number labels. To effectively map regression labels between drugs and cell lines, we design a common-sense numerical knowledge graph that constrains the order of text representations. Experimental results on the real-world dataset for the DRP task demonstrate our method's effectiveness in drug discovery. The code is available at: https://anonymous.4open.science/r/RMCD-DBD1. |
0804.1201 | Jean-Charles Boisson | Jean-Charles Boisson (LIFL, INRIA Lille - Nord Europe), Laetitia
Jourdan (LIFL, INRIA Lille - Nord Europe), El-Ghazali Talbi (INRIA Futurs),
Christian Rolando (LCOM) | Protein Sequencing with an Adaptive Genetic Algorithm from Tandem Mass
Spectrometry | null | Dans CEC 2006 (2006) | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Proteomics, only the de novo peptide sequencing approach allows a partial
amino acid sequence of a peptide to be found from a MS/MS spectrum. In this
article a preliminary work is presented to discover a complete protein sequence
from spectral data (MS and MS/MS spectra). For the moment, our approach only
uses MS spectra. A Genetic Algorithm (GA) has been designed with a new
evaluation function which works directly with a complete MS spectrum as input
and not with a mass list like the other methods using this kind of data. Thus
the monoisotopic peak extraction step, which requires human intervention, is
eliminated. The goal of this approach is to discover the sequence of unknown
proteins and to allow a better understanding of the differences between
experimental proteins and proteins from databases.
| [
{
"created": "Tue, 8 Apr 2008 07:48:07 GMT",
"version": "v1"
}
] | 2008-12-18 | [
[
"Boisson",
"Jean-Charles",
"",
"LIFL, INRIA Lille - Nord Europe"
],
[
"Jourdan",
"Laetitia",
"",
"LIFL, INRIA Lille - Nord Europe"
],
[
"Talbi",
"El-Ghazali",
"",
"INRIA Futurs"
],
[
"Rolando",
"Christian",
"",
"LCOM"
]
] | In Proteomics, only the de novo peptide sequencing approach allows a partial amino acid sequence of a peptide to be found from a MS/MS spectrum. In this article a preliminary work is presented to discover a complete protein sequence from spectral data (MS and MS/MS spectra). For the moment, our approach only uses MS spectra. A Genetic Algorithm (GA) has been designed with a new evaluation function which works directly with a complete MS spectrum as input and not with a mass list like the other methods using this kind of data. Thus the monoisotopic peak extraction step, which requires human intervention, is eliminated. The goal of this approach is to discover the sequence of unknown proteins and to allow a better understanding of the differences between experimental proteins and proteins from databases. |
1601.07063 | Swadhin Taneja Dr. | Swadhin Taneja, Arnold B. Mitnitski, Kenneth Rockwood and Andrew D.
Rutenberg | A dynamical network model for age-related health deficits and mortality | 12 pages, 38 figures | Phys. Rev. E 93, 022309 (2016) | 10.1103/PhysRevE.93.022309 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How long people live depends on their health, and how it changes with age.
Individual health can be tracked by the accumulation of age-related health
deficits. The fraction of age-related deficits is a simple quantitative measure
of human aging. This quantitative frailty index (F) is as good as chronological
age in predicting mortality. In this paper, we use a dynamical network model of
deficits to explore the effects of interactions between deficits, deficit
damage and repair processes, and the connection between the F and mortality.
With our model, we qualitatively reproduce Gompertz's law of increasing human
mortality with age, the broadening of the F distribution with age, the
characteristic non-linear increase of the F with age, and the increased
mortality of high-frailty individuals. No explicit time-dependence in damage or
repair rates is needed in our model. Instead, implicit time-dependence arises
through deficit interactions -- so that the average deficit damage rates
increase, and deficit repair rates decrease, with age. We use a simple
mortality criterion, where mortality occurs when the most connected node is
damaged.
| [
{
"created": "Tue, 26 Jan 2016 15:16:13 GMT",
"version": "v1"
}
] | 2016-03-23 | [
[
"Taneja",
"Swadhin",
""
],
[
"Mitnitski",
"Arnold B.",
""
],
[
"Rockwood",
"Kenneth",
""
],
[
"Rutenberg",
"Andrew D.",
""
]
] | How long people live depends on their health, and how it changes with age. Individual health can be tracked by the accumulation of age-related health deficits. The fraction of age-related deficits is a simple quantitative measure of human aging. This quantitative frailty index (F) is as good as chronological age in predicting mortality. In this paper, we use a dynamical network model of deficits to explore the effects of interactions between deficits, deficit damage and repair processes, and the connection between the F and mortality. With our model, we qualitatively reproduce Gompertz's law of increasing human mortality with age, the broadening of the F distribution with age, the characteristic non-linear increase of the F with age, and the increased mortality of high-frailty individuals. No explicit time-dependence in damage or repair rates is needed in our model. Instead, implicit time-dependence arises through deficit interactions -- so that the average deficit damage rates increase, and deficit repair rates decrease, with age. We use a simple mortality criterion, where mortality occurs when the most connected node is damaged. |
2105.11626 | Farrukh A. Chishtie | R. Jayatilaka, R. Patel, M. Brar, Y. Tang, N. M. Jisrawi, F. Chishtie,
J. Drozd, S. R. Valluri | A Mathematical Model of COVID-19 Transmission | 12 pages, NN20 Conference proceedings, published version | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Disease transmission is studied through disciplines like epidemiology,
applied mathematics, and statistics. Mathematical simulation models for
transmission have implications in solving public and personal health
challenges. The SIR model uses a compartmental approach including dynamic and
nonlinear behavior of transmission through three factors: susceptible,
infected, and removed (recovered and deceased) individuals. Using the Lambert W
Function, we propose a framework to study solutions of the SIR model. This
demonstrates the applications of COVID-19 transmission data to model the spread
of a real-world disease. Different models of disease including the SIR, SIRmp
and SEIRpqr models are compared with respect to their ability to predict disease
spread. Physical distancing impacts and personal protective equipment use are
discussed with relevance to the COVID-19 spread.
| [
{
"created": "Tue, 25 May 2021 02:45:33 GMT",
"version": "v1"
},
{
"created": "Wed, 26 May 2021 04:08:36 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Oct 2021 21:04:20 GMT",
"version": "v3"
},
{
"created": "Sat, 1 Jan 2022 19:41:05 GMT",
"version": "v4"
}
] | 2022-01-04 | [
[
"Jayatilaka",
"R.",
""
],
[
"Patel",
"R.",
""
],
[
"Brar",
"M.",
""
],
[
"Tang",
"Y.",
""
],
[
"Jisrawi",
"N. M.",
""
],
[
"Chishtie",
"F.",
""
],
[
"Drozd",
"J.",
""
],
[
"Valluri",
"S. R.",
... | Disease transmission is studied through disciplines like epidemiology, applied mathematics, and statistics. Mathematical simulation models for transmission have implications in solving public and personal health challenges. The SIR model uses a compartmental approach including dynamic and nonlinear behavior of transmission through three factors: susceptible, infected, and removed (recovered and deceased) individuals. Using the Lambert W Function, we propose a framework to study solutions of the SIR model. This demonstrates the applications of COVID-19 transmission data to model the spread of a real-world disease. Different models of disease including the SIR, SIRmp and SEIRpqr models are compared with respect to their ability to predict disease spread. Physical distancing impacts and personal protective equipment use are discussed with relevance to the COVID-19 spread. |
1804.05823 | Rodrigo Rocha Pereira | Rodrigo P. Rocha, Loren Ko\c{c}illari, Samir Suweis, Maurizio
Corbetta, and Amos Maritan | Homeostatic plasticity and emergence of functional networks in a
whole-brain model at criticality | Accepted for publication in Scientific Reports | Scientific Reports 8, 15682 (2018) | 10.1038/s41598-018-33923-9 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the relationship between large-scale structural and functional
brain networks remains a crucial issue in modern neuroscience. Recently, there
has been growing interest in investigating the role of homeostatic plasticity
mechanisms, across different spatiotemporal scales, in regulating network
activity and brain functioning against a wide range of environmental conditions
and brain states (e.g., during learning, development, ageing, neurological
diseases). In the present study, we investigate how the inclusion of
homeostatic plasticity in a stochastic whole-brain model, implemented as a
normalization of the incoming node's excitatory input, affects the macroscopic
activity during rest and the formation of functional networks. Importantly, we
address the structure-function relationship both at the group and
individual-based levels. In this work, we show that normalization of the node's
excitatory input improves the correspondence between simulated neural patterns
of the model and various brain functional data. Indeed, we find that the best
match is achieved when the model control parameter is at its critical value and
that normalization minimizes both the variability of the critical points and
neuronal activity patterns among subjects. Therefore, our results suggest that
the inclusion of homeostatic principles leads to more realistic brain activity
consistent with the hallmarks of criticality. Our theoretical framework opens
new perspectives in personalized brain modeling with potential applications to
investigate the deviation from criticality due to structural lesions (e.g.
stroke) or brain disorders.
| [
{
"created": "Mon, 16 Apr 2018 17:46:59 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Sep 2018 15:48:01 GMT",
"version": "v2"
}
] | 2018-10-25 | [
[
"Rocha",
"Rodrigo P.",
""
],
[
"Koçillari",
"Loren",
""
],
[
"Suweis",
"Samir",
""
],
[
"Corbetta",
"Maurizio",
""
],
[
"Maritan",
"Amos",
""
]
] | Understanding the relationship between large-scale structural and functional brain networks remains a crucial issue in modern neuroscience. Recently, there has been growing interest in investigating the role of homeostatic plasticity mechanisms, across different spatiotemporal scales, in regulating network activity and brain functioning against a wide range of environmental conditions and brain states (e.g., during learning, development, ageing, neurological diseases). In the present study, we investigate how the inclusion of homeostatic plasticity in a stochastic whole-brain model, implemented as a normalization of the incoming node's excitatory input, affects the macroscopic activity during rest and the formation of functional networks. Importantly, we address the structure-function relationship both at the group and individual-based levels. In this work, we show that normalization of the node's excitatory input improves the correspondence between simulated neural patterns of the model and various brain functional data. Indeed, we find that the best match is achieved when the model control parameter is at its critical value and that normalization minimizes both the variability of the critical points and neuronal activity patterns among subjects. Therefore, our results suggest that the inclusion of homeostatic principles leads to more realistic brain activity consistent with the hallmarks of criticality. Our theoretical framework opens new perspectives in personalized brain modeling with potential applications to investigate the deviation from criticality due to structural lesions (e.g. stroke) or brain disorders. |
1301.1740 | Iddo Friedberg | Alexandra M. Schnoes, David C. Ream, Alexander W. Thorman, Patricia C.
Babbitt, Iddo Friedberg | Biases in the Experimental Annotations of Protein Function and their
Effect on Our Understanding of Protein Function Space | Accepted to PLoS Computational Biology. Press embargo applies. v4:
text corrected for style and supplementary material inserted | null | 10.1371/journal.pcbi.1003063 | null | q-bio.GN cs.DL cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ongoing functional annotation of proteins relies upon the work of
curators to capture experimental findings from scientific literature and apply
them to protein sequence and structure data. However, with the increasing use
of high-throughput experimental assays, a small number of experimental studies
dominate the functional protein annotations collected in databases. Here we
investigate just how prevalent the "few articles -- many proteins"
phenomenon is. We examine the experimentally validated annotation of proteins
provided by several groups in the GO Consortium, and show that the distribution
of proteins per published study is exponential, with 0.14% of articles
providing the source of annotations for 25% of the proteins in the UniProt-GOA
compilation. Since each of the dominant articles describes the use of an assay
that can find only one function or a small group of functions, this leads to
substantial biases in what we know about the function of many proteins.
Mass spectrometry, microscopy and RNAi experiments dominate high throughput
experiments. Consequently, the functional information derived from these
experiments is mostly of the subcellular location of proteins, and of the
participation of proteins in embryonic developmental pathways. For some
organisms, the information provided by different studies overlaps by a large
amount. We also show that the information provided by high throughput
experiments is less specific than that provided by low throughput experiments.
Given the experimental techniques available, certain biases in protein function
annotation due to high-throughput experiments are unavoidable. Knowing that
these biases exist and understanding their characteristics and extent is
important for database curators, developers of function annotation programs,
and anyone who uses protein function annotation data to plan experiments.
| [
{
"created": "Wed, 9 Jan 2013 02:48:22 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Jan 2013 18:45:00 GMT",
"version": "v2"
},
{
"created": "Mon, 21 Jan 2013 00:38:44 GMT",
"version": "v3"
},
{
"created": "Thu, 4 Apr 2013 01:50:31 GMT",
"version": "v4"
}
] | 2015-06-12 | [
[
"Schnoes",
"Alexandra M.",
""
],
[
"Ream",
"David C.",
""
],
[
"Thorman",
"Alexander W.",
""
],
[
"Babbitt",
"Patricia C.",
""
],
[
"Friedberg",
"Iddo",
""
]
] | The ongoing functional annotation of proteins relies upon the work of curators to capture experimental findings from scientific literature and apply them to protein sequence and structure data. However, with the increasing use of high-throughput experimental assays, a small number of experimental studies dominate the functional protein annotations collected in databases. Here we investigate just how prevalent the "few articles -- many proteins" phenomenon is. We examine the experimentally validated annotation of proteins provided by several groups in the GO Consortium, and show that the distribution of proteins per published study is exponential, with 0.14% of articles providing the source of annotations for 25% of the proteins in the UniProt-GOA compilation. Since each of the dominant articles describes the use of an assay that can find only one function or a small group of functions, this leads to substantial biases in what we know about the function of many proteins. Mass spectrometry, microscopy and RNAi experiments dominate high throughput experiments. Consequently, the functional information derived from these experiments is mostly of the subcellular location of proteins, and of the participation of proteins in embryonic developmental pathways. For some organisms, the information provided by different studies overlaps by a large amount. We also show that the information provided by high throughput experiments is less specific than that provided by low throughput experiments. Given the experimental techniques available, certain biases in protein function annotation due to high-throughput experiments are unavoidable. Knowing that these biases exist and understanding their characteristics and extent is important for database curators, developers of function annotation programs, and anyone who uses protein function annotation data to plan experiments. |
1804.07110 | Korinna T Allhoff | Tobias Rogge, David Jones, Barbara Drossel, Korinna T. Allhoff | Interplay of spatial dynamics and local adaptation shapes species
lifetime distributions and species-area relationships | Theor Ecol (2019) | null | 10.1007/s12080-019-0410-y | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The distributions of species lifetimes and species in space are related,
since species with good local survival chances have more time to colonize new
habitats and species inhabiting large areas have higher chances to survive
local disturbances. Yet, both distributions have been discussed in mostly
separate communities. Here, we study both patterns simultaneously using a
spatially explicit, evolutionary community assembly approach. We present and
investigate a metacommunity model, consisting of a grid of patches, where each
patch contains a local food web. Species survival depends on predation and
competition interactions, which in turn depend on species body masses as the
key traits. The system evolves due to the migration of species to neighboring
patches, the addition of new species as modifications of existing species, and
local extinction events. The structure of each local food web thus emerges in a
self-organized manner as the highly non-trivial outcome of the relative time
scales of these processes. Our model generates a large variety of complex,
multi-trophic networks and therefore serves as a powerful tool to investigate
ecosystems on long temporal and large spatial scales. We find that the observed
lifetime distributions and species-area relations resemble power laws over
appropriately chosen parameter ranges and thus agree qualitatively with
empirical findings. Moreover, we observe strong finite-size effects, and a
dependence of the relationships on the trophic level of the species. By
comparing our results to simple neutral models found in the literature, we
identify the features that are responsible for the values of the exponents.
| [
{
"created": "Thu, 19 Apr 2018 12:28:44 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Feb 2019 16:02:53 GMT",
"version": "v2"
}
] | 2019-02-18 | [
[
"Rogge",
"Tobias",
""
],
[
"Jones",
"David",
""
],
[
"Drossel",
"Barbara",
""
],
[
"Allhoff",
"Korinna T.",
""
]
] | The distributions of species lifetimes and species in space are related, since species with good local survival chances have more time to colonize new habitats and species inhabiting large areas have higher chances to survive local disturbances. Yet, both distributions have been discussed in mostly separate communities. Here, we study both patterns simultaneously using a spatially explicit, evolutionary community assembly approach. We present and investigate a metacommunity model, consisting of a grid of patches, where each patch contains a local food web. Species survival depends on predation and competition interactions, which in turn depend on species body masses as the key traits. The system evolves due to the migration of species to neighboring patches, the addition of new species as modifications of existing species, and local extinction events. The structure of each local food web thus emerges in a self-organized manner as the highly non-trivial outcome of the relative time scales of these processes. Our model generates a large variety of complex, multi-trophic networks and therefore serves as a powerful tool to investigate ecosystems on long temporal and large spatial scales. We find that the observed lifetime distributions and species-area relations resemble power laws over appropriately chosen parameter ranges and thus agree qualitatively with empirical findings. Moreover, we observe strong finite-size effects, and a dependence of the relationships on the trophic level of the species. By comparing our results to simple neutral models found in the literature, we identify the features that are responsible for the values of the exponents. |
1503.04572 | J.H. van Hateren | J.H. van Hateren | Extensive fitness and human cooperation | Removed minor typo in axis Fig. 2a; 18 pages, 5 figures; in press | Theory in Biosciences 134, 127-142 (2015) | 10.1007/s12064-015-0214-6 | null | q-bio.PE q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evolution depends on the fitness of organisms, the expected rate of
reproducing. Directly getting offspring is the most basic form of fitness, but
fitness can also be increased indirectly by helping genetically related
individuals (such as kin) to increase their fitness. The combined effect is
known as inclusive fitness. Here it is argued that a further elaboration of
fitness has evolved, particularly in humans. It is called extensive fitness and
it incorporates producing organisms that are merely similar in phenotype. The
evolvability of this mechanism is illustrated by computations on a simple model
combining heredity and behaviour. Phenotypes are driven into the direction of
high fitness through a mechanism that involves an internal estimate of fitness,
implicitly made within the organism itself. This mechanism has recently been
conjectured to be responsible for producing agency and goals. In the model,
inclusive and extensive fitness are both implemented by letting fitness
increase nonlinearly with the size of subpopulations of similar heredity (for
the indirect part of inclusive fitness) and of similar phenotype (for the
phenotypic part of extensive fitness). Populations implementing extensive
fitness outcompete populations implementing mere inclusive fitness. This occurs
because groups with similar phenotype tend to be larger than groups with
similar heredity, and fitness increases more when groups are larger. Extensive
fitness has two components, a direct component where individuals compete in
inducing others to become like them and an indirect component where individuals
cooperate and help others who are already similar to them.
| [
{
"created": "Mon, 16 Mar 2015 09:07:21 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Jul 2015 06:29:05 GMT",
"version": "v2"
},
{
"created": "Tue, 25 Aug 2015 13:21:29 GMT",
"version": "v3"
},
{
"created": "Thu, 10 Sep 2015 12:36:53 GMT",
"version": "v4"
}
] | 2015-12-17 | [
[
"van Hateren",
"J. H.",
""
]
] | Evolution depends on the fitness of organisms, the expected rate of reproducing. Directly getting offspring is the most basic form of fitness, but fitness can also be increased indirectly by helping genetically related individuals (such as kin) to increase their fitness. The combined effect is known as inclusive fitness. Here it is argued that a further elaboration of fitness has evolved, particularly in humans. It is called extensive fitness and it incorporates producing organisms that are merely similar in phenotype. The evolvability of this mechanism is illustrated by computations on a simple model combining heredity and behaviour. Phenotypes are driven into the direction of high fitness through a mechanism that involves an internal estimate of fitness, implicitly made within the organism itself. This mechanism has recently been conjectured to be responsible for producing agency and goals. In the model, inclusive and extensive fitness are both implemented by letting fitness increase nonlinearly with the size of subpopulations of similar heredity (for the indirect part of inclusive fitness) and of similar phenotype (for the phenotypic part of extensive fitness). Populations implementing extensive fitness outcompete populations implementing mere inclusive fitness. This occurs because groups with similar phenotype tend to be larger than groups with similar heredity, and fitness increases more when groups are larger. Extensive fitness has two components, a direct component where individuals compete in inducing others to become like them and an indirect component where individuals cooperate and help others who are already similar to them. |
1806.01823 | Luis Sanchez Giraldo | Luis Gonzalo Sanchez Giraldo and Odelia Schwartz | Integrating Flexible Normalization into Mid-Level Representations of
Deep Convolutional Neural Networks | null | null | null | null | q-bio.NC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional neural networks (CNNs) are becoming increasingly popular
models to predict neural responses in visual cortex. However, contextual
effects, which are prevalent in neural processing and in perception, are not
explicitly handled by current CNNs, including those used for neural prediction.
In primary visual cortex, neural responses are modulated by stimuli spatially
surrounding the classical receptive field in rich ways. These effects have been
modeled with divisive normalization approaches, including flexible models,
where spatial normalization is recruited only to the degree responses from
center and surround locations are deemed statistically dependent. We propose a
flexible normalization model applied to mid-level representations of deep CNNs
as a tractable way to study contextual normalization mechanisms in mid-level
cortical areas. This approach captures non-trivial spatial dependencies among
mid-level features in CNNs, such as those present in textures and other visual
stimuli, that arise from tiling high order features, geometrically. We expect
that the proposed approach can make predictions about when spatial
normalization might be recruited in mid-level cortical areas. We also expect
this approach to be useful as part of the CNN toolkit, therefore going beyond
more restrictive fixed forms of normalization.
| [
{
"created": "Tue, 5 Jun 2018 17:26:07 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Aug 2018 14:10:35 GMT",
"version": "v2"
},
{
"created": "Mon, 24 Dec 2018 05:29:25 GMT",
"version": "v3"
}
] | 2018-12-27 | [
[
"Giraldo",
"Luis Gonzalo Sanchez",
""
],
[
"Schwartz",
"Odelia",
""
]
] | Deep convolutional neural networks (CNNs) are becoming increasingly popular models to predict neural responses in visual cortex. However, contextual effects, which are prevalent in neural processing and in perception, are not explicitly handled by current CNNs, including those used for neural prediction. In primary visual cortex, neural responses are modulated by stimuli spatially surrounding the classical receptive field in rich ways. These effects have been modeled with divisive normalization approaches, including flexible models, where spatial normalization is recruited only to the degree responses from center and surround locations are deemed statistically dependent. We propose a flexible normalization model applied to mid-level representations of deep CNNs as a tractable way to study contextual normalization mechanisms in mid-level cortical areas. This approach captures non-trivial spatial dependencies among mid-level features in CNNs, such as those present in textures and other visual stimuli, that arise from tiling high order features, geometrically. We expect that the proposed approach can make predictions about when spatial normalization might be recruited in mid-level cortical areas. We also expect this approach to be useful as part of the CNN toolkit, therefore going beyond more restrictive fixed forms of normalization. |
q-bio/0511029 | Reka Albert | Anshuman Gupta, Costas D. Maranas, Reka Albert | Elucidation of Directionality for Co-Expressed Genes: Predicting
Intra-Operon Termination Sites | 7 pages, 8 figures, accepted in Bioinformatics | Bioinformatics 22(2):209-214 (2006) | 10.1093/bioinformatics/bti780 | null | q-bio.MN q-bio.GN | null | We present a novel framework for inferring regulatory and sequence-level
information from gene co-expression networks. The key idea of our methodology
is the systematic integration of network inference and network topological
analysis approaches for uncovering biological insights. We determine the gene
co-expression network of Bacillus subtilis using Affymetrix GeneChip time
series data and show how the inferred network topology can be linked to
sequence-level information hard-wired in the organism's genome. We propose a
systematic way for determining the correlation threshold at which two genes are
assessed to be co-expressed by using the clustering coefficient and we expand
the scope of the gene co-expression network by proposing the slope ratio metric
as a means for incorporating directionality on the edges. We show through
specific examples for B. subtilis that by incorporating expression level
information in addition to the temporal expression patterns, we can uncover
sequence-level biological insights. In particular, we are able to identify a
number of cases where (i) the co-expressed genes are part of a single
transcriptional unit or operon and (ii) the inferred directionality arises due
to the presence of intra-operon transcription termination sites.
| [
{
"created": "Wed, 16 Nov 2005 15:07:11 GMT",
"version": "v1"
}
] | 2007-09-12 | [
[
"Gupta",
"Anshuman",
""
],
[
"Maranas",
"Costas D.",
""
],
[
"Albert",
"Reka",
""
]
] | We present a novel framework for inferring regulatory and sequence-level information from gene co-expression networks. The key idea of our methodology is the systematic integration of network inference and network topological analysis approaches for uncovering biological insights. We determine the gene co-expression network of Bacillus subtilis using Affymetrix GeneChip time series data and show how the inferred network topology can be linked to sequence-level information hard-wired in the organism's genome. We propose a systematic way for determining the correlation threshold at which two genes are assessed to be co-expressed by using the clustering coefficient and we expand the scope of the gene co-expression network by proposing the slope ratio metric as a means for incorporating directionality on the edges. We show through specific examples for B. subtilis that by incorporating expression level information in addition to the temporal expression patterns, we can uncover sequence-level biological insights. In particular, we are able to identify a number of cases where (i) the co-expressed genes are part of a single transcriptional unit or operon and (ii) the inferred directionality arises due to the presence of intra-operon transcription termination sites. |
2401.13219 | Sathyanarayanan Aakur | Sathyanarayanan Aakur, Vishalini R. Laguduva, Priyadharsini
Ramamurthy, Akhilesh Ramachandran | TEPI: Taxonomy-aware Embedding and Pseudo-Imaging for Scarcely-labeled
Zero-shot Genome Classification | Accepted to IEEE JBHI | null | null | null | q-bio.GN cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | A species' genetic code or genome encodes valuable evolutionary, biological,
and phylogenetic information that aids in species recognition, taxonomic
classification, and understanding genetic predispositions like drug resistance
and virulence. However, the vast number of potential species poses significant
challenges in developing a general-purpose whole genome classification tool.
Traditional bioinformatics tools have made notable progress but lack
scalability and are computationally expensive. Machine learning-based
frameworks show promise but must address the issue of large classification
vocabularies with long-tail distributions. In this study, we propose addressing
this problem through zero-shot learning using TEPI, Taxonomy-aware Embedding
and Pseudo-Imaging. We represent each genome as pseudo-images and map them to a
taxonomy-aware embedding space for reasoning and classification. This embedding
space captures compositional and phylogenetic relationships of species,
enabling predictions in extensive search spaces. We evaluate TEPI using two
rigorous zero-shot settings and demonstrate its generalization capabilities
qualitatively on curated, large-scale, publicly sourced data.
| [
{
"created": "Wed, 24 Jan 2024 04:16:28 GMT",
"version": "v1"
}
] | 2024-01-25 | [
[
"Aakur",
"Sathyanarayanan",
""
],
[
"Laguduva",
"Vishalini R.",
""
],
[
"Ramamurthy",
"Priyadharsini",
""
],
[
"Ramachandran",
"Akhilesh",
""
]
] | A species' genetic code or genome encodes valuable evolutionary, biological, and phylogenetic information that aids in species recognition, taxonomic classification, and understanding genetic predispositions like drug resistance and virulence. However, the vast number of potential species poses significant challenges in developing a general-purpose whole genome classification tool. Traditional bioinformatics tools have made notable progress but lack scalability and are computationally expensive. Machine learning-based frameworks show promise but must address the issue of large classification vocabularies with long-tail distributions. In this study, we propose addressing this problem through zero-shot learning using TEPI, Taxonomy-aware Embedding and Pseudo-Imaging. We represent each genome as pseudo-images and map them to a taxonomy-aware embedding space for reasoning and classification. This embedding space captures compositional and phylogenetic relationships of species, enabling predictions in extensive search spaces. We evaluate TEPI using two rigorous zero-shot settings and demonstrate its generalization capabilities qualitatively on curated, large-scale, publicly sourced data. |
1606.05718 | Andrew Beck | Dayong Wang, Aditya Khosla, Rishab Gargeya, Humayun Irshad, and Andrew
H. Beck | Deep Learning for Identifying Metastatic Breast Cancer | null | null | null | null | q-bio.QM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The International Symposium on Biomedical Imaging (ISBI) held a grand
challenge to evaluate computational systems for the automated detection of
metastatic breast cancer in whole slide images of sentinel lymph node biopsies.
Our team won both competitions in the grand challenge, obtaining an area under
the receiver operating curve (AUC) of 0.925 for the task of whole slide image
classification and a score of 0.7051 for the tumor localization task. A
pathologist independently reviewed the same images, obtaining a whole slide
image classification AUC of 0.966 and a tumor localization score of 0.733.
Combining our deep learning system's predictions with the human pathologist's
diagnoses increased the pathologist's AUC to 0.995, representing an
approximately 85 percent reduction in human error rate. These results
demonstrate the power of using deep learning to produce significant
improvements in the accuracy of pathological diagnoses.
| [
{
"created": "Sat, 18 Jun 2016 04:00:31 GMT",
"version": "v1"
}
] | 2016-06-21 | [
[
"Wang",
"Dayong",
""
],
[
"Khosla",
"Aditya",
""
],
[
"Gargeya",
"Rishab",
""
],
[
"Irshad",
"Humayun",
""
],
[
"Beck",
"Andrew H.",
""
]
] | The International Symposium on Biomedical Imaging (ISBI) held a grand challenge to evaluate computational systems for the automated detection of metastatic breast cancer in whole slide images of sentinel lymph node biopsies. Our team won both competitions in the grand challenge, obtaining an area under the receiver operating curve (AUC) of 0.925 for the task of whole slide image classification and a score of 0.7051 for the tumor localization task. A pathologist independently reviewed the same images, obtaining a whole slide image classification AUC of 0.966 and a tumor localization score of 0.733. Combining our deep learning system's predictions with the human pathologist's diagnoses increased the pathologist's AUC to 0.995, representing an approximately 85 percent reduction in human error rate. These results demonstrate the power of using deep learning to produce significant improvements in the accuracy of pathological diagnoses. |
2212.03456 | Guillaume Lamoureux | Siddharth Bhadra-Lobo and Georgy Derevyanko and Guillaume Lamoureux | Dock2D: Synthetic data for the molecular recognition problem | null | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Predicting the physical interaction of proteins is a cornerstone problem in
computational biology. New classes of learning-based algorithms are actively
being developed, and are typically trained end-to-end on protein complex
structures extracted from the Protein Data Bank. These training datasets tend
to be large and difficult to use for prototyping and, unlike image or natural
language datasets, they are not easily interpretable by non-experts. We present
Dock2D-IP and Dock2D-IF, two "toy" datasets that can be used to select
algorithms predicting protein-protein interactions$\unicode{x2014}$or any other
type of molecular interactions. Using two-dimensional shapes as input, each
example from Dock2D-IP ("interaction pose") describes the interaction pose of
two shapes known to interact and each example from Dock2D-IF ("interaction
fact") describes whether two shapes form a stable complex or not. We propose a
number of baseline solutions to the problem and show that the same underlying
energy function can be learned either by solving the interaction pose task
(formulated as an energy-minimization "docking" problem) or the
fact-of-interaction task (formulated as a binding free energy estimation
problem).
| [
{
"created": "Wed, 7 Dec 2022 04:46:05 GMT",
"version": "v1"
}
] | 2022-12-08 | [
[
"Bhadra-Lobo",
"Siddharth",
""
],
[
"Derevyanko",
"Georgy",
""
],
[
"Lamoureux",
"Guillaume",
""
]
] | Predicting the physical interaction of proteins is a cornerstone problem in computational biology. New classes of learning-based algorithms are actively being developed, and are typically trained end-to-end on protein complex structures extracted from the Protein Data Bank. These training datasets tend to be large and difficult to use for prototyping and, unlike image or natural language datasets, they are not easily interpretable by non-experts. We present Dock2D-IP and Dock2D-IF, two "toy" datasets that can be used to select algorithms predicting protein-protein interactions$\unicode{x2014}$or any other type of molecular interactions. Using two-dimensional shapes as input, each example from Dock2D-IP ("interaction pose") describes the interaction pose of two shapes known to interact and each example from Dock2D-IF ("interaction fact") describes whether two shapes form a stable complex or not. We propose a number of baseline solutions to the problem and show that the same underlying energy function can be learned either by solving the interaction pose task (formulated as an energy-minimization "docking" problem) or the fact-of-interaction task (formulated as a binding free energy estimation problem). |
q-bio/0606033 | Angel (Anxo) Sanchez | Carlos P. Roca, Jose A. Cuesta and Angel Sanchez | Time Scales in Evolutionary Dynamics | Final version with minor changes, accepted for publication in
Physical Review Letters | null | 10.1103/PhysRevLett.97.158701 | null | q-bio.PE math.DS nlin.AO physics.soc-ph q-bio.QM | null | Evolutionary game theory has traditionally assumed that all individuals in a
population interact with each other between reproduction events. We show that
eliminating this restriction by explicitly considering the time scales of
interaction and selection leads to dramatic changes in the outcome of
evolution. Examples include the selection of the inefficient strategy in the
Harmony and Stag-Hunt games, and the disappearance of the coexistence state in
the Snowdrift game. Our results hold for any population size and in the
presence of a background of fitness.
| [
{
"created": "Fri, 23 Jun 2006 11:08:37 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Sep 2006 08:04:19 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Roca",
"Carlos P.",
""
],
[
"Cuesta",
"Jose A.",
""
],
[
"Sanchez",
"Angel",
""
]
] | Evolutionary game theory has traditionally assumed that all individuals in a population interact with each other between reproduction events. We show that eliminating this restriction by explicitly considering the time scales of interaction and selection leads to dramatic changes in the outcome of evolution. Examples include the selection of the inefficient strategy in the Harmony and Stag-Hunt games, and the disappearance of the coexistence state in the Snowdrift game. Our results hold for any population size and in the presence of a background of fitness. |
2002.05677 | Lukas Eigentler | Lukas Eigentler | Intraspecific competition in models for vegetation patterns: decrease in
resilience to aridity and facilitation of species coexistence | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Patterned vegetation is a characteristic feature of many dryland ecosystems.
While plant densities on the ecosystem-wide scale are typically low, a spatial
self-organisation principle leads to the occurrence of alternating patches of
high biomass and patches of bare soil. Nevertheless, intraspecific competition
dynamics other than competition for water over long spatial scales are commonly
ignored in mathematical models for vegetation patterns. In this chapter, I
address the impact of local intraspecific competition on a modelling framework
for banded vegetation patterns. Firstly, I show that in the context of a
single-species model, neglecting local intraspecific competition leads to an
overestimation of a patterned ecosystem's resilience to increases in aridity.
Secondly, in the context of a multispecies model, I argue that local
intraspecific competition is a key element in the successful capture of species
coexistence in model solutions representing a vegetation pattern. For both
models, a detailed bifurcation analysis is presented to analyse the onset,
existence and stability of patterns. Besides the strength of local
intraspecific competition, the difference between the two species also has a
significant impact on the bifurcation structure, providing crucial insights
into the complex ecosystem dynamics. Predictions on future ecosystem dynamics
presented in this chapter, especially on pattern onset and pattern stability,
can aid the development of conservation programs.
| [
{
"created": "Thu, 13 Feb 2020 17:57:04 GMT",
"version": "v1"
},
{
"created": "Sat, 2 May 2020 10:11:08 GMT",
"version": "v2"
}
] | 2020-05-05 | [
[
"Eigentler",
"Lukas",
""
]
] | Patterned vegetation is a characteristic feature of many dryland ecosystems. While plant densities on the ecosystem-wide scale are typically low, a spatial self-organisation principle leads to the occurrence of alternating patches of high biomass and patches of bare soil. Nevertheless, intraspecific competition dynamics other than competition for water over long spatial scales are commonly ignored in mathematical models for vegetation patterns. In this chapter, I address the impact of local intraspecific competition on a modelling framework for banded vegetation patterns. Firstly, I show that in the context of a single-species model, neglecting local intraspecific competition leads to an overestimation of a patterned ecosystem's resilience to increases in aridity. Secondly, in the context of a multispecies model, I argue that local intraspecific competition is a key element in the successful capture of species coexistence in model solutions representing a vegetation pattern. For both models, a detailed bifurcation analysis is presented to analyse the onset, existence and stability of patterns. Besides the strength of local intraspecific competition, the difference between the two species also has a significant impact on the bifurcation structure, providing crucial insights into the complex ecosystem dynamics. Predictions on future ecosystem dynamics presented in this chapter, especially on pattern onset and pattern stability, can aid the development of conservation programs.
1402.7215 | Jozsef Farkas | J\'ozsef Z. Farkas, A. Yu. Morozov | Modelling effects of rapid evolution on persistence and stability in
structured predator-prey systems | 28 pages, 1 figure, to appear in MMNP | Math. Model. Nat. Phenom. 9 (2014) 26-46 | 10.1051/mmnp/20149303 | null | q-bio.PE math.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we explore the eco-evolutionary dynamics of a predator-prey
model, where the prey population is structured according to a certain life
history trait. The trait distribution within the prey population is the result
of interplay between genetic inheritance and mutation, as well as selectivity
in the consumption of prey by the predator. The evolutionary processes are
considered to take place on the same time scale as ecological dynamics, i.e. we
consider the evolution to be rapid. Previously published results show that
population structuring and rapid evolution in such a predator-prey system can
stabilise an otherwise globally unstable dynamics even with an unlimited
carrying capacity of prey. However, those findings were only based on direct
numerical simulation of equations and obtained for particular parametrisations
of model functions, which obviously calls into question the correctness and
generality of the previous results. The main objective of the current study is
to treat the model analytically and consider various parametrisations of
predator selectivity and inheritance kernel. We investigate the existence of a
coexistence stationary state in the model and carry out stability analysis of
this state. We derive expressions for the Hopf bifurcation curve which can be
used for constructing bifurcation diagrams in the parameter space without the
need for a direct numerical simulation of the underlying integro-differential
equations. We analytically show the possibility of stabilisation of a globally
unstable predator-prey system with prey structuring. We prove that the
coexistence stationary state is stable when the saturation in the predation
term is low. Finally, for a class of kernels describing genetic inheritance and
mutation we show that stability of the predator-prey interaction will require a
selectivity of predation according to the life trait.
| [
{
"created": "Fri, 28 Feb 2014 11:57:54 GMT",
"version": "v1"
}
] | 2019-03-27 | [
[
"Farkas",
"József Z.",
""
],
[
"Morozov",
"A. Yu.",
""
]
] | In this paper we explore the eco-evolutionary dynamics of a predator-prey model, where the prey population is structured according to a certain life history trait. The trait distribution within the prey population is the result of interplay between genetic inheritance and mutation, as well as selectivity in the consumption of prey by the predator. The evolutionary processes are considered to take place on the same time scale as ecological dynamics, i.e. we consider the evolution to be rapid. Previously published results show that population structuring and rapid evolution in such a predator-prey system can stabilise an otherwise globally unstable dynamics even with an unlimited carrying capacity of prey. However, those findings were only based on direct numerical simulation of equations and obtained for particular parametrisations of model functions, which obviously calls into question the correctness and generality of the previous results. The main objective of the current study is to treat the model analytically and consider various parametrisations of predator selectivity and inheritance kernel. We investigate the existence of a coexistence stationary state in the model and carry out stability analysis of this state. We derive expressions for the Hopf bifurcation curve which can be used for constructing bifurcation diagrams in the parameter space without the need for a direct numerical simulation of the underlying integro-differential equations. We analytically show the possibility of stabilisation of a globally unstable predator-prey system with prey structuring. We prove that the coexistence stationary state is stable when the saturation in the predation term is low. Finally, for a class of kernels describing genetic inheritance and mutation we show that stability of the predator-prey interaction will require a selectivity of predation according to the life trait.
q-bio/0701043 | Kunal K. Das | Robert Rovetti, Kunal K. Das, Alan Garfinkel, and Yohannes Shiferaw | Macroscopic consequences of calcium signaling in microdomains: A first
passage time approach | 4 pages, 4 figures | Phys. Rev. E 76, 051920 (2007) | 10.1103/PhysRevE.76.051920 | null | q-bio.SC cond-mat.stat-mech physics.bio-ph | null | Calcium (Ca) plays an important role in regulating various cellular
processes. In a variety of cell types, Ca signaling occurs within microdomains
where channels deliver localized pulses of Ca which activate a nearby
collection of Ca-sensitive receptors. The small number of channels involved
ensures that the signaling process is stochastic. The aggregate response of
several thousand of these microdomains yields a whole-cell response which
dictates the cell behavior. Here, we study analytically the statistical
properties of a population of these microdomains in response to a trigger
signal. We apply these results to understand the relationship between Ca influx
and Ca release in cardiac cells. In this context, we use a first passage time
approach to show analytically how Ca release in the whole cell depends on the
single channel kinetics of Ca channels and the properties of microdomains.
Using these results, we explain the underlying mechanism for the graded
relationship between Ca influx and Ca release in cardiac cells.
| [
{
"created": "Fri, 26 Jan 2007 21:48:14 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Rovetti",
"Robert",
""
],
[
"Das",
"Kunal K.",
""
],
[
"Garfinkel",
"Alan",
""
],
[
"Shiferaw",
"Yohannes",
""
]
] | Calcium (Ca) plays an important role in regulating various cellular processes. In a variety of cell types, Ca signaling occurs within microdomains where channels deliver localized pulses of Ca which activate a nearby collection of Ca-sensitive receptors. The small number of channels involved ensures that the signaling process is stochastic. The aggregate response of several thousand of these microdomains yields a whole-cell response which dictates the cell behavior. Here, we study analytically the statistical properties of a population of these microdomains in response to a trigger signal. We apply these results to understand the relationship between Ca influx and Ca release in cardiac cells. In this context, we use a first passage time approach to show analytically how Ca release in the whole cell depends on the single channel kinetics of Ca channels and the properties of microdomains. Using these results, we explain the underlying mechanism for the graded relationship between Ca influx and Ca release in cardiac cells. |
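The record above applies a first-passage-time analysis to Ca release. As a generic illustration of the first-passage-time idea only (not the paper's analytical model; the biased-random-walk stand-in, function name and parameters are all assumptions), one can estimate by Monte Carlo the mean time for a stochastic process to first reach a trigger threshold:

```python
import random

def mean_first_passage_time(p_up=0.6, threshold=10, n_trials=2000, seed=0):
    """Monte Carlo estimate of the mean first-passage time of a biased
    random walk from 0 to `threshold` -- a crude stand-in for stochastic
    Ca accumulation in a microdomain reaching its release trigger level."""
    rng = random.Random(seed)
    total_steps = 0
    for _ in range(n_trials):
        x, steps = 0, 0
        while x < threshold:
            # step up with probability p_up, otherwise down
            x += 1 if rng.random() < p_up else -1
            steps += 1
        total_steps += steps
    return total_steps / n_trials
```

For an upward drift of `2*p_up - 1 = 0.2`, the exact mean first-passage time is `threshold / 0.2 = 50` steps, which the sample mean approaches as `n_trials` grows.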
2004.07782 | Mostafa Karimi | Mostafa Karimi, Arman Hasanzadeh and Yang shen | Network-principled deep generative models for designing drug
combinations as graph sets | null | null | null | null | q-bio.MN cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Combination therapy has been shown to improve therapeutic efficacy while reducing
side effects. Importantly, it has become an indispensable strategy to overcome
resistance in antibiotics, anti-microbials, and anti-cancer drugs. Facing
enormous chemical space and unclear design principles for small-molecule
combinations, computational drug-combination design has not yet seen generative
models meet their potential to accelerate resistance-overcoming drug
combination discovery. We have developed the first deep generative model for
drug combination design, by jointly embedding graph-structured domain knowledge
and iteratively training a reinforcement learning-based chemical graph-set
designer. First, we have developed Hierarchical Variational Graph Auto-Encoders
(HVGAE) trained end-to-end to jointly embed gene-gene, gene-disease, and
disease-disease networks. Novel attentional pooling is introduced here for
learning disease-representations from associated genes' representations.
Second, targeting diseases in learned representations, we have recast the
drug-combination design problem as graph-set generation and developed a deep
learning-based model with novel rewards. Specifically, besides chemical
validity rewards, we have introduced a novel generative adversarial reward,
based on the generalized sliced Wasserstein distance, for chemically diverse molecules with
distributions similar to known drugs. We have also designed a network
principle-based reward for drug combinations. Numerical results indicate that,
compared to graph embedding methods, HVGAE learns more informative and
generalizable disease representations. Case studies on four diseases show that
network-principled drug combinations tend to have low toxicity. The generated
drug combinations collectively cover the disease module similar to FDA-approved
drug combinations and could potentially suggest novel systems-pharmacology
strategies.
| [
{
"created": "Thu, 16 Apr 2020 17:22:39 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Apr 2020 22:38:15 GMT",
"version": "v2"
}
] | 2020-04-24 | [
[
"Karimi",
"Mostafa",
""
],
[
"Hasanzadeh",
"Arman",
""
],
[
"shen",
"Yang",
""
]
] | Combination therapy has been shown to improve therapeutic efficacy while reducing side effects. Importantly, it has become an indispensable strategy to overcome resistance in antibiotics, anti-microbials, and anti-cancer drugs. Facing enormous chemical space and unclear design principles for small-molecule combinations, computational drug-combination design has not yet seen generative models meet their potential to accelerate resistance-overcoming drug combination discovery. We have developed the first deep generative model for drug combination design, by jointly embedding graph-structured domain knowledge and iteratively training a reinforcement learning-based chemical graph-set designer. First, we have developed Hierarchical Variational Graph Auto-Encoders (HVGAE) trained end-to-end to jointly embed gene-gene, gene-disease, and disease-disease networks. Novel attentional pooling is introduced here for learning disease-representations from associated genes' representations. Second, targeting diseases in learned representations, we have recast the drug-combination design problem as graph-set generation and developed a deep learning-based model with novel rewards. Specifically, besides chemical validity rewards, we have introduced a novel generative adversarial reward, based on the generalized sliced Wasserstein distance, for chemically diverse molecules with distributions similar to known drugs. We have also designed a network principle-based reward for drug combinations. Numerical results indicate that, compared to graph embedding methods, HVGAE learns more informative and generalizable disease representations. Case studies on four diseases show that network-principled drug combinations tend to have low toxicity. The generated drug combinations collectively cover the disease module similar to FDA-approved drug combinations and could potentially suggest novel systems-pharmacology strategies.
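The sliced Wasserstein distance mentioned in the record above has a simple Monte Carlo form: project both sample sets onto random directions and average the 1D Wasserstein distances between the projections. A minimal 2D sketch (an illustrative assumption, not the paper's generalized variant or its implementation) is:

```python
import math
import random

def sliced_wasserstein_2d(xs, ys, n_proj=64, seed=0):
    """Monte Carlo estimate of the sliced 1-Wasserstein distance between
    two equal-size 2D point sets: average, over random directions, of the
    1D Wasserstein distance between the projected samples."""
    rng = random.Random(seed)
    assert len(xs) == len(ys)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.uniform(0.0, math.pi)       # random projection direction
        dx, dy = math.cos(theta), math.sin(theta)
        px = sorted(x * dx + y * dy for x, y in xs)
        py = sorted(x * dx + y * dy for x, y in ys)
        # 1D W1 between equal-size empirical measures:
        # mean absolute difference of the sorted projections
        total += sum(abs(a - b) for a, b in zip(px, py)) / len(px)
    return total / n_proj
```

For point sets that differ by a pure translation, each projection contributes exactly the absolute projected shift, so the estimate is strictly positive; identical sets give exactly zero.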
2407.04202 | Yu-Tai Ching | Yu-Tai Ching (1), Chin-Ping Cho (2), Fu-Kai Tang (1), Yi-Chiun Chang
(1), Chang-Chieh Cheng (3), Guan-Wei He (4), Ann-Shyn Chang (5), Chaochun
Chuang (6) ((1) Department of Computer Science, National Yang Ming Chiao Tung
University, Taiwan, (2) Google Taiwan Engineering Limited, Taiwan, (3)
Information Technology Service Center, National Yang Ming Chiao Tung
University, Taiwan, (4) Phison Electronics Corp., Taiwan, (5) Brain Research
Center, National Tsing Hua University, Taiwan, (6) National Center for
High-performance Computing, Taiwan) | Reverse Engineering the Fly Brain Using FlyCircuit Database | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A method for reverse engineering a fly brain using the {\it FlyCircuit}
database is presented. This method was designed based on the assumption that
similar neurons could serve identical functions. We thus cluster the neurons
based on the similarity between neurons. The procedure is to partition the
neurons in the database into groups and then assemble the groups into
potential modules. Some of the modules obtained correspond to known neuropils,
including the Medulla. The same clustering algorithm was applied to analyze the
Medulla's structure. Another possible application of the clustering result is
to study the brain-wide neuron connectome by looking at the connectivity
between groups of neurons.
| [
{
"created": "Fri, 5 Jul 2024 01:00:59 GMT",
"version": "v1"
}
] | 2024-07-08 | [
[
"Ching",
"Yu-Tai",
""
],
[
"Cho",
"Chin-Ping",
""
],
[
"Tang",
"Fu-Kai",
""
],
[
"Chang",
"Yi-Chiun",
""
],
[
"Cheng",
"Chang-Chieh",
""
],
[
"He",
"Guan-Wei",
""
],
[
"Chang",
"Ann-Shyn",
""
],
[
"... | A method for reverse engineering a fly brain using the {\it FlyCircuit} database is presented. This method was designed based on the assumption that similar neurons could serve identical functions. We thus cluster the neurons based on the similarity between neurons. The procedure is to partition the neurons in the database into groups and then assemble the groups into potential modules. Some of the modules obtained correspond to known neuropils, including the Medulla. The same clustering algorithm was applied to analyze the Medulla's structure. Another possible application of the clustering result is to study the brain-wide neuron connectome by looking at the connectivity between groups of neurons.
1512.00949 | Momiao Xiong | Futao Zhang, Dan Xie, Meimei Liang, Momiao Xiong | Multivariate Functional Regression Models for Epistasis Analysis | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To date, most genetic analyses of phenotypes have focused on analyzing single
traits or analyzing each phenotype independently. However, joint epistasis
analysis of multiple complementary traits will increase statistical power, and
hold the key to understanding the complicated genetic structure of the complex
diseases. Despite their importance in uncovering the genetic structure of
complex traits, the statistical methods for identifying epistasis in multiple
phenotypes remain fundamentally unexplored. To fill this gap, we formulate a
test for interaction between two genes in multiple quantitative trait analysis
as a multiple functional regression (MFRG) in which the genotype functions
(genetic variant profiles) are defined as a function of the genomic position of
the genetic variants. We use large scale simulations to calculate its type I
error rates for testing interaction between two genes with multiple phenotypes
and to compare its power with multivariate pair-wise interaction analysis and
single trait interaction analysis by a single variate functional regression
model. To further evaluate its performance, the MFRG for epistasis analysis is
applied to five phenotypes and exome sequence data from the NHLBI Exome
Sequencing Project (ESP) to detect pleiotropic epistasis. A total of 136 pairs
of genes that formed a genetic interaction network showed significant evidence
of epistasis influencing five traits. The results demonstrate that the joint
interaction analysis of multiple phenotypes has much higher power to detect
interaction than the interaction analysis of single trait and may open a new
direction to fully uncovering the genetic structure of multiple phenotypes.
| [
{
"created": "Thu, 3 Dec 2015 04:58:56 GMT",
"version": "v1"
}
] | 2015-12-04 | [
[
"Zhang",
"Futao",
""
],
[
"Xie",
"Dan",
""
],
[
"Liang",
"Meimei",
""
],
[
"Xiong",
"Momiao",
""
]
] | To date, most genetic analyses of phenotypes have focused on analyzing single traits or analyzing each phenotype independently. However, joint epistasis analysis of multiple complementary traits will increase statistical power, and hold the key to understanding the complicated genetic structure of the complex diseases. Despite their importance in uncovering the genetic structure of complex traits, the statistical methods for identifying epistasis in multiple phenotypes remain fundamentally unexplored. To fill this gap, we formulate a test for interaction between two genes in multiple quantitative trait analysis as a multiple functional regression (MFRG) in which the genotype functions (genetic variant profiles) are defined as a function of the genomic position of the genetic variants. We use large scale simulations to calculate its type I error rates for testing interaction between two genes with multiple phenotypes and to compare its power with multivariate pair-wise interaction analysis and single trait interaction analysis by a single variate functional regression model. To further evaluate its performance, the MFRG for epistasis analysis is applied to five phenotypes and exome sequence data from the NHLBI Exome Sequencing Project (ESP) to detect pleiotropic epistasis. A total of 136 pairs of genes that formed a genetic interaction network showed significant evidence of epistasis influencing five traits. The results demonstrate that the joint interaction analysis of multiple phenotypes has much higher power to detect interaction than the interaction analysis of single trait and may open a new direction to fully uncovering the genetic structure of multiple phenotypes.
2210.09511 | Lisa Maria Kreusser | David Alonso, Steffen Bauer, Markus Kirkilionis, Lisa Maria Kreusser,
Luca Sbano | Generalised Gillespie Algorithms for Simulations in a Rule-Based
Epidemiological Model Framework | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Rule-based models have been successfully used to represent different aspects
of the COVID-19 pandemic, including age, testing, hospitalisation, lockdowns,
immunity, infectivity, behaviour, mobility and vaccination of individuals.
These rule-based approaches are motivated by chemical reaction rules which are
traditionally solved numerically with the standard Gillespie algorithm proposed
in the context of molecular dynamics. When applying reaction-system-type
approaches to epidemiology, generalisations of the Gillespie algorithm are
required due to the time-dependency of the problems. In this article, we
present different generalisations of the standard Gillespie algorithm which
address discrete subtypes (e.g., incorporating the age structure of the
population), time-discrete updates (e.g., incorporating daily imposed change of
rates for lockdowns) and deterministic delays (e.g., given waiting time until a
specific change in types such as release from isolation occurs). These
algorithms are complemented by relevant examples in the context of the COVID-19
pandemic and numerical results.
| [
{
"created": "Mon, 17 Oct 2022 11:44:26 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Oct 2022 17:01:09 GMT",
"version": "v2"
}
] | 2022-10-25 | [
[
"Alonso",
"David",
""
],
[
"Bauer",
"Steffen",
""
],
[
"Kirkilionis",
"Markus",
""
],
[
"Kreusser",
"Lisa Maria",
""
],
[
"Sbano",
"Luca",
""
]
] | Rule-based models have been successfully used to represent different aspects of the COVID-19 pandemic, including age, testing, hospitalisation, lockdowns, immunity, infectivity, behaviour, mobility and vaccination of individuals. These rule-based approaches are motivated by chemical reaction rules which are traditionally solved numerically with the standard Gillespie algorithm proposed in the context of molecular dynamics. When reaction-system-type approaches are applied to epidemiology, generalisations of the Gillespie algorithm are required due to the time dependency of the problems. In this article, we present different generalisations of the standard Gillespie algorithm which address discrete subtypes (e.g., incorporating the age structure of the population), time-discrete updates (e.g., incorporating daily imposed change of rates for lockdowns) and deterministic delays (e.g., given waiting time until a specific change in types such as release from isolation occurs). These algorithms are complemented by relevant examples in the context of the COVID-19 pandemic and numerical results. |
2311.02124 | Yuyan Ni | Yuyan Ni, Shikun Feng, Wei-Ying Ma, Zhi-Ming Ma, Yanyan Lan | Sliced Denoising: A Physics-Informed Molecular Pre-Training Method | null | null | null | null | q-bio.BM cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | While molecular pre-training has shown great potential in enhancing drug
discovery, the lack of a solid physical interpretation in current methods
raises concerns about whether the learned representation truly captures the
underlying explanatory factors in observed data, ultimately resulting in
limited generalization and robustness. Although denoising methods offer a
physical interpretation, their accuracy is often compromised by ad-hoc noise
design, leading to inaccurate learned force fields. To address this limitation,
this paper proposes a new method for molecular pre-training, called sliced
denoising (SliDe), which is based on the classical mechanical intramolecular
potential theory. SliDe utilizes a novel noise strategy that perturbs bond
lengths, angles, and torsion angles to achieve better sampling over
conformations. Additionally, it introduces a random slicing approach that
circumvents the computationally expensive calculation of the Jacobian matrix,
which is otherwise essential for estimating the force field. By aligning with
physical principles, SliDe shows a 42\% improvement in the accuracy of
estimated force fields compared to current state-of-the-art denoising methods,
and thus outperforms traditional baselines on various molecular property
prediction tasks.
| [
{
"created": "Fri, 3 Nov 2023 07:58:05 GMT",
"version": "v1"
}
] | 2023-11-07 | [
[
"Ni",
"Yuyan",
""
],
[
"Feng",
"Shikun",
""
],
[
"Ma",
"Wei-Ying",
""
],
[
"Ma",
"Zhi-Ming",
""
],
[
"Lan",
"Yanyan",
""
]
] | While molecular pre-training has shown great potential in enhancing drug discovery, the lack of a solid physical interpretation in current methods raises concerns about whether the learned representation truly captures the underlying explanatory factors in observed data, ultimately resulting in limited generalization and robustness. Although denoising methods offer a physical interpretation, their accuracy is often compromised by ad-hoc noise design, leading to inaccurate learned force fields. To address this limitation, this paper proposes a new method for molecular pre-training, called sliced denoising (SliDe), which is based on the classical mechanical intramolecular potential theory. SliDe utilizes a novel noise strategy that perturbs bond lengths, angles, and torsion angles to achieve better sampling over conformations. Additionally, it introduces a random slicing approach that circumvents the computationally expensive calculation of the Jacobian matrix, which is otherwise essential for estimating the force field. By aligning with physical principles, SliDe shows a 42\% improvement in the accuracy of estimated force fields compared to current state-of-the-art denoising methods, and thus outperforms traditional baselines on various molecular property prediction tasks. |
2005.13653 | Guo-Wei Wei | Duc D Nguyen, Kaifu Gao, Jiahui Chen, Rui Wang and Guo-Wei Wei | Unveiling the molecular mechanism of SARS-CoV-2 main protease inhibition
from 92 crystal structures | 17 pages, 8 figures, 3 tables | null | null | null | q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Currently, there are no effective antiviral drugs or vaccines for coronavirus
disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2
(SARS-CoV-2). Due to its high conservativeness and low similarity with human
genes, SARS-CoV-2 main protease (M$^{\text{pro}}$) is one of the most favorable
drug targets. However, the current understanding of the molecular mechanism of
M$^{\text{pro}}$ inhibition is limited by the lack of reliable binding affinity
ranking and prediction of existing structures of M$^{\text{pro}}$-inhibitor
complexes. This work integrates mathematics and deep learning (MathDL) to
provide a reliable ranking of the binding affinities of 92 SARS-CoV-2
M$^{\text{pro}}$ inhibitor structures. We reveal that Gly143 residue in
M$^{\text{pro}}$ is the most attractive site to form hydrogen bonds, followed
by Cys145, Glu166, and His163. We also identify 45 targeted covalent bonding
inhibitors. Validation on the PDBbind v2016 core set benchmark shows that
MathDL achieves the top performance, with a Pearson's correlation coefficient
($R_p$) of 0.858. Most importantly, MathDL is validated on a carefully curated
SARS-CoV-2 inhibitor dataset with an average $R_p$ as high as 0.751, which
supports the reliability of the present binding affinity prediction. The present
binding affinity ranking, interaction analysis, and fragment decomposition
offer a foundation for future drug discovery efforts.
| [
{
"created": "Wed, 27 May 2020 21:04:46 GMT",
"version": "v1"
}
] | 2020-05-29 | [
[
"Nguyen",
"Duc D",
""
],
[
"Gao",
"Kaifu",
""
],
[
"Chen",
"Jiahui",
""
],
[
"Wang",
"Rui",
""
],
[
"Wei",
"Guo-Wei",
""
]
] | Currently, there are no effective antiviral drugs or vaccines for coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Due to its high conservativeness and low similarity with human genes, SARS-CoV-2 main protease (M$^{\text{pro}}$) is one of the most favorable drug targets. However, the current understanding of the molecular mechanism of M$^{\text{pro}}$ inhibition is limited by the lack of reliable binding affinity ranking and prediction of existing structures of M$^{\text{pro}}$-inhibitor complexes. This work integrates mathematics and deep learning (MathDL) to provide a reliable ranking of the binding affinities of 92 SARS-CoV-2 M$^{\text{pro}}$ inhibitor structures. We reveal that Gly143 residue in M$^{\text{pro}}$ is the most attractive site to form hydrogen bonds, followed by Cys145, Glu166, and His163. We also identify 45 targeted covalent bonding inhibitors. Validation on the PDBbind v2016 core set benchmark shows that MathDL achieves the top performance, with a Pearson's correlation coefficient ($R_p$) of 0.858. Most importantly, MathDL is validated on a carefully curated SARS-CoV-2 inhibitor dataset with an average $R_p$ as high as 0.751, which supports the reliability of the present binding affinity prediction. The present binding affinity ranking, interaction analysis, and fragment decomposition offer a foundation for future drug discovery efforts. |
1701.04346 | Erdem Pulcu | Erdem Pulcu | Evolution of value-based decision-making preferences in the population | 34 pages | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | We are living in an uncertain and dynamically changing world, where optimal
decision-making under uncertainty is directly linked to the survival of
species. However, evolutionary selection pressures that shape value-based
decision-making under uncertainty have thus far received limited attention.
Here, we demonstrate that fitness associated with different value-based
decision-making preferences is influenced by the value properties of the
environment, as well as the characteristics and the density of competitors in
the population. We show that risk-seeking tendencies will eventually dominate
the population when there are a relatively large number of discrete strategies
competing in volatile value environments. These results may have important
implications for behavioural ecology: (i) to inform the prediction that species
which naturally exhibit risk-averse characteristics and live alongside
risk-seeking competitors may be selected against; (ii) to potentially improve
our understanding of day-traders' value-based decision-making preferences in
volatile financial markets in terms of an environmental adaptation.
| [
{
"created": "Mon, 16 Jan 2017 16:20:17 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jan 2018 05:38:54 GMT",
"version": "v2"
},
{
"created": "Tue, 3 Apr 2018 17:22:42 GMT",
"version": "v3"
}
] | 2018-04-04 | [
[
"Pulcu",
"Erdem",
""
]
] | We are living in an uncertain and dynamically changing world, where optimal decision-making under uncertainty is directly linked to the survival of species. However, evolutionary selection pressures that shape value-based decision-making under uncertainty have thus far received limited attention. Here, we demonstrate that fitness associated with different value-based decision-making preferences is influenced by the value properties of the environment, as well as the characteristics and the density of competitors in the population. We show that risk-seeking tendencies will eventually dominate the population when there are a relatively large number of discrete strategies competing in volatile value environments. These results may have important implications for behavioural ecology: (i) to inform the prediction that species which naturally exhibit risk-averse characteristics and live alongside risk-seeking competitors may be selected against; (ii) to potentially improve our understanding of day-traders' value-based decision-making preferences in volatile financial markets in terms of an environmental adaptation. |
q-bio/0703003 | Roderick Melnik | Jack Yang and Roderick V.N. Melnik | Effect of Internal Viscosity on Brownian Dynamics of DNA Molecules in
Shear Flow | Keywords: effect of internal viscosity, dumbbell model, Brownian
dynamics, DNA molecules in shear flow | Effect of internal viscosity on Brownian dynamics of DNA molecules
in shear flow, Yang, X.D. and Melnik, R.V.N., Computational Biology and
Chemistry, 31 (2), 110-114, 2007 | null | null | q-bio.BM | null | The results of Brownian dynamics simulations of a single DNA molecule in
shear flow are presented taking into account the effect of internal viscosity.
The dissipative mechanism of internal viscosity is proved necessary in the
research of DNA dynamics. A stochastic model is derived on the basis of the
balance equation for forces acting on the chain. The Euler method is applied to
the solution of the model. The extensions of DNA molecules for different
Weissenberg numbers are analyzed. Comparison with the experimental results
available in the literature is carried out to estimate the contribution of the
effect of internal viscosity.
| [
{
"created": "Thu, 1 Mar 2007 18:01:10 GMT",
"version": "v1"
}
] | 2010-04-14 | [
[
"Yang",
"Jack",
""
],
[
"Melnik",
"Roderick V. N.",
""
]
] | The results of Brownian dynamics simulations of a single DNA molecule in shear flow are presented taking into account the effect of internal viscosity. The dissipative mechanism of internal viscosity is proved necessary in the research of DNA dynamics. A stochastic model is derived on the basis of the balance equation for forces acting on the chain. The Euler method is applied to the solution of the model. The extensions of DNA molecules for different Weissenberg numbers are analyzed. Comparison with the experimental results available in the literature is carried out to estimate the contribution of the effect of internal viscosity. |
2405.00753 | Li Wang | Li Wang, Yiping Li, Xiangzheng Fu, Xiucai Ye, Junfeng Shi, Gary G.
Yen, Xiangxiang Zeng | HMAMP: Hypervolume-Driven Multi-Objective Antimicrobial Peptides Design | null | null | null | null | q-bio.QM cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Antimicrobial peptides (AMPs) have exhibited unprecedented potential as
biomaterials in combating multidrug-resistant bacteria. Despite the increasing
adoption of artificial intelligence for novel AMP design, challenges pertaining
to conflicting attributes such as activity, hemolysis, and toxicity have
significantly impeded the progress of researchers. This paper introduces a
paradigm shift by considering multiple attributes in AMP design.
Presented herein is a novel approach termed Hypervolume-driven
Multi-objective Antimicrobial Peptide Design (HMAMP), which prioritizes the
simultaneous optimization of multiple attributes of AMPs. By synergizing
reinforcement learning and a gradient descent algorithm rooted in the
hypervolume maximization concept, HMAMP effectively expands exploration space
and mitigates the issue of pattern collapse. This method generates a wide array
of prospective AMP candidates that strike a balance among diverse attributes.
Furthermore, we pinpoint knee points along the Pareto front of these candidate
AMPs. Empirical results across five benchmark models substantiate that
HMAMP-designed AMPs exhibit competitive performance and heightened diversity. A
detailed analysis of the helical structures and molecular dynamics simulations
for ten potential candidate AMPs validates the superiority of HMAMP in the
realm of multi-objective AMP design. The ability of HMAMP to systematically
craft AMPs considering multiple attributes marks a pioneering milestone,
establishing a universal computational framework for the multi-objective design
of AMPs.
| [
{
"created": "Wed, 1 May 2024 07:17:59 GMT",
"version": "v1"
}
] | 2024-05-03 | [
[
"Wang",
"Li",
""
],
[
"Li",
"Yiping",
""
],
[
"Fu",
"Xiangzheng",
""
],
[
"Ye",
"Xiucai",
""
],
[
"Shi",
"Junfeng",
""
],
[
"Yen",
"Gary G.",
""
],
[
"Zeng",
"Xiangxiang",
""
]
] | Antimicrobial peptides (AMPs) have exhibited unprecedented potential as biomaterials in combating multidrug-resistant bacteria. Despite the increasing adoption of artificial intelligence for novel AMP design, challenges pertaining to conflicting attributes such as activity, hemolysis, and toxicity have significantly impeded the progress of researchers. This paper introduces a paradigm shift by considering multiple attributes in AMP design. Presented herein is a novel approach termed Hypervolume-driven Multi-objective Antimicrobial Peptide Design (HMAMP), which prioritizes the simultaneous optimization of multiple attributes of AMPs. By synergizing reinforcement learning and a gradient descent algorithm rooted in the hypervolume maximization concept, HMAMP effectively expands exploration space and mitigates the issue of pattern collapse. This method generates a wide array of prospective AMP candidates that strike a balance among diverse attributes. Furthermore, we pinpoint knee points along the Pareto front of these candidate AMPs. Empirical results across five benchmark models substantiate that HMAMP-designed AMPs exhibit competitive performance and heightened diversity. A detailed analysis of the helical structures and molecular dynamics simulations for ten potential candidate AMPs validates the superiority of HMAMP in the realm of multi-objective AMP design. The ability of HMAMP to systematically craft AMPs considering multiple attributes marks a pioneering milestone, establishing a universal computational framework for the multi-objective design of AMPs. |
1403.0858 | Yun S. Song | Kelley Harris, Sara Sheehan, John A. Kamm, and Yun S. Song | Decoding coalescent hidden Markov models in linear time | 18 pages, 5 figures. To appear in the Proceedings of the 18th Annual
International Conference on Research in Computational Molecular Biology
(RECOMB 2014). The final publication is available at link.springer.com | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many areas of computational biology, hidden Markov models (HMMs) have been
used to model local genomic features. In particular, coalescent HMMs have been
used to infer ancient population sizes, migration rates, divergence times, and
other parameters such as mutation and recombination rates. As more loci,
sequences, and hidden states are added to the model, however, the runtime of
coalescent HMMs can quickly become prohibitive. Here we present a new algorithm
for reducing the runtime of coalescent HMMs from quadratic in the number of
hidden time states to linear, without making any additional approximations. Our
algorithm can be incorporated into various coalescent HMMs, including the
popular method PSMC for inferring variable effective population sizes. Here we
implement this algorithm to speed up our demographic inference method diCal,
which is equivalent to PSMC when applied to a sample of two haplotypes. We
demonstrate that the linear-time method can reconstruct a population size
change history more accurately than the quadratic-time method, given similar
computation resources. We also apply the method to data from the 1000 Genomes
project, inferring a high-resolution history of size changes in the European
population.
| [
{
"created": "Tue, 4 Mar 2014 17:01:11 GMT",
"version": "v1"
}
] | 2014-03-05 | [
[
"Harris",
"Kelley",
""
],
[
"Sheehan",
"Sara",
""
],
[
"Kamm",
"John A.",
""
],
[
"Song",
"Yun S.",
""
]
] | In many areas of computational biology, hidden Markov models (HMMs) have been used to model local genomic features. In particular, coalescent HMMs have been used to infer ancient population sizes, migration rates, divergence times, and other parameters such as mutation and recombination rates. As more loci, sequences, and hidden states are added to the model, however, the runtime of coalescent HMMs can quickly become prohibitive. Here we present a new algorithm for reducing the runtime of coalescent HMMs from quadratic in the number of hidden time states to linear, without making any additional approximations. Our algorithm can be incorporated into various coalescent HMMs, including the popular method PSMC for inferring variable effective population sizes. Here we implement this algorithm to speed up our demographic inference method diCal, which is equivalent to PSMC when applied to a sample of two haplotypes. We demonstrate that the linear-time method can reconstruct a population size change history more accurately than the quadratic-time method, given similar computation resources. We also apply the method to data from the 1000 Genomes project, inferring a high-resolution history of size changes in the European population. |
2003.05681 | Ke Wu | Ke Wu, Didier Darcet, Qian Wang, Didier Sornette | Generalized logistic growth modeling of the COVID-19 outbreak: comparing
the dynamics in the 29 provinces in China and in the rest of the world | null | Nonlinear Dynamics, 2020 | 10.1007/s11071-020-05862-6 | null | q-bio.PE physics.bio-ph stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Started in Wuhan, China, COVID-19 has been spreading all over the world.
We calibrate the logistic growth model, the generalized logistic growth model,
the generalized Richards model and the generalized growth model to the reported
number of infected cases for the whole of China, 29 provinces in China, and 33
countries and regions that have been or are undergoing major outbreaks. We
dissect the development of the epidemics in China and the impact of the drastic
control measures both at the aggregate level and within each province. We
quantitatively document four phases of the outbreak in China with a detailed
analysis on the heterogeneous situations across provinces. The extreme
containment measures implemented by China were very effective with some
instructive variations across provinces. Borrowing from the experience of
China, we made scenario projections on the development of the outbreak in other
countries. We identified that outbreaks in 14 countries (mostly in western
Europe) have ended, while resurgences of cases have been identified in several
among them. The modeling results clearly show longer after-peak trajectories in
western countries, in contrast to most provinces in China where the after-peak
trajectory is characterized by a much faster decay. We identified three groups
of countries at different levels of outbreak progress, and provide informative
implications for the current global pandemic.
| [
{
"created": "Thu, 12 Mar 2020 09:45:27 GMT",
"version": "v1"
},
{
"created": "Sat, 9 May 2020 14:12:49 GMT",
"version": "v2"
},
{
"created": "Wed, 23 Sep 2020 03:43:08 GMT",
"version": "v3"
}
] | 2020-09-24 | [
[
"Wu",
"Ke",
""
],
[
"Darcet",
"Didier",
""
],
[
"Wang",
"Qian",
""
],
[
"Sornette",
"Didier",
""
]
] | Started in Wuhan, China, COVID-19 has been spreading all over the world. We calibrate the logistic growth model, the generalized logistic growth model, the generalized Richards model and the generalized growth model to the reported number of infected cases for the whole of China, 29 provinces in China, and 33 countries and regions that have been or are undergoing major outbreaks. We dissect the development of the epidemics in China and the impact of the drastic control measures both at the aggregate level and within each province. We quantitatively document four phases of the outbreak in China with a detailed analysis on the heterogeneous situations across provinces. The extreme containment measures implemented by China were very effective with some instructive variations across provinces. Borrowing from the experience of China, we made scenario projections on the development of the outbreak in other countries. We identified that outbreaks in 14 countries (mostly in western Europe) have ended, while resurgences of cases have been identified in several among them. The modeling results clearly show longer after-peak trajectories in western countries, in contrast to most provinces in China where the after-peak trajectory is characterized by a much faster decay. We identified three groups of countries at different levels of outbreak progress, and provide informative implications for the current global pandemic. |
1204.3456 | Bela M. Mulder | Bela M. Mulder | Microtubules Interacting with a Boundary: Mean Length and Mean
First-Passage Times | null | null | 10.1103/PhysRevE.86.011902 | null | q-bio.SC math-ph math.MP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We formulate a dynamical model for microtubules interacting with a
catastrophe-inducing boundary. In this model microtubules are either waiting to
be nucleated, actively growing or shrinking, or stalled at the boundary. We
first determine the steady-state occupation of these various states and the
resultant length distribution. Next, we formulate the problem of the Mean
First-Passage Time to reach the boundary in terms of an appropriate set of
splitting probabilities and conditional Mean First-Passage Times, and solve
explicitly for these quantities using a differential equation approach. As an
application, we revisit a recently proposed search-and-capture model for the
interaction between microtubules and target chromosomes [Gopalakrishnan &
Govindan, Bull. Math. Biol. 73:2483--506 (2011)]. We show how our approach
leads to a direct and compact solution of this problem.
| [
{
"created": "Mon, 16 Apr 2012 12:01:48 GMT",
"version": "v1"
}
] | 2013-05-30 | [
[
"Mulder",
"Bela M.",
""
]
] | We formulate a dynamical model for microtubules interacting with a catastrophe-inducing boundary. In this model microtubules are either waiting to be nucleated, actively growing or shrinking, or stalled at the boundary. We first determine the steady-state occupation of these various states and the resultant length distribution. Next, we formulate the problem of the Mean First-Passage Time to reach the boundary in terms of an appropriate set of splitting probabilities and conditional Mean First-Passage Times, and solve explicitly for these quantities using a differential equation approach. As an application, we revisit a recently proposed search-and-capture model for the interaction between microtubules and target chromosomes [Gopalakrishnan & Govindan, Bull. Math. Biol. 73:2483--506 (2011)]. We show how our approach leads to a direct and compact solution of this problem. |
1805.07298 | Konstantin Blyuss | F. Fatehi, S.N. Kyrychko, A. Ross, Y.N. Kyrychko, K.B. Blyuss | Stochastic effects in autoimmune dynamics | 27 pages, 5 figures | Frontiers in Physiology 9, 45 (2018) | 10.3389/fphys.2018.00045 | null | q-bio.TO q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Among various possible causes of autoimmune disease, an important role is
played by infections that can result in a breakdown of immune tolerance,
primarily through the mechanism of "molecular mimicry". In this paper we
propose and analyse a stochastic model of immune response to a viral infection
and subsequent autoimmunity, with account for the populations of T cells with
different activation thresholds, regulatory T cells, and cytokines. We show
analytically and numerically how stochasticity can result in sustained
oscillations around deterministically stable steady states, and we also
investigate stochastic dynamics in the regime of bi-stability. These results
provide a possible explanation for experimentally observed variations in the
progression of autoimmune disease. Computations of the variance of stochastic
fluctuations provide practically important insights into how the size of these
fluctuations depends on various biological parameters, and this also gives a
headway for comparison with experimental data on variation in the observed
numbers of T cells and organ cells affected by infection.
| [
{
"created": "Fri, 11 May 2018 20:00:18 GMT",
"version": "v1"
}
] | 2018-05-21 | [
[
"Fatehi",
"F.",
""
],
[
"Kyrychko",
"S. N.",
""
],
[
"Ross",
"A.",
""
],
[
"Kyrychko",
"Y. N.",
""
],
[
"Blyuss",
"K. B.",
""
]
] | Among various possible causes of autoimmune disease, an important role is played by infections that can result in a breakdown of immune tolerance, primarily through the mechanism of "molecular mimicry". In this paper we propose and analyse a stochastic model of immune response to a viral infection and subsequent autoimmunity, with account for the populations of T cells with different activation thresholds, regulatory T cells, and cytokines. We show analytically and numerically how stochasticity can result in sustained oscillations around deterministically stable steady states, and we also investigate stochastic dynamics in the regime of bi-stability. These results provide a possible explanation for experimentally observed variations in the progression of autoimmune disease. Computations of the variance of stochastic fluctuations provide practically important insights into how the size of these fluctuations depends on various biological parameters, and this also gives a headway for comparison with experimental data on variation in the observed numbers of T cells and organ cells affected by infection. |
1705.06595 | Arthur Lustig | In Joon Baek, Daniel S. Moss, Arthur J. Lustig | The mre11 A470 Alleles Influence the. Heritability and Segregation of
Telosomes in Saccharomyces cerevisiae | 30 ds pages, 9 regular figures, 6 supplementary figures, raw data,
iupdate of submit/1892479 | PLoS ONE 12(9): e0183549.,2017 | 10.1371/journal.pone.0183549 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Telomeres, the nucleoprotein complexes at the termini of linear chromosomes,
are essential for the processes of end replication, end-protection, and
chromatin segregation. The Mre11 complex is involved in multiple cellular roles
in DNA repair and structure in the regulation and function of telomere size
homeostasis. In this study, we characterize yeast telomere chromatin structure,
phenotypic heritability, and chromatin segregation in both wild-type [MRE11]
and A470 motif alleles. MRE11 strains confer a telomere size of 300 base pairs
of G+T irregular simple sequence repeats. This DNA and a portion of
subtelomeric DNA is embedded in a telosome: an MNase-resistant non-nucleosomal
particle. Chromatin immunoprecipitation shows a three to four-fold lower
occupancy of Mre11A470T proteins than wild-type proteins in telosomes.
Telosomes containing the Mre11A470T protein confer a greater resistance to
MNase digestion than wild-type telosomes. The integration of a wild-type MRE11
allele into an ectopic locus in the genome of a mre11A470T mutant and the
introduction of a mre11A470T allele at an ectopic site in a wild-type strain
lead to unexpectedly differing results. In each case, the replicated sister
chromatids inherit telosomes containing only the protein encoded by the genomic
mre11 locus, even in the presence of protein encoded by the opposing ectopic
allele. We hypothesize that the telosome segregates by a conservative
mechanism. These data support a mechanism for the linkage between sister
chromatid replication and maintenance of either identical mutant or identical
wild-type telosomes after replication of sister chromatids. These data suggest
the presence of an active mechanism for chromatin segregation in yeast.
| [
{
"created": "Thu, 18 May 2017 13:52:21 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Jun 2017 14:33:27 GMT",
"version": "v2"
},
{
"created": "Mon, 3 Jul 2017 17:47:45 GMT",
"version": "v3"
},
{
"created": "Sat, 5 Aug 2017 10:54:00 GMT",
"version": "v4"
},
{
"cre... | 2018-05-01 | [
[
"Baek",
"In Joon",
""
],
[
"Moss",
"Daniel S.",
""
],
[
"Lustig",
"Arthur J.",
""
]
] | Telomeres, the nucleoprotein complexes at the termini of linear chromosomes, are essential for the processes of end replication, end-protection, and chromatin segregation. The Mre11 complex is involved in multiple cellular roles in DNA repair and structure in the regulation and function of telomere size homeostasis. In this study, we characterize yeast telomere chromatin structure, phenotypic heritability, and chromatin segregation in both wild-type [MRE11] and A470 motif alleles. MRE11 strains confer a telomere size of 300 base pairs of G+T irregular simple sequence repeats. This DNA and a portion of subtelomeric DNA is embedded in a telosome: an MNase-resistant non-nucleosomal particle. Chromatin immunoprecipitation shows a three to four-fold lower occupancy of Mre11A470T proteins than wild-type proteins in telosomes. Telosomes containing the Mre11A470T protein confer a greater resistance to MNase digestion than wild-type telosomes. The integration of a wild-type MRE11 allele into an ectopic locus in the genome of a mre11A470T mutant and the introduction of a mre11A470T allele at an ectopic site in a wild-type strain lead to unexpectedly differing results. In each case, the replicated sister chromatids inherit telosomes containing only the protein encoded by the genomic mre11 locus, even in the presence of protein encoded by the opposing ectopic allele. We hypothesize that the telosome segregates by a conservative mechanism. These data support a mechanism for the linkage between sister chromatid replication and maintenance of either identical mutant or identical wild-type telosomes after replication of sister chromatids. These data suggest the presence of an active mechanism for chromatin segregation in yeast. |
1306.5148 | Antonio Scialdone | Antonio Scialdone (1), Sam T. Mugford (1), Doreen Feike, Alastair
Skeffington, Philippa Borrill, Alexander Graf, Alison M. Smith, Martin Howard
((1) contributed equally) | Arabidopsis plants perform arithmetic division to prevent starvation at
night | To be published in eLIFE | null | 10.7554/elife.00669 | null | q-bio.QM | http://creativecommons.org/licenses/by/3.0/ | Photosynthetic starch reserves that accumulate in Arabidopsis leaves during
the day decrease approximately linearly with time at night to support
metabolism and growth. We find that the rate of decrease is adjusted to
accommodate variation in the time of onset of darkness and starch content, such
that reserves last almost precisely until dawn. Generation of these dynamics
therefore requires an arithmetic division computation between the starch
content and expected time to dawn. We introduce two novel chemical kinetic
models capable of implementing analog arithmetic division. Predictions from the
models are successfully tested in plants perturbed by a night-time light period
or by mutations in starch degradation pathways. Our experiments indicate which
components of the starch degradation apparatus may be important for appropriate
arithmetic division. Our results are potentially relevant for any biological
system dependent on a food reserve for survival over a predictable time period.
| [
{
"created": "Fri, 21 Jun 2013 14:22:13 GMT",
"version": "v1"
}
] | 2013-06-24 | [
[
"Scialdone",
"Antonio",
"",
"contributed equally"
],
[
"Mugford",
"Sam T.",
"",
"contributed equally"
],
[
"Feike",
"Doreen",
""
],
[
"Skeffington",
"Alastair",
""
],
[
"Borrill",
"Philippa",
""
],
[
"Graf",
"Alexander... | Photosynthetic starch reserves that accumulate in Arabidopsis leaves during the day decrease approximately linearly with time at night to support metabolism and growth. We find that the rate of decrease is adjusted to accommodate variation in the time of onset of darkness and starch content, such that reserves last almost precisely until dawn. Generation of these dynamics therefore requires an arithmetic division computation between the starch content and expected time to dawn. We introduce two novel chemical kinetic models capable of implementing analog arithmetic division. Predictions from the models are successfully tested in plants perturbed by a night-time light period or by mutations in starch degradation pathways. Our experiments indicate which components of the starch degradation apparatus may be important for appropriate arithmetic division. Our results are potentially relevant for any biological system dependent on a food reserve for survival over a predictable time period. |
2402.15696 | Ruth Johnson | Ruth Johnson and Bogdan Pasaniuc | Implications of self-identified race, ethnicity, and genetic ancestry on
genetic association studies in biobanks within health systems | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | Precision medicine aims to create biomedical solutions tailored to specific
factors that affect disease risk and treatment responses within the population.
The success of the genomics era and recent widespread availability of
electronic health records (EHR) has ushered in a new wave of genomic biobanks
connected to EHR databases (EHR-linked biobanks). This perspective aims to
discuss how race, ethnicity, and genetic ancestry are currently utilized to
study common disease variation through genetic association studies. Although
genetic ancestry plays a significant role in shaping the genetic landscape
underlying disease risk in humans, the overall risk of a disease is caused by a
complex combination of environmental, sociocultural, and genetic factors. When
using EHR-linked biobanks to interrogate underlying disease etiology, it is
also important to be aware of how the biases associated with commonly used
descent-associated concepts such as race and ethnicity can propagate to
downstream analyses. We intend for this resource to support researchers who
perform or analyze genetic association studies in the EHR-linked biobank
setting such as those involved in consortium-wide biobanking efforts. We
provide background on how race, ethnicity, and genetic ancestry play a role in
current association studies, highlight considerations where there is no
consensus about best practices, and provide transparency about the current
shortcomings.
| [
{
"created": "Sat, 24 Feb 2024 03:08:08 GMT",
"version": "v1"
}
] | 2024-02-27 | [
[
"Johnson",
"Ruth",
""
],
[
"Pasaniuc",
"Bogdan",
""
]
] | Precision medicine aims to create biomedical solutions tailored to specific factors that affect disease risk and treatment responses within the population. The success of the genomics era and recent widespread availability of electronic health records (EHR) has ushered in a new wave of genomic biobanks connected to EHR databases (EHR-linked biobanks). This perspective aims to discuss how race, ethnicity, and genetic ancestry are currently utilized to study common disease variation through genetic association studies. Although genetic ancestry plays a significant role in shaping the genetic landscape underlying disease risk in humans, the overall risk of a disease is caused by a complex combination of environmental, sociocultural, and genetic factors. When using EHR-linked biobanks to interrogate underlying disease etiology, it is also important to be aware of how the biases associated with commonly used descent-associated concepts such as race and ethnicity can propagate to downstream analyses. We intend for this resource to support researchers who perform or analyze genetic association studies in the EHR-linked biobank setting such as those involved in consortium-wide biobanking efforts. We provide background on how race, ethnicity, and genetic ancestry play a role in current association studies, highlight considerations where there is no consensus about best practices, and provide transparency about the current shortcomings. |
2003.10965 | Changchuan Yin Dr. | Changchuan Yin | Genotyping coronavirus SARS-CoV-2: methods and implications | null | null | null | null | q-bio.GN stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The emerging global infectious COVID-19 coronavirus disease by novel Severe
Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) presents critical threats
to global public health and the economy since it was identified in late
December 2019 in China. The virus has gone through various pathways of
evolution. For understanding the evolution and transmission of SARS-CoV-2,
genotyping of virus isolates is of great importance. We present an accurate
method for effectively genotyping SARS-CoV-2 viruses using complete genomes.
The method employs the multiple sequence alignments of the genome isolates with
the SARS-CoV-2 reference genome. The SNP genotypes are then measured by Jaccard
distances to track the relationship of virus isolates. The genotyping analysis
of SARS-CoV-2 isolates from the globe reveals that specific multiple mutations
are the predominated mutation type during the current epidemic. Our method
serves a promising tool for monitoring and tracking the epidemic of pathogenic
viruses in their gradual and local genetic variations. The genotyping analysis
shows that the genes encoding the S proteins and RNA polymerase, RNA primase,
and nucleoprotein, undergo frequent mutations. These mutations are critical for
vaccine development in disease control.
| [
{
"created": "Tue, 24 Mar 2020 16:41:06 GMT",
"version": "v1"
}
] | 2020-03-25 | [
[
"Yin",
"Changchuan",
""
]
] | The emerging global infectious COVID-19 coronavirus disease by novel Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) presents critical threats to global public health and the economy since it was identified in late December 2019 in China. The virus has gone through various pathways of evolution. For understanding the evolution and transmission of SARS-CoV-2, genotyping of virus isolates is of great importance. We present an accurate method for effectively genotyping SARS-CoV-2 viruses using complete genomes. The method employs the multiple sequence alignments of the genome isolates with the SARS-CoV-2 reference genome. The SNP genotypes are then measured by Jaccard distances to track the relationship of virus isolates. The genotyping analysis of SARS-CoV-2 isolates from the globe reveals that specific multiple mutations are the predominated mutation type during the current epidemic. Our method serves a promising tool for monitoring and tracking the epidemic of pathogenic viruses in their gradual and local genetic variations. The genotyping analysis shows that the genes encoding the S proteins and RNA polymerase, RNA primase, and nucleoprotein, undergo frequent mutations. These mutations are critical for vaccine development in disease control. |
q-bio/0312031 | Edward M. Drobyshevski | E.M.Drobyshevski | Galilean Satellites as Sites for Incipient Life, and the Earth as its
Shelter | 19 pages | Astrobiology in Russia (Proc. Intnl. Workshop, March 25-29, 2002,
St-Petersburg, Russia), M.B.Simakov and A.K. Pavlov (eds.), pp.47-62 | null | null | q-bio.OT | null | Numerous problems connected with an assumption of the life origin on the
Earth do not arise on Galilean satellites. Here, in presence of a practically
non-salt water and of a great deal (~5-10%) of abiogenic organics, a great
diversity of conditions, which are unthinkable for the Earth, were realized
more than once. They were caused by global electrochemical processes in the
magnetic field presence what could entail an absolute enantiomeric synthesis.
The subsequent explosions of the satellites' icy envelopes saturated by the
electrolysis products resulted in appearance of hot massive atmospheres and
warm deep oceans and ejection of the dirty ice fragments (=comet nuclei), what
led to the material exchange with other bodies, etc.
| [
{
"created": "Fri, 19 Dec 2003 15:01:46 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Dec 2003 16:42:50 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Drobyshevski",
"E. M.",
""
]
] | Numerous problems connected with an assumption of the life origin on the Earth do not arise on Galilean satellites. Here, in presence of a practically non-salt water and of a great deal (~5-10%) of abiogenic organics, a great diversity of conditions, which are unthinkable for the Earth, were realized more than once. They were caused by global electrochemical processes in the magnetic field presence what could entail an absolute enantiomeric synthesis. The subsequent explosions of the satellites' icy envelopes saturated by the electrolysis products resulted in appearance of hot massive atmospheres and warm deep oceans and ejection of the dirty ice fragments (=comet nuclei), what led to the material exchange with other bodies, etc. |
2006.00926 | Juan Alberto Gonzalez Cuevas | Juan A. Gonzalez Cuevas | SEI1I2HRSVM model applied to the coronavirus pandemic (COVID-19) in
Paraguay | in Spanish | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the present article a mathematical model is proposed to research the
current coronavirus outbreak (SARS-CoV-2) in Paraguay, describing the multiple
transmission paths in the infection dynamics, following the susceptible,
exposed, infectious, hospitalized, with lost immunity and recovered
individuals, as well as the role of the virus in the environment and deaths
from COVID-19 or other reasons. In order to reflect the impact of the control
measures adopted by the government and the population, the model employs
variable transmission rates that change with the epidemiological status and
environmental conditions. The model is validated, and its application is
demonstrated with data publicly available.
| [
{
"created": "Mon, 1 Jun 2020 13:21:55 GMT",
"version": "v1"
}
] | 2020-06-02 | [
[
"Cuevas",
"Juan A. Gonzalez",
""
]
] | In the present article a mathematical model is proposed to research the current coronavirus outbreak (SARS-CoV-2) in Paraguay, describing the multiple transmission paths in the infection dynamics, following the susceptible, exposed, infectious, hospitalized, with lost immunity and recovered individuals, as well as the role of the virus in the environment and deaths from COVID-19 or other reasons. In order to reflect the impact of the control measures adopted by the government and the population, the model employs variable transmission rates that change with the epidemiological status and environmental conditions. The model is validated, and its application is demonstrated with data publicly available. |