id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0912.5369 | Dante Chialvo | Dietmar Plenz and Dante R. Chialvo | Scaling properties of neuronal avalanches are consistent with critical
dynamics | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex systems, when poised near a critical point of a phase transition
between order and disorder, exhibit a dynamics comprising a scale-free mixture
of order and disorder which is universal, i.e. system-independent (1-5). It
allows systems at criticality to adapt swiftly to environmental changes (i.e.,
high susceptibility) as well as to flexibly process and store information.
These unique properties prompted the conjecture that the brain might operate at
criticality (1), a view supported by the recent description of neuronal
avalanches in cortex in vitro (6-8), in anesthetized rats (9) and awake
primates (10), and in neuronal models (11-16). Despite the attractiveness of
this idea, its validity is hampered by the fact that its theoretical
underpinning relies solely on the replication of sizes and durations of
avalanches, which reflect only a portion of the rich dynamics found at
criticality. Here we show experimentally five fundamental properties of
avalanches consistent with criticality: (1) a separation of time scales, in
which the power law probability density of avalanches sizes s, P(s) and the
lifetime distribution of avalanches are invariant to slow, external driving;
(2) stationary P(s) over time; (3) the avalanche probabilities preceding and
following main avalanches obey Omori law (17, 18) for earthquakes; (4) the
average size of avalanches following a main avalanche decays as a power law;
(5) the spatial spread of avalanches has a fractal dimension and obeys
finite-size scaling. Thus, neuronal avalanches are a robust manifestation of
criticality in the brain.
| [
{
"created": "Tue, 29 Dec 2009 21:05:31 GMT",
"version": "v1"
}
] | 2009-12-31 | [
[
"Plenz",
"Dietmar",
""
],
[
"Chialvo",
"Dante R.",
""
]
] | Complex systems, when poised near a critical point of a phase transition between order and disorder, exhibit dynamics comprising a scale-free mixture of order and disorder which is universal, i.e. system-independent (1-5). It allows systems at criticality to adapt swiftly to environmental changes (i.e., high susceptibility) as well as to flexibly process and store information. These unique properties prompted the conjecture that the brain might operate at criticality (1), a view supported by the recent description of neuronal avalanches in cortex in vitro (6-8), in anesthetized rats (9) and awake primates (10), and in neuronal models (11-16). Despite the attractiveness of this idea, its validity is hampered by the fact that its theoretical underpinning relies solely on the replication of sizes and durations of avalanches, which reflect only a portion of the rich dynamics found at criticality. Here we show experimentally five fundamental properties of avalanches consistent with criticality: (1) a separation of time scales, in which the power law probability density of avalanche sizes s, P(s), and the lifetime distribution of avalanches are invariant to slow, external driving; (2) stationary P(s) over time; (3) the avalanche probabilities preceding and following main avalanches obey the Omori law (17, 18) for earthquakes; (4) the average size of avalanches following a main avalanche decays as a power law; (5) the spatial spread of avalanches has a fractal dimension and obeys finite-size scaling. Thus, neuronal avalanches are a robust manifestation of criticality in the brain. |
1811.00324 | Puneet Singh | Puneet Singh, Amit Chatterjee, Vimal Bhatia and Shashi Prakash | Application of laser biospeckle analysis for assessment of seed priming
techniques | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Seed priming is one of the well-established and low cost method to improve
seed germination properties, productivity, and stress tolerance in different
crops. It is a pre-germination treatment that partially hydrates the seed and
allows controlled imbibition. This stimulates and induces initial germination
process, but prevents radicle emergence. Consequently, treated seeds are
fortified with enhanced germination characteristics, improved physiological
parameters, uniformity in growth, and improved capability to cope up with
different biotic and abiotic stresses. Existing techniques for evaluating the
effectiveness of seed priming suffer from several drawbacks, including very
high operating time, indirect and destructive analysis, bulky experimental
arrangement, high cost, and require extensive analytical expertise. To
circumvent these drawbacks, we propose a biospeckle based technique to analyse
the effects of different priming treatments on germination characteristics of
seeds. The study employs non-primed (T0) and priming treatments (T1-T75),
including hydropriming and chemical priming (using three chemical agents namely
sodium chloride, potassium nitrate, and urea) for different time durations and
solution concentrations. The results conclusively establish biospeckle analysis
as an efficient active tool for seed priming analysis. Furthermore, the
proposed setup is extremely simple, low-cost, involves non-mechanical scanning
and is highly stable.
| [
{
"created": "Thu, 1 Nov 2018 11:46:36 GMT",
"version": "v1"
}
] | 2018-11-02 | [
[
"Singh",
"Puneet",
""
],
[
"Chatterjee",
"Amit",
""
],
[
"Bhatia",
"Vimal",
""
],
[
"Prakash",
"Shashi",
""
]
] | Seed priming is a well-established, low-cost method to improve seed germination properties, productivity, and stress tolerance in different crops. It is a pre-germination treatment that partially hydrates the seed and allows controlled imbibition. This stimulates and induces the initial germination process, but prevents radicle emergence. Consequently, treated seeds are fortified with enhanced germination characteristics, improved physiological parameters, uniformity in growth, and improved capability to cope with different biotic and abiotic stresses. Existing techniques for evaluating the effectiveness of seed priming suffer from several drawbacks, including very high operating time, indirect and destructive analysis, bulky experimental arrangement, high cost, and the need for extensive analytical expertise. To circumvent these drawbacks, we propose a biospeckle-based technique to analyse the effects of different priming treatments on germination characteristics of seeds. The study employs non-primed (T0) and priming treatments (T1-T75), including hydropriming and chemical priming (using three chemical agents, namely sodium chloride, potassium nitrate, and urea) for different time durations and solution concentrations. The results conclusively establish biospeckle analysis as an efficient active tool for seed priming analysis. Furthermore, the proposed setup is extremely simple, low-cost, involves non-mechanical scanning and is highly stable. |
2103.00077 | Jorge Vila | Jorge A. Vila | Thoughts on the Proteins Native State | 10 pages, 1 figure | null | null | null | q-bio.BM q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The presence of metamorphism in the protein's native state is not yet fully
understood. In an attempt to throw light on this issue here we present an
assessment, in terms of the amide hydrogen exchange protection factor, that
aims to determine the likely existence of structural fluctuations in the
native-state consistent with both the upper bound marginal stability of
proteins and the metamorphism presence. The preliminary results enable us to
conclude that the native-state metamorphism is, indeed, more probable than
thought.
| [
{
"created": "Fri, 26 Feb 2021 22:47:27 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Mar 2021 15:33:10 GMT",
"version": "v2"
},
{
"created": "Thu, 20 May 2021 12:12:28 GMT",
"version": "v3"
}
] | 2021-05-21 | [
[
"Vila",
"Jorge A.",
""
]
] | The presence of metamorphism in the protein's native state is not yet fully understood. In an attempt to shed light on this issue, here we present an assessment, in terms of the amide hydrogen exchange protection factor, that aims to determine the likely existence of structural fluctuations in the native state consistent with both the upper-bound marginal stability of proteins and the presence of metamorphism. The preliminary results enable us to conclude that native-state metamorphism is, indeed, more probable than previously thought. |
2001.03965 | Adam Mahdi | George Qian and Adam Mahdi | Sensitivity analysis methods in the biomedical sciences | 7 figures | null | null | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sensitivity analysis is an important part of a mathematical modeller's
toolbox for model analysis. In this review paper, we describe the most
frequently used sensitivity techniques, discussing their advantages and
limitations, before applying each method to a simple model. Also included is a
summary of current software packages, as well as a modeller's guide for
carrying out sensitivity analyses. Finally, we apply the popular Morris and
Sobol methods to two models with biomedical applications, with the intention of
providing a deeper understanding behind both the principles of these methods
and the presentation of their results.
| [
{
"created": "Sun, 12 Jan 2020 17:44:16 GMT",
"version": "v1"
}
] | 2020-01-14 | [
[
"Qian",
"George",
""
],
[
"Mahdi",
"Adam",
""
]
] | Sensitivity analysis is an important part of a mathematical modeller's toolbox for model analysis. In this review paper, we describe the most frequently used sensitivity techniques, discussing their advantages and limitations, before applying each method to a simple model. Also included is a summary of current software packages, as well as a modeller's guide for carrying out sensitivity analyses. Finally, we apply the popular Morris and Sobol methods to two models with biomedical applications, with the intention of providing a deeper understanding behind both the principles of these methods and the presentation of their results. |
1701.05210 | Carrie Manore | Carrie A. Manore, Miranda I. Teboh-Ewungkem, Olivia Prosper, Angela L.
Peace, Katharine Gurski, Zhilan Feng | Intermittent Preventive Treatment (IPT): Its role in averting
disease-induced mortalities in children and in promoting the spread of
antimalarial drug resistance | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a variable population age-structured ODE model to investigate the
role of Intermittent Preventive Treatment (IPT) in averting malaria-induced
mortalities in children, as well as its related cost in promoting the spread of
anti-malarial drug resistance. IPT, a malaria control strategy in which a full
curative dose of an antimalarial medication is administered to vulnerable
asymptomatic individuals at specified intervals, has been shown to have a
positive impact on reducing malaria transmission and deaths in children and
pregnant women. However, it can also promote drug resistance spread. Our
mathematical model is used to explore IPT effects on drug resistance in
holoendemic malaria regions while quantifying the benefits in deaths averted.
Our model includes both drug-sensitive and drug-resistant strains of the
parasite as well as interactions between human hosts and mosquitoes. The basic
reproduction numbers for both strains as well as the invasion reproduction
numbers are derived and used to examine the role of IPT on drug resistance.
Numerical simulations show the individual and combined effects of IPT and
treatment of symptomatic infections on the prevalence levels of both parasite
strains and on the number of lives saved. The results suggest that while IPT
can indeed save lives, particularly in the high transmission region, certain
combinations of drugs used for IPT and drugs used to treat symptomatic
infection may result in more deaths when resistant parasite strains are
circulating. Moreover, the half-lives of the treatment and IPT drugs used play
an important role in the extent to which IPT may influence the rate of spread
of the resistant strain. A sensitivity analysis indicates the model outcomes
are most sensitive to the reduction factor of transmission for the resistant
strain, rate of immunity loss, and the clearance rate of sensitive infections.
| [
{
"created": "Wed, 18 Jan 2017 19:12:34 GMT",
"version": "v1"
}
] | 2017-01-20 | [
[
"Manore",
"Carrie A.",
""
],
[
"Teboh-Ewungkem",
"Miranda I.",
""
],
[
"Prosper",
"Olivia",
""
],
[
"Peace",
"Angela L.",
""
],
[
"Gurski",
"Katharine",
""
],
[
"Feng",
"Zhilan",
""
]
] | We develop a variable population age-structured ODE model to investigate the role of Intermittent Preventive Treatment (IPT) in averting malaria-induced mortalities in children, as well as its related cost in promoting the spread of anti-malarial drug resistance. IPT, a malaria control strategy in which a full curative dose of an antimalarial medication is administered to vulnerable asymptomatic individuals at specified intervals, has been shown to have a positive impact on reducing malaria transmission and deaths in children and pregnant women. However, it can also promote drug resistance spread. Our mathematical model is used to explore IPT effects on drug resistance in holoendemic malaria regions while quantifying the benefits in deaths averted. Our model includes both drug-sensitive and drug-resistant strains of the parasite as well as interactions between human hosts and mosquitoes. The basic reproduction numbers for both strains as well as the invasion reproduction numbers are derived and used to examine the role of IPT on drug resistance. Numerical simulations show the individual and combined effects of IPT and treatment of symptomatic infections on the prevalence levels of both parasite strains and on the number of lives saved. The results suggest that while IPT can indeed save lives, particularly in the high transmission region, certain combinations of drugs used for IPT and drugs used to treat symptomatic infection may result in more deaths when resistant parasite strains are circulating. Moreover, the half-lives of the treatment and IPT drugs used play an important role in the extent to which IPT may influence the rate of spread of the resistant strain. A sensitivity analysis indicates the model outcomes are most sensitive to the reduction factor of transmission for the resistant strain, rate of immunity loss, and the clearance rate of sensitive infections. |
2203.17200 | Magali Andreia Rossi | Magali Andreia Rossi and Sylviane da Silva Vitor | Restoring Vision through Retinal Implants -- A Systematic Literature
Review | 15 pages, 5 figures, 3 tables | null | 10.48550/arXiv.2203.17200 | null | q-bio.NC eess.IV q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This work presents a bunched of promising technologies to treat blind people:
the bionic eyes. The strategy is to combine a retina implant with software
capable to interpret the information received. Along this line of thinking,
projects such as Retinal Prosthetic Strategy with the Capacity to Restore
Normal Vision from Weill Medical College of Cornell University Project, Update
on Retinal Prosthetic Research from The Boston Retinal Implant Project, and
Restoration of Vision Using Wireless Cortical Implants from Monash Vision Group
Project, have shown in a different context the use of technologies that commits
to bring the vision through its use.
| [
{
"created": "Thu, 31 Mar 2022 17:22:28 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Apr 2022 19:56:38 GMT",
"version": "v2"
}
] | 2022-04-06 | [
[
"Rossi",
"Magali Andreia",
""
],
[
"Vitor",
"Sylviane da Silva",
""
]
] | This work presents a set of promising technologies for treating blind people: bionic eyes. The strategy is to combine a retinal implant with software capable of interpreting the information received. Along this line of thinking, projects such as Retinal Prosthetic Strategy with the Capacity to Restore Normal Vision from the Weill Medical College of Cornell University Project, Update on Retinal Prosthetic Research from The Boston Retinal Implant Project, and Restoration of Vision Using Wireless Cortical Implants from the Monash Vision Group Project have shown, in different contexts, the use of technologies that promise to restore vision. |
1205.1739 | Shunsuke Shimobayashi freedom | Shunsuke F. Shimobayashi, Takafumi Iwaki, Toshiaki Mori and Kenichi
Yoshikawa | The probability of double-strand breaks in giant DNA decreases markedly
as the DNA concentration increases | null | null | 10.1063/1.4802993 | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DNA double-strand breaks (DSBs) represent a serious source of damage for all
living things and thus there have been many quantitative studies of DSBs both
in vivo and in vitro. Despite this fact, the processes that lead to their
production have not yet been clearly understood, and there is no established
theory that can account for the statistics of their production, in particular,
the number of DSBs per base pair per unit Gy, here denoted by P1, which is the
most important parameter for evaluating the degree of risk posed by DSBs. Here,
using the single-molecule observation method with giant DNA molecules (166
kbp), we evaluate the number of DSBs caused by gamma-ray irradiation. We find
that P1 is nearly inversely proportional to the DNA concentration above a
certain threshold DNA concentration. A simple model that accounts for the
marked decrease of P1 shows that it is necessary to consider the
characteristics of giant DNA molecules as semiflexible polymers to interpret
the intrinsic mechanism of DSBs.
| [
{
"created": "Tue, 8 May 2012 16:43:07 GMT",
"version": "v1"
}
] | 2015-06-05 | [
[
"Shimobayashi",
"Shunsuke F.",
""
],
[
"Iwaki",
"Takafumi",
""
],
[
"Mori",
"Toshiaki",
""
],
[
"Yoshikawa",
"Kenichi",
""
]
] | DNA double-strand breaks (DSBs) represent a serious source of damage for all living things and thus there have been many quantitative studies of DSBs both in vivo and in vitro. Despite this fact, the processes that lead to their production have not yet been clearly understood, and there is no established theory that can account for the statistics of their production, in particular, the number of DSBs per base pair per unit Gy, here denoted by P1, which is the most important parameter for evaluating the degree of risk posed by DSBs. Here, using the single-molecule observation method with giant DNA molecules (166 kbp), we evaluate the number of DSBs caused by gamma-ray irradiation. We find that P1 is nearly inversely proportional to the DNA concentration above a certain threshold DNA concentration. A simple model that accounts for the marked decrease of P1 shows that it is necessary to consider the characteristics of giant DNA molecules as semiflexible polymers to interpret the intrinsic mechanism of DSBs. |
2006.00045 | Santosh Ansumali | Shaurya Kaushal, Abhineet Singh Rajput, Soumyadeep Bhattacharya, M.
Vidyasagar, Aloke Kumar, Meher K. Prakash, Santosh Ansumali | Estimating Hidden Asymptomatics, Herd Immunity Threshold and Lockdown
Effects using a COVID-19 Specific Model | null | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A quantitative COVID-19 model that incorporates hidden asymptomatic patients
is developed, and an analytic solution in parametric form is given. The model
incorporates the impact of lockdown and resulting spatial migration of
population due to announcement of lockdown. A method is presented for
estimating the model parameters from real-world data. It is shown that increase
of infections slows down and herd immunity is achieved when symptomatic
patients are 4-6\% of the population for the European countries we studied,
when the total infected fraction is between 50-56 \%. Finally, a method for
estimating the number of asymptomatic patients, who have been the key hidden
link in the spread of the infections, is presented.
| [
{
"created": "Fri, 29 May 2020 19:27:42 GMT",
"version": "v1"
}
] | 2020-06-02 | [
[
"Kaushal",
"Shaurya",
""
],
[
"Rajput",
"Abhineet Singh",
""
],
[
"Bhattacharya",
"Soumyadeep",
""
],
[
"Vidyasagar",
"M.",
""
],
[
"Kumar",
"Aloke",
""
],
[
"Prakash",
"Meher K.",
""
],
[
"Ansumali",
"Santosh",
""
]
] | A quantitative COVID-19 model that incorporates hidden asymptomatic patients is developed, and an analytic solution in parametric form is given. The model incorporates the impact of lockdown and the resulting spatial migration of population due to the announcement of lockdown. A method is presented for estimating the model parameters from real-world data. It is shown that the increase of infections slows down and herd immunity is achieved when symptomatic patients are 4-6% of the population for the European countries we studied, when the total infected fraction is between 50-56%. Finally, a method for estimating the number of asymptomatic patients, who have been the key hidden link in the spread of the infections, is presented. |
2303.03917 | Laurence Dufourny | Vincent Hellier, Hugues Dardente, Didier Lomet, Juliette Cogni\'e,
Laurence Dufourny (PRC) | Interactions between $\beta$-endorphin and kisspeptin neurons of the ewe
arcuate nucleus are modulated by photoperiod | null | Journal of Neuroendocrinology, In press | 10.1111/jne.13242 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Opioid peptides are well-known modulators of the central control of
reproduction. Among them, dynorphin coexpressed in kisspeptin (KP) neurons of
the arcuate nucleus (ARC) has been thoroughly studied for its autocrine effect
on KP release through $\kappa$ opioid receptors. Other studies have suggested a
role for $\beta$-endorphin (BEND), a peptide cleaved from the
proopiomelanocortin (POMC) precursor, on food intake and central control of
reproduction. Similarly to KP, BEND content in the ARC of sheep is modulated by
day length and BEND modulates food intake in a dose-dependent manner. As KP
levels in the ARC vary with photoperiodic and metabolic status, a
photoperiod-driven influence of BEND neurons on neighboring KP neurons is
plausible. The aim of this study was therefore to investigate a possible
modulatory action of BEND on KP neurons located in the ovine ARC. Using
confocal microscopy, numerous KP appositions on BEND neurons were found but
there was no photoperiodic variation in the number of these interactions in
ovariectomized, estradiol-replaced ewes. In contrast, BEND terminals on KP
neurons were twice as numerous under short days (SD), in ewes having an
activated gonadotropic axis, as compared to anestrus ewes under long days (LD).
Injection of 5$\mu$g BEND into the third ventricle of SD ewes induced a
significant and specific increase of activated KP neurons (16% versus 9% in
controls) while the percentage of overall activated (c-Fos positive) neurons,
was similar between both groups. These data suggest a photoperiod-dependent
influence of BEND on KP neurons of the ARC, which may influence GnRH pulsatile
secretion and inform KP neurons on the metabolic status.
| [
{
"created": "Tue, 7 Mar 2023 14:27:45 GMT",
"version": "v1"
}
] | 2023-03-08 | [
[
"Hellier",
"Vincent",
"",
"PRC"
],
[
"Dardente",
"Hugues",
"",
"PRC"
],
[
"Lomet",
"Didier",
"",
"PRC"
],
[
"Cognié",
"Juliette",
"",
"PRC"
],
[
"Dufourny",
"Laurence",
"",
"PRC"
]
] | Opioid peptides are well-known modulators of the central control of reproduction. Among them, dynorphin coexpressed in kisspeptin (KP) neurons of the arcuate nucleus (ARC) has been thoroughly studied for its autocrine effect on KP release through $\kappa$ opioid receptors. Other studies have suggested a role for $\beta$-endorphin (BEND), a peptide cleaved from the proopiomelanocortin (POMC) precursor, on food intake and central control of reproduction. Similarly to KP, BEND content in the ARC of sheep is modulated by day length and BEND modulates food intake in a dose-dependent manner. As KP levels in the ARC vary with photoperiodic and metabolic status, a photoperiod-driven influence of BEND neurons on neighboring KP neurons is plausible. The aim of this study was therefore to investigate a possible modulatory action of BEND on KP neurons located in the ovine ARC. Using confocal microscopy, numerous KP appositions on BEND neurons were found but there was no photoperiodic variation in the number of these interactions in ovariectomized, estradiol-replaced ewes. In contrast, BEND terminals on KP neurons were twice as numerous under short days (SD), in ewes having an activated gonadotropic axis, as compared to anestrus ewes under long days (LD). Injection of 5$\mu$g BEND into the third ventricle of SD ewes induced a significant and specific increase of activated KP neurons (16% versus 9% in controls) while the percentage of overall activated (c-Fos positive) neurons, was similar between both groups. These data suggest a photoperiod-dependent influence of BEND on KP neurons of the ARC, which may influence GnRH pulsatile secretion and inform KP neurons on the metabolic status. |
1511.09364 | Maximilian Schmidt | Maximilian Schmidt, Rembrandt Bakker, Kelly Shen, Gleb Bezgin,
Claus-Christian Hilgetag, Markus Diesmann and Sacha J. van Albada | Full-density multi-scale account of structure and dynamics of macaque
visual cortex | null | null | 10.1371/journal.pcbi.1006359 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a multi-scale spiking network model of all vision-related areas of
macaque cortex that represents each area by a full-scale microcircuit with
area-specific architecture. The layer- and population-resolved network
connectivity integrates axonal tracing data from the CoCoMac database with
recent quantitative tracing data, and is systematically refined using dynamical
constraints. Simulations reveal a stable asynchronous irregular ground state
with heterogeneous activity across areas, layers, and populations. Elicited by
large-scale interactions, the model reproduces longer intrinsic time scales in
higher compared to early visual areas. Activity propagates down the visual
hierarchy, similar to experimental results associated with visual imagery.
Cortico-cortical interaction patterns agree well with fMRI resting-state
functional connectivity. The model bridges the gap between local and
large-scale accounts of cortex, and clarifies how the detailed connectivity of
cortex shapes its dynamics on multiple scales.
| [
{
"created": "Mon, 30 Nov 2015 16:06:40 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Dec 2015 15:25:06 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Feb 2016 19:20:50 GMT",
"version": "v3"
},
{
"created": "Fri, 15 Apr 2016 08:05:14 GMT",
"version": "v4"
}
] | 2018-10-23 | [
[
"Schmidt",
"Maximilian",
""
],
[
"Bakker",
"Rembrandt",
""
],
[
"Shen",
"Kelly",
""
],
[
"Bezgin",
"Gleb",
""
],
[
"Hilgetag",
"Claus-Christian",
""
],
[
"Diesmann",
"Markus",
""
],
[
"van Albada",
"Sacha J.",
""
]
] | We present a multi-scale spiking network model of all vision-related areas of macaque cortex that represents each area by a full-scale microcircuit with area-specific architecture. The layer- and population-resolved network connectivity integrates axonal tracing data from the CoCoMac database with recent quantitative tracing data, and is systematically refined using dynamical constraints. Simulations reveal a stable asynchronous irregular ground state with heterogeneous activity across areas, layers, and populations. Elicited by large-scale interactions, the model reproduces longer intrinsic time scales in higher compared to early visual areas. Activity propagates down the visual hierarchy, similar to experimental results associated with visual imagery. Cortico-cortical interaction patterns agree well with fMRI resting-state functional connectivity. The model bridges the gap between local and large-scale accounts of cortex, and clarifies how the detailed connectivity of cortex shapes its dynamics on multiple scales. |
1908.07884 | Gurdip Uppal | Gurdip Uppal and Dervis Can Vural | Evolution of specialized microbial cooperation in dynamic fluids | 15 pages, 6 figures | null | 10.1111/jeb.13593 | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Here, we study the evolution of specialization using realistic computer
simulations of bacteria that secrete two public goods in a dynamic fluid.
Through this first principles approach, we find physical factors such as
diffusion, flow patterns, and decay rates are as influential as fitness
economics in governing the evolution of community structure, to the extent that
when mechanical factors are taken into account, (1) Generalist communities can
resist becoming specialists, despite the invasion fitness of specialization,
(2) Generalist and specialists can both resist cheaters despite the invasion
fitness of free-riding, (3) Multiple community structures can coexist despite
the opposing force of competitive exclusion. Our results emphasize the role of
spatial assortment and physical forces on niche partitioning and the evolution
of diverse community structures.
| [
{
"created": "Wed, 21 Aug 2019 14:13:10 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Jan 2020 20:01:36 GMT",
"version": "v2"
}
] | 2020-02-03 | [
[
"Uppal",
"Gurdip",
""
],
[
"Vural",
"Dervis Can",
""
]
] | Here, we study the evolution of specialization using realistic computer simulations of bacteria that secrete two public goods in a dynamic fluid. Through this first-principles approach, we find physical factors such as diffusion, flow patterns, and decay rates are as influential as fitness economics in governing the evolution of community structure, to the extent that when mechanical factors are taken into account, (1) Generalist communities can resist becoming specialists, despite the invasion fitness of specialization, (2) Generalists and specialists can both resist cheaters despite the invasion fitness of free-riding, (3) Multiple community structures can coexist despite the opposing force of competitive exclusion. Our results emphasize the role of spatial assortment and physical forces on niche partitioning and the evolution of diverse community structures. |
1910.00734 | Navid Mohammad Mirzaei | Navid Mohammad Mirzaei and Pak-Wing Fok and William S. Weintraub | An integrated approach to simulating the vulnerable atherosclerotic
plaque | 22 pages, 6 figures, 2 tables | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analyses of individual atherosclerotic plaques are mostly descriptive,
relying -- for example -- on histological classification by spectral analysis
of ultrasound waves or staining and observing particular cellular components.
Such passive methods have proved useful for characterizing the structure and
vulnerability of plaques but have little quantitative predictive power. In this
viewpoint article, we propose an integrated quantitative framework to
understand the evolution of plaque. The main approach is to use Partial
Differential Equations (PDEs) with macrophages, necrotic cells, oxidized
lipids, oxygen concentration and PDGF as primary variables coupled to a
biomechanical model to describe vessel growth. The model is deterministic,
providing mechanical, morphological, and histological characteristics of an
atherosclerotic vessel at any desired future time point. We discuss the pros
and cons of such a model, how such a model can provide insight to the
clinician, and potentially guide therapy. Finally, we use our model to create
computer-generated animations of a plaque evolution that are in qualitative
agreement with serial ultrasound images published by Kubo et al. (2010) and
hypothesize possible atherogenic mechanisms.
| [
{
"created": "Wed, 2 Oct 2019 01:14:29 GMT",
"version": "v1"
},
{
"created": "Sat, 23 Nov 2019 16:51:09 GMT",
"version": "v2"
}
] | 2019-11-26 | [
[
"Mirzaei",
"Navid Mohammad",
""
],
[
"Fok",
"Pak-Wing",
""
],
[
"Weintraub",
"William S.",
""
]
] | Analyses of individual atherosclerotic plaques are mostly descriptive, relying -- for example -- on histological classification by spectral analysis of ultrasound waves or staining and observing particular cellular components. Such passive methods have proved useful for characterizing the structure and vulnerability of plaques but have little quantitative predictive power. In this viewpoint article, we propose an integrated quantitative framework to understand the evolution of plaque. The main approach is to use Partial Differential Equations (PDEs) with macrophages, necrotic cells, oxidized lipids, oxygen concentration and PDGF as primary variables coupled to a biomechanical model to describe vessel growth. The model is deterministic, providing mechanical, morphological, and histological characteristics of an atherosclerotic vessel at any desired future time point. We discuss the pros and cons of such a model, how such a model can provide insight to the clinician, and potentially guide therapy. Finally, we use our model to create computer-generated animations of a plaque evolution that are in qualitative agreement with serial ultrasound images published by Kubo et al. (2010) and hypothesize possible atherogenic mechanisms. |
1104.0174 | Matteo Convertino | M. Convertino, J.F. Donoghue, M.L. Chu-Agor, G. A. Kiker, R.
Munoz-Carpena, R.A. Fischer, I. Linkov | Anthropogenic Renourishment Feedback on Shorebirds: a Multispecies
Bayesian Perspective | in press; Journal of Ecological Engineering 2011 | null | null | null | q-bio.OT q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper the realized niche of the Snowy Plover (Charadrius
alexandrinus), a primarily resident Florida shorebird, is described as a
function of the scenopoetic and bionomic variables at the nest-, landscape-,
and regional-scale. We identified some possible geomorphological controls
that influence nest-site selection and survival using data collected along the
Florida Gulf coast. In particular we focused on the effects of beach
replenishment interventions on the Snowy Plover (SP), and on the migratory
Piping Plover (PP) (Charadrius melodus) and Red Knot (RK) (Calidris canutus).
To quantify the relationship between past renourishment projects and shorebird
species we used a Monte Carlo procedure to sample from the posterior
distribution of the binomial probabilities that a region is not a nesting or a
wintering ground conditional on the occurrence of a beach replenishment
intervention in the same and the previous year. The results indicate that it
was 2.3, 3.1, and 0.8 times more likely that a region was not a wintering
ground following a year with a renourishment intervention for the SP, PP and RK
respectively. For the SP it was 2.5 times more likely that a region was not a
breeding ground after a renourishment event. Through a maximum entropy
principle model we observed small differences in the habitat use of the SP
during the breeding and the wintering season. However the habitats where RK was
observed appeared quite different. Maintaining and creating optimal suitable
habitats for SP characterized by sparse low vegetation in the foredunes areas,
and uneven/low-slope beach surfaces, is the proposed conservation scenario to
convert anthropic beach restorations and SP populations into a positive
feedback without impacting other threatened shorebird species.
| [
{
"created": "Fri, 1 Apr 2011 13:57:06 GMT",
"version": "v1"
}
] | 2011-04-04 | [
[
"Convertino",
"M.",
""
],
[
"Donoghue",
"J. F.",
""
],
[
"Chu-Agor",
"M. L.",
""
],
[
"Kiker",
"G. A.",
""
],
[
"Munoz-Carpena",
"R.",
""
],
[
"Fischer",
"R. A.",
""
],
[
"Linkov",
"I.",
""
]
] | In this paper the realized niche of the Snowy Plover (Charadrius alexandrinus), a primarily resident Florida shorebird, is described as a function of the scenopoetic and bionomic variables at the nest-, landscape-, and regional-scale. We identified some possible geomorphological controls that influence nest-site selection and survival using data collected along the Florida Gulf coast. In particular we focused on the effects of beach replenishment interventions on the Snowy Plover (SP), and on the migratory Piping Plover (PP) (Charadrius melodus) and Red Knot (RK) (Calidris canutus). To quantify the relationship between past renourishment projects and shorebird species we used a Monte Carlo procedure to sample from the posterior distribution of the binomial probabilities that a region is not a nesting or a wintering ground conditional on the occurrence of a beach replenishment intervention in the same and the previous year. The results indicate that it was 2.3, 3.1, and 0.8 times more likely that a region was not a wintering ground following a year with a renourishment intervention for the SP, PP and RK respectively. For the SP it was 2.5 times more likely that a region was not a breeding ground after a renourishment event. Through a maximum entropy principle model we observed small differences in the habitat use of the SP during the breeding and the wintering season. However the habitats where RK was observed appeared quite different. Maintaining and creating optimal suitable habitats for SP characterized by sparse low vegetation in the foredunes areas, and uneven/low-slope beach surfaces, is the proposed conservation scenario to convert anthropic beach restorations and SP populations into a positive feedback without impacting other threatened shorebird species. |
0705.2607 | Max Shpak | Max Shpak | Selection Against Demographic Stochasticity in Age-Structured
Populations | null | null | null | null | q-bio.PE | null | It has been shown that differences in fecundity variance can influence the
probability of invasion of a genotype in a population, i.e. a genotype with
lower variance in offspring number can be favored in finite populations even if
it has a somewhat lower mean fitness than a competitor. In this paper,
Gillespie's results are extended to population genetic systems with explicit
age structure, where the demographic variance (variance in growth rate)
calculated in the work of Engen and colleagues is used as a generalization of
"variance in offspring number" to predict the interaction between deterministic
and random forces driving change in allele frequency. By calculating the
variance from the life history parameters, it is shown that selection against
variance in the growth rate will favor genotypes with lower stochasticity in
age specific survival and fertility rates. A diffusion approximation for
selection and drift in a population with two genotypes with different life
history matrices (and therefore, different growth rates and demographic
variances) is derived and shown to be consistent with individual based
simulations. It is also argued that for finite populations, perturbation
analyses of both the growth rate and demographic variances may be necessary to
determine the sensitivity of "fitness" (broadly defined) to changes in the life
history parameters.
| [
{
"created": "Thu, 17 May 2007 21:44:00 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Shpak",
"Max",
""
]
] | It has been shown that differences in fecundity variance can influence the probability of invasion of a genotype in a population, i.e. a genotype with lower variance in offspring number can be favored in finite populations even if it has a somewhat lower mean fitness than a competitor. In this paper, Gillespie's results are extended to population genetic systems with explicit age structure, where the demographic variance (variance in growth rate) calculated in the work of Engen and colleagues is used as a generalization of "variance in offspring number" to predict the interaction between deterministic and random forces driving change in allele frequency. By calculating the variance from the life history parameters, it is shown that selection against variance in the growth rate will favor genotypes with lower stochasticity in age specific survival and fertility rates. A diffusion approximation for selection and drift in a population with two genotypes with different life history matrices (and therefore, different growth rates and demographic variances) is derived and shown to be consistent with individual based simulations. It is also argued that for finite populations, perturbation analyses of both the growth rate and demographic variances may be necessary to determine the sensitivity of "fitness" (broadly defined) to changes in the life history parameters. |
1309.2618 | Youdong Mao | Youdong Mao, Luis R. Castillo-Menendez, Joseph Sodroski | Dual-target function validation of single-particle selection from
low-contrast cryo-electron micrographs | 43 pages, 7 figures | BMC Bioinformatics 2019; 20:169 | 10.1186/s12859-019-2714-8 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weak-signal detection and single-particle selection from low-contrast
micrographs of frozen hydrated biomolecules by cryo-electron microscopy
(cryo-EM) presents a practical challenge. Cryo-EM image contrast degrades as
the size of biomolecules of structural interest decreases. When the image
contrast falls into a range where the location or presence of single particles
becomes ambiguous, a need arises for objective computational approaches to
detect weak signal and to select and verify particles from these low-contrast
micrographs. Here we propose an objective validation scheme for low-contrast
particle selection using a combination of two different target functions. In an
implementation of this dual-target function (DTF) validation, a first target
function of fast local correlation was used to select particles through
template matching, followed by signal validation through a second target
function of maximum likelihood. By a systematic study of simulated data, we
found that such an implementation of DTF validation is capable of selecting and
verifying particles from cryo-EM micrographs with a signal-to-noise ratio as
low as 0.002. Importantly, we demonstrated that DTF validation can robustly
evade over-fitting or reference bias from the particle-picking template,
allowing true signal to emerge from amidst heavy noise in an objective fashion.
The DTF approach allows efficient assembly of a large number of single-particle
cryo-EM images of smaller biomolecules or specimens containing
contrast-degrading agents like detergents in a semi-automatic manner.
| [
{
"created": "Tue, 10 Sep 2013 19:19:48 GMT",
"version": "v1"
}
] | 2019-04-16 | [
[
"Mao",
"Youdong",
""
],
[
"Castillo-Menendez",
"Luis R.",
""
],
[
"Sodroski",
"Joseph",
""
]
] | Weak-signal detection and single-particle selection from low-contrast micrographs of frozen hydrated biomolecules by cryo-electron microscopy (cryo-EM) presents a practical challenge. Cryo-EM image contrast degrades as the size of biomolecules of structural interest decreases. When the image contrast falls into a range where the location or presence of single particles becomes ambiguous, a need arises for objective computational approaches to detect weak signal and to select and verify particles from these low-contrast micrographs. Here we propose an objective validation scheme for low-contrast particle selection using a combination of two different target functions. In an implementation of this dual-target function (DTF) validation, a first target function of fast local correlation was used to select particles through template matching, followed by signal validation through a second target function of maximum likelihood. By a systematic study of simulated data, we found that such an implementation of DTF validation is capable of selecting and verifying particles from cryo-EM micrographs with a signal-to-noise ratio as low as 0.002. Importantly, we demonstrated that DTF validation can robustly evade over-fitting or reference bias from the particle-picking template, allowing true signal to emerge from amidst heavy noise in an objective fashion. The DTF approach allows efficient assembly of a large number of single-particle cryo-EM images of smaller biomolecules or specimens containing contrast-degrading agents like detergents in a semi-automatic manner. |
q-bio/0702048 | Bo Deng | Bo Deng | The Time Invariance Principle, Ecological (Non)Chaos, and A Fundamental
Pitfall of Discrete Modeling | null | null | null | null | q-bio.PE | null | This paper is to show that most discrete models used for population dynamics
in ecology are inherently pathological in that their predictions cannot be
independently verified by experiments because they violate a fundamental
principle of physics. The result is used to tackle an on-going controversy
regarding ecological chaos. Another implication of the result is that all
continuous dynamical systems must be modeled by differential equations. As a
result it suggests that research based on discrete modeling must be closely
scrutinized and the teaching of calculus and differential equations must be
emphasized for students of biology.
| [
{
"created": "Fri, 23 Feb 2007 16:36:48 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Mar 2007 19:53:28 GMT",
"version": "v2"
},
{
"created": "Fri, 9 Mar 2007 15:18:21 GMT",
"version": "v3"
}
] | 2007-05-23 | [
[
"Deng",
"Bo",
""
]
] | This paper is to show that most discrete models used for population dynamics in ecology are inherently pathological in that their predictions cannot be independently verified by experiments because they violate a fundamental principle of physics. The result is used to tackle an on-going controversy regarding ecological chaos. Another implication of the result is that all continuous dynamical systems must be modeled by differential equations. As a result it suggests that research based on discrete modeling must be closely scrutinized and the teaching of calculus and differential equations must be emphasized for students of biology. |
2305.13414 | Daniel Jorge | Daniel Cardoso Pereira Jorge and Ricardo Martinez-Garcia | Demographic effects of aggregation in the presence of a component Allee
effect | null | null | null | null | q-bio.PE nlin.AO | http://creativecommons.org/licenses/by/4.0/ | Intraspecific interactions are key drivers of population dynamics because
they establish relations between individual fitness and population density. The
component Allee effect is defined as a positive correlation between any fitness
component of a focal organism and population density, and it can lead to
positive density dependence in the population per capita growth rate. The
spatial structure is key to determining whether and to which extent a component
Allee effect will manifest at the demographic level because it determines how
individuals interact with one another. However, existing spatial models to
study the Allee effect impose a fixed spatial structure, which limits our
understanding of how a component Allee effect and the spatial dynamics jointly
determine the existence of demographic Allee effects. To fill this gap, we
introduce a spatially-explicit theoretical framework where spatial structure
and population dynamics are emergent properties of the individual-level
demographic rates. Depending on the intensity of the individual processes the
population exhibits a variety of spatial patterns that determine the
demographic-level by-products of an existing individual-level component Allee
effect. We find that aggregation increases population abundance and allows
populations to survive in harsher environments and at lower global population
densities when compared with uniformly distributed organisms. Moreover,
aggregation can prevent the component Allee effect from manifesting at the
population level or restrict it to the level of each independent group. These
results provide a mechanistic understanding of how component Allee effects
operate for different spatial population structures and show at the population
level. Our results contribute to better understanding population dynamics in
the presence of Allee effects and can potentially inform population management
strategies.
| [
{
"created": "Mon, 22 May 2023 18:57:53 GMT",
"version": "v1"
}
] | 2023-05-24 | [
[
"Jorge",
"Daniel Cardoso Pereira",
""
],
[
"Martinez-Garcia",
"Ricardo",
""
]
] | Intraspecific interactions are key drivers of population dynamics because they establish relations between individual fitness and population density. The component Allee effect is defined as a positive correlation between any fitness component of a focal organism and population density, and it can lead to positive density dependence in the population per capita growth rate. The spatial structure is key to determining whether and to which extent a component Allee effect will manifest at the demographic level because it determines how individuals interact with one another. However, existing spatial models to study the Allee effect impose a fixed spatial structure, which limits our understanding of how a component Allee effect and the spatial dynamics jointly determine the existence of demographic Allee effects. To fill this gap, we introduce a spatially-explicit theoretical framework where spatial structure and population dynamics are emergent properties of the individual-level demographic rates. Depending on the intensity of the individual processes the population exhibits a variety of spatial patterns that determine the demographic-level by-products of an existing individual-level component Allee effect. We find that aggregation increases population abundance and allows populations to survive in harsher environments and at lower global population densities when compared with uniformly distributed organisms. Moreover, aggregation can prevent the component Allee effect from manifesting at the population level or restrict it to the level of each independent group. These results provide a mechanistic understanding of how component Allee effects operate for different spatial population structures and show at the population level. Our results contribute to better understanding population dynamics in the presence of Allee effects and can potentially inform population management strategies. |
1901.03659 | Jan Bartsch | Jan Bartsch, Alfio Borz\`i, Christina Schenk, Dominik Schmidt, Jonas
M\"uller, Volker Schulz, Kai Velten | An extended model of wine fermentation including aromas and acids | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The art of viticulture and the quest for making wines has a long tradition
and it just started recently that mathematicians entered this field with their
main contribution of modelling alcoholic fermentation. These models consist of
systems of ordinary differential equations that describe the kinetics of the
bio-chemical reactions occurring in the fermentation process. The aim of this
paper is to present a new model of wine fermentation that accurately describes
the yeast dying component, the presence of glucose transporters, and the
formation of aromas and acids. Therefore the new model could become a valuable
tool to predict the taste of the wine and provide the starting point for an
emerging control technology that aims at improving the quality of the wine by
steering a well-behaved fermentation process that is also energetically more
efficient. Results of numerical simulations are presented that successfully
confirm the validity of the proposed model by comparison with real data.
| [
{
"created": "Sat, 5 Jan 2019 10:25:24 GMT",
"version": "v1"
}
] | 2019-01-14 | [
[
"Bartsch",
"Jan",
""
],
[
"Borzì",
"Alfio",
""
],
[
"Schenk",
"Christina",
""
],
[
"Schmidt",
"Dominik",
""
],
[
"Müller",
"Jonas",
""
],
[
"Schulz",
"Volker",
""
],
[
"Velten",
"Kai",
""
]
] | The art of viticulture and the quest for making wines has a long tradition and it just started recently that mathematicians entered this field with their main contribution of modelling alcoholic fermentation. These models consist of systems of ordinary differential equations that describe the kinetics of the bio-chemical reactions occurring in the fermentation process. The aim of this paper is to present a new model of wine fermentation that accurately describes the yeast dying component, the presence of glucose transporters, and the formation of aromas and acids. Therefore the new model could become a valuable tool to predict the taste of the wine and provide the starting point for an emerging control technology that aims at improving the quality of the wine by steering a well-behaved fermentation process that is also energetically more efficient. Results of numerical simulations are presented that successfully confirm the validity of the proposed model by comparison with real data. |
2106.12022 | Turner Silverthorne | Turner Silverthorne, Edward Saehong Oh, Adam R Stinchcombe | Promoter methylation in a mixed feedback loop circadian clock model | null | null | 10.1103/PhysRevE.105.034411 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce and analyze an extension of the mixed feedback loop model of
Fran\c{c}ois and Hakim. Our extension includes an additional promoter state and
allows for reversible protein sequestration, which was absent from the original
studies of the mixed feedback loop model. Motivated by experimental
observations that link DNA methylation with circadian gene expression, we use
our extended model to investigate the role of DNA methylation(s) in the
mammalian circadian clock. We extend the perturbation analysis of Fran\c{c}ois
and Hakim to determine how methylation affects the presence and the periodicity
of oscillations. We derive a modified Goodwin oscillator model as an
approximation to show that although methylation contributes to period control,
excessive methylation can abolish rhythmicity.
| [
{
"created": "Tue, 22 Jun 2021 19:16:00 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Sep 2021 17:34:58 GMT",
"version": "v2"
}
] | 2022-04-13 | [
[
"Silverthorne",
"Turner",
""
],
[
"Oh",
"Edward Saehong",
""
],
[
"Stinchcombe",
"Adam R",
""
]
] | We introduce and analyze an extension of the mixed feedback loop model of Fran\c{c}ois and Hakim. Our extension includes an additional promoter state and allows for reversible protein sequestration, which was absent from the original studies of the mixed feedback loop model. Motivated by experimental observations that link DNA methylation with circadian gene expression, we use our extended model to investigate the role of DNA methylation(s) in the mammalian circadian clock. We extend the perturbation analysis of Fran\c{c}ois and Hakim to determine how methylation affects the presence and the periodicity of oscillations. We derive a modified Goodwin oscillator model as an approximation to show that although methylation contributes to period control, excessive methylation can abolish rhythmicity. |
1905.02815 | Anton Zadorin | Anton S. Zadorin | Allostery and conformational changes upon binding as generic features of
proteins: a high-dimension geometrical approach | 17 pages, 3 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A growing body of experimental evidence shows that it is general for a
ligand binding protein to have a potential for allosteric regulation and for
further evolution. In addition, such proteins generically change their
conformation upon binding. O. Rivoire has recently proposed an evolutionary
scenario that explains these properties as a generic byproduct of selection for
exquisite discrimination between very similar ligands. The initial claim was
supported by two classes of basic examples: continuous protein models with
small numbers of degrees of freedom, on which the development of a
conformational switch was established, and a 2-dimensional spin glass model
supporting the rest of the statement. This work aimed to clarify the
implication of the exquisite discrimination for smooth models with large number
of degrees of freedom, the situation closer to real biological systems. With
the help of differential geometry, jet-space analysis, and transversality
theorems, it is shown that the claim holds true for any generic flexible system
that can be described in terms of smooth manifolds. The result suggests that,
indeed, evolutionary solutions to the exquisite discrimination problem, if they
exist, are located near a codimension-1 subspace of the appropriate genotypical
space. This constraint, in turn, gives rise to a potential for the allosteric
regulation of the discrimination via generic conformational changes upon
binding.
| [
{
"created": "Tue, 7 May 2019 21:24:29 GMT",
"version": "v1"
}
] | 2019-05-09 | [
[
"Zadorin",
"Anton S.",
""
]
] | A growing body of experimental evidence shows that it is general for a ligand binding protein to have a potential for allosteric regulation and for further evolution. In addition, such proteins generically change their conformation upon binding. O. Rivoire has recently proposed an evolutionary scenario that explains these properties as a generic byproduct of selection for exquisite discrimination between very similar ligands. The initial claim was supported by two classes of basic examples: continuous protein models with small numbers of degrees of freedom, on which the development of a conformational switch was established, and a 2-dimensional spin glass model supporting the rest of the statement. This work aimed to clarify the implication of the exquisite discrimination for smooth models with large number of degrees of freedom, the situation closer to real biological systems. With the help of differential geometry, jet-space analysis, and transversality theorems, it is shown that the claim holds true for any generic flexible system that can be described in terms of smooth manifolds. The result suggests that, indeed, evolutionary solutions to the exquisite discrimination problem, if they exist, are located near a codimension-1 subspace of the appropriate genotypical space. This constraint, in turn, gives rise to a potential for the allosteric regulation of the discrimination via generic conformational changes upon binding. |
1503.06522 | Simon Angus | Simon D. Angus and Jonathan Newton | Shared intentions and the advance of cumulative culture in
hunter-gatherers | 6 pages, 4 figures, 1 table, Supplementary Information not included | null | 10.1371/journal.pcbi.1004587 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been hypothesized that the evolution of modern human cognition was
catalyzed by the development of jointly intentional modes of behaviour. From an
early age (1-2 years), human infants outperform apes at tasks that involve
collaborative activity. Specifically, human infants excel at joint action
motivated by reasoning of the form "we will do X" (shared intentions), as
opposed to reasoning of the form "I will do X [because he is doing X]"
(individual intentions). The mechanism behind the evolution of shared
intentionality is unknown. Here we formally model the evolution of jointly
intentional action and show under what conditions it is likely to have emerged
in humans. Modelling the interaction of hunter-gatherers as a coordination
game, we find that when the benefits from adopting new technologies or norms
are low but positive, the sharing of intentions does not evolve, despite being
a mutualistic behaviour that directly benefits all participants. When the
benefits from adopting new technologies or norms are high, such as may be the
case during a period of rapid environmental change, shared intentionality
evolves and rapidly becomes dominant in the population. Our results shed new
light on the evolution of collaborative behaviours.
| [
{
"created": "Mon, 23 Mar 2015 04:10:11 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Mar 2015 01:05:48 GMT",
"version": "v2"
}
] | 2017-01-23 | [
[
"Angus",
"Simon D.",
""
],
[
"Newton",
"Jonathan",
""
]
] | It has been hypothesized that the evolution of modern human cognition was catalyzed by the development of jointly intentional modes of behaviour. From an early age (1-2 years), human infants outperform apes at tasks that involve collaborative activity. Specifically, human infants excel at joint action motivated by reasoning of the form "we will do X" (shared intentions), as opposed to reasoning of the form "I will do X [because he is doing X]" (individual intentions). The mechanism behind the evolution of shared intentionality is unknown. Here we formally model the evolution of jointly intentional action and show under what conditions it is likely to have emerged in humans. Modelling the interaction of hunter-gatherers as a coordination game, we find that when the benefits from adopting new technologies or norms are low but positive, the sharing of intentions does not evolve, despite being a mutualistic behaviour that directly benefits all participants. When the benefits from adopting new technologies or norms are high, such as may be the case during a period of rapid environmental change, shared intentionality evolves and rapidly becomes dominant in the population. Our results shed new light on the evolution of collaborative behaviours. |
1804.01487 | Carina Curto | Katherine Morrison and Carina Curto | Predicting neural network dynamics via graphical analysis | 29 pages, 19 figures. A book chapter for advanced undergraduates to
appear in "Algebraic and Combinatorial Computational Biology." R. Robeva, M.
Macaulay (Eds) 2018 | null | null | null | q-bio.NC math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural network models in neuroscience allow one to study how the connections
between neurons shape the activity of neural circuits in the brain. In this
chapter, we study Combinatorial Threshold-Linear Networks (CTLNs) in order to
understand how the pattern of connectivity, as encoded by a directed graph,
shapes the emergent nonlinear dynamics of the corresponding network. Important
aspects of these dynamics are controlled by the stable and unstable fixed
points of the network, and we show how these fixed points can be determined via
graph-based rules. We also present an algorithm for predicting sequences of
neural activation from the underlying directed graph, and examine the effect of
graph symmetries on a network's set of attractors.
| [
{
"created": "Wed, 4 Apr 2018 16:05:52 GMT",
"version": "v1"
}
] | 2018-04-05 | [
[
"Morrison",
"Katherine",
""
],
[
"Curto",
"Carina",
""
]
] | Neural network models in neuroscience allow one to study how the connections between neurons shape the activity of neural circuits in the brain. In this chapter, we study Combinatorial Threshold-Linear Networks (CTLNs) in order to understand how the pattern of connectivity, as encoded by a directed graph, shapes the emergent nonlinear dynamics of the corresponding network. Important aspects of these dynamics are controlled by the stable and unstable fixed points of the network, and we show how these fixed points can be determined via graph-based rules. We also present an algorithm for predicting sequences of neural activation from the underlying directed graph, and examine the effect of graph symmetries on a network's set of attractors. |
1901.00258 | Wenpo Yao | Yao Wen-po, Yao wen-li, Dai Jia-fei, Wang Jun | Time irreversibility and its application in epileptic brain electrical
activities | The results in the original paper might be misleading and need
further consideration. Therefore, we would like to withdraw this manuscript | null | null | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time irreversibility (temporal asymmetry) is one of the fundamental properties
that characterize the nonlinearity of complex dynamical processes, and our
brain is a typical complex dynamical system manifesting nonlinearity. Two
subtraction-based parameters, Ys and X2, are employed to measure the
probabilistic differences of permutations instead of raw vectors for the
simplified quantification of time irreversibility, which is validated by
chaotic and reversible processes and the surrogate data. We show that it is
equivalent to quantify time irreversibility by measuring the probabilistic
difference between the forward and its backward processes and between the
symmetric permutations. We then detect time irreversibility in two groups of
epileptic EEGs, from the Nanjing General Hospital (NJGH) and from the public
Bonn epileptic database. In our contribution, NJGH epileptic EEGs during
seizure-free intervals have lower time irreversibility than the control data
while those of the Bonn data sets have higher nonlinearity than the healthy
brain electrical activities. To address these inconsistent results, we conduct
a multi-scale analysis and offer an explanation based on circadian rhythms in
epileptic nonlinearity; however, more targeted research is needed to verify our
assumptions or to determine whether other factors account for the
inconsistency.
| [
{
"created": "Wed, 2 Jan 2019 03:55:20 GMT",
"version": "v1"
},
{
"created": "Sun, 8 Mar 2020 09:02:41 GMT",
"version": "v2"
}
] | 2020-03-10 | [
[
"Wen-po",
"Yao",
""
],
[
"wen-li",
"Yao",
""
],
[
"Jia-fei",
"Dai",
""
],
[
"Jun",
"Wang",
""
]
] | Time irreversibility (temporal asymmetry) is one of the fundamental properties that characterize the nonlinearity of complex dynamical processes, and our brain is a typical complex dynamical system manifesting nonlinearity. Two subtraction-based parameters, Ys and X2, are employed to measure the probabilistic differences of permutations instead of raw vectors for the simplified quantification of time irreversibility, which is validated by chaotic and reversible processes and the surrogate data. We show that it is equivalent to quantify time irreversibility by measuring the probabilistic difference between the forward and its backward processes and between the symmetric permutations. We then detect time irreversibility in two groups of epileptic EEGs, from the Nanjing General Hospital (NJGH) and from the public Bonn epileptic database. In our contribution, NJGH epileptic EEGs during seizure-free intervals have lower time irreversibility than the control data while those of the Bonn data sets have higher nonlinearity than the healthy brain electrical activities. To address these inconsistent results, we conduct a multi-scale analysis and offer an explanation based on circadian rhythms in epileptic nonlinearity; however, more targeted research is needed to verify our assumptions or to determine whether other factors account for the inconsistency.
2006.14800 | Randall O'Reilly | Randall C. O'Reilly, Jacob L. Russin, Maryam Zolfaghar, and John
Rohrlich | Deep Predictive Learning in Neocortex and Pulvinar | 56 pages, 22 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | How do humans learn from raw sensory experience? Throughout life, but most
obviously in infancy, we learn without explicit instruction. We propose a
detailed biological mechanism for the widely-embraced idea that learning is
based on the differences between predictions and actual outcomes (i.e.,
predictive error-driven learning). Specifically, numerous weak projections into
the pulvinar nucleus of the thalamus generate top-down predictions, and sparse,
focal driver inputs from lower areas supply the actual outcome, originating in
layer 5 intrinsic bursting (5IB) neurons. Thus, the outcome is only briefly
activated, roughly every 100 msec (i.e., 10 Hz, alpha), resulting in a temporal
difference error signal, which drives local synaptic changes throughout the
neocortex, resulting in a biologically-plausible form of error backpropagation
learning. We implemented these mechanisms in a large-scale model of the visual
system, and found that the simulated inferotemporal (IT) pathway learns to
systematically categorize 3D objects according to invariant shape properties,
based solely on predictive learning from raw visual inputs. These categories
match human judgments on the same stimuli, and are consistent with neural
representations in IT cortex in primates.
| [
{
"created": "Fri, 26 Jun 2020 05:02:44 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Jan 2021 10:56:07 GMT",
"version": "v2"
}
] | 2021-01-29 | [
[
"O'Reilly",
"Randall C.",
""
],
[
"Russin",
"Jacob L.",
""
],
[
"Zolfaghar",
"Maryam",
""
],
[
"Rohrlich",
"John",
""
]
] | How do humans learn from raw sensory experience? Throughout life, but most obviously in infancy, we learn without explicit instruction. We propose a detailed biological mechanism for the widely-embraced idea that learning is based on the differences between predictions and actual outcomes (i.e., predictive error-driven learning). Specifically, numerous weak projections into the pulvinar nucleus of the thalamus generate top-down predictions, and sparse, focal driver inputs from lower areas supply the actual outcome, originating in layer 5 intrinsic bursting (5IB) neurons. Thus, the outcome is only briefly activated, roughly every 100 msec (i.e., 10 Hz, alpha), resulting in a temporal difference error signal, which drives local synaptic changes throughout the neocortex, resulting in a biologically-plausible form of error backpropagation learning. We implemented these mechanisms in a large-scale model of the visual system, and found that the simulated inferotemporal (IT) pathway learns to systematically categorize 3D objects according to invariant shape properties, based solely on predictive learning from raw visual inputs. These categories match human judgments on the same stimuli, and are consistent with neural representations in IT cortex in primates. |
1110.1611 | Ling Xue Ms | Ling Xue, Lee W. Cohnstaedt, H. Morgan Scott, Caterina Scoglio | A hierarchical network approach for modeling Rift Valley fever epidemics
with applications in North America | null | PLOS ONE 2013 | 10.1371/journal.pone.0062049 | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rift Valley fever is a vector-borne zoonotic disease which causes high
morbidity and mortality in livestock. In the event Rift Valley fever virus is
introduced to the United States or other non-endemic areas, understanding the
potential patterns of spread and the areas at risk based on disease vectors and
hosts will be vital for developing mitigation strategies. Presented here is a
general network-based mathematical model of Rift Valley fever. Given a lack of
empirical data on disease vector species and their vector competence, this
discrete time epidemic model uses stochastic parameters following several PERT
distributions to model the dynamic interactions between hosts and likely North
American mosquito vectors in dispersed geographic areas. Spatial effects and
climate factors are also addressed in the model. The model is applied to a
large directed asymmetric network of 3,621 nodes based on actual farms to
examine a hypothetical introduction to some counties of Texas, an important
ranching area in the United States of America (U.S.A.). The nodes of the
networks represent livestock farms, livestock markets, and feedlots, and the
links represent cattle movements and mosquito diffusion between different
nodes. Cattle and mosquito (Aedes and Culex) populations are treated with
different contact networks to assess virus propagation. Rift Valley fever virus
spread is assessed under various initial infection conditions (infected
mosquito eggs, adults or cattle). A surprising trend is that fewer initial
infectious organisms result in a longer delay before a larger and more
prolonged outbreak. The delay is likely caused by a lack of herd immunity while
the infection expands geographically before becoming an epidemic involving
many dispersed farms and animals almost simultaneously.
| [
{
"created": "Fri, 7 Oct 2011 19:04:48 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Feb 2013 17:03:21 GMT",
"version": "v2"
}
] | 2013-03-27 | [
[
"Xue",
"Ling",
""
],
[
"Cohnstaedt",
"Lee W.",
""
],
[
"Scott",
"H. Morgan",
""
],
[
"Scoglio",
"Caterina",
""
]
] | Rift Valley fever is a vector-borne zoonotic disease which causes high morbidity and mortality in livestock. In the event Rift Valley fever virus is introduced to the United States or other non-endemic areas, understanding the potential patterns of spread and the areas at risk based on disease vectors and hosts will be vital for developing mitigation strategies. Presented here is a general network-based mathematical model of Rift Valley fever. Given a lack of empirical data on disease vector species and their vector competence, this discrete time epidemic model uses stochastic parameters following several PERT distributions to model the dynamic interactions between hosts and likely North American mosquito vectors in dispersed geographic areas. Spatial effects and climate factors are also addressed in the model. The model is applied to a large directed asymmetric network of 3,621 nodes based on actual farms to examine a hypothetical introduction to some counties of Texas, an important ranching area in the United States of America (U.S.A.). The nodes of the networks represent livestock farms, livestock markets, and feedlots, and the links represent cattle movements and mosquito diffusion between different nodes. Cattle and mosquito (Aedes and Culex) populations are treated with different contact networks to assess virus propagation. Rift Valley fever virus spread is assessed under various initial infection conditions (infected mosquito eggs, adults or cattle). A surprising trend is that fewer initial infectious organisms result in a longer delay before a larger and more prolonged outbreak. The delay is likely caused by a lack of herd immunity while the infection expands geographically before becoming an epidemic involving many dispersed farms and animals almost simultaneously.
2010.14068 | Navodini Wijethilake | Navodini Wijethilake, Mobarakol Islam, Dulani Meedeniya, Charith
Chitraranjan, Indika Perera, Hongliang Ren | Radiogenomics of Glioblastoma: Identification of Radiomics associated
with Molecular Subtypes | 2nd MICCAI workshop on Radiomics and Radiogenomics in Neuro-oncology
using AI, Springer, LNCS, (to appear) | null | null | null | q-bio.QM cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Glioblastoma is the most malignant type of central nervous system tumor with
GBM subtypes distinguished based on molecular-level gene alterations. These
alterations also affect the histology. Thus, they can cause visible changes in
images, such as enhancement and edema development. In this
study, we extract intensity, volume, and texture features from the tumor
subregions to identify the correlations with gene expression features and
overall survival. Consequently, we utilize radiomics to find associations
with the subtypes of glioblastoma. Accordingly, the fractal dimensions of the
whole tumor, tumor core, and necrosis regions show a significant difference
between the Proneural, Classical and Mesenchymal subtypes. Additionally, the
subtypes of GBM are predicted with an average accuracy of 79% utilizing
radiomics and accuracy over 90% utilizing gene expression profiles.
| [
{
"created": "Tue, 27 Oct 2020 05:31:56 GMT",
"version": "v1"
}
] | 2020-10-28 | [
[
"Wijethilake",
"Navodini",
""
],
[
"Islam",
"Mobarakol",
""
],
[
"Meedeniya",
"Dulani",
""
],
[
"Chitraranjan",
"Charith",
""
],
[
"Perera",
"Indika",
""
],
[
"Ren",
"Hongliang",
""
]
] | Glioblastoma is the most malignant type of central nervous system tumor with GBM subtypes distinguished based on molecular-level gene alterations. These alterations also affect the histology. Thus, they can cause visible changes in images, such as enhancement and edema development. In this study, we extract intensity, volume, and texture features from the tumor subregions to identify the correlations with gene expression features and overall survival. Consequently, we utilize radiomics to find associations with the subtypes of glioblastoma. Accordingly, the fractal dimensions of the whole tumor, tumor core, and necrosis regions show a significant difference between the Proneural, Classical and Mesenchymal subtypes. Additionally, the subtypes of GBM are predicted with an average accuracy of 79% utilizing radiomics and accuracy over 90% utilizing gene expression profiles.
2109.12428 | Md. Kamrujjaman | Md. Shahriar Mahmud, Md. Kamrujjaman, Md. Mashih Ibn Yasin Adan, Md.
Alamgir Hossain, Md. Mizanur Rahman, Md. Shahidul Islam, Muhammad
Mohebujjaman and Md. Mamun Molla | Vaccine efficacy and SARS CoV 2 control in California and USA during the
session 2020 2026: A modeling study | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Besides maintaining health precautions, vaccination has been the only
prevention against SARS-CoV-2, though no clinically proven 100% effective
vaccine has been developed to date. At this stage, to contain the fallout of
this pandemic, experts need to know the threshold of the vaccine efficacy rate
and how long this pandemic may extend with vaccines that have different
efficacy rates. In this article, a mathematical modeling study has been
conducted on the importance of vaccination and vaccine efficacy rate during an
ongoing pandemic. We simulated a five-compartment mathematical model to analyze
the pandemic scenario in both California and the whole U.S. We considered four
vaccines, Pfizer, Moderna, AstraZeneca, and Johnson and Johnson, which are
being used widely to control the COVID-19 pandemic, in addition to two special
cases: a vaccine with 100% efficacy rate and no vaccine under use. Both
the infection and death rates are very high in California. Our model suggests
that the pandemic situation in California will be under control in the last
quartile of the year 2023 if frequent vaccination is continued with the Pfizer
vaccine. During this time, six waves will happen from the beginning of the
immunization where the case fatality and recovery rates will be 1.697% and
98.30%, respectively. However, according to the considered model, this period
might be extended to mid-2024 when vaccines with lower efficacy rates
are used. The more effective a vaccine, the fewer people suffer from this
malignant infection. Although specific groups of people are prioritized
initially, mass
vaccination is needed to control the spread of the disease.
| [
{
"created": "Wed, 15 Sep 2021 15:03:09 GMT",
"version": "v1"
}
] | 2021-09-28 | [
[
"Mahmud",
"Md. Shahriar",
""
],
[
"Kamrujjaman",
"Md.",
""
],
[
"Adan",
"Md. Mashih Ibn Yasin",
""
],
[
"Hossain",
"Md. Alamgir",
""
],
[
"Rahman",
"Md. Mizanur",
""
],
[
"Islam",
"Md. Shahidul",
""
],
[
"Mohebujjaman",
"Muhammad",
""
],
[
"Molla",
"Md. Mamun",
""
]
] | Besides maintaining health precautions, vaccination has been the only prevention against SARS-CoV-2, though no clinically proven 100% effective vaccine has been developed to date. At this stage, to contain the fallout of this pandemic, experts need to know the threshold of the vaccine efficacy rate and how long this pandemic may extend with vaccines that have different efficacy rates. In this article, a mathematical modeling study has been conducted on the importance of vaccination and vaccine efficacy rate during an ongoing pandemic. We simulated a five-compartment mathematical model to analyze the pandemic scenario in both California and the whole U.S. We considered four vaccines, Pfizer, Moderna, AstraZeneca, and Johnson and Johnson, which are being used widely to control the COVID-19 pandemic, in addition to two special cases: a vaccine with 100% efficacy rate and no vaccine under use. Both the infection and death rates are very high in California. Our model suggests that the pandemic situation in California will be under control in the last quartile of the year 2023 if frequent vaccination is continued with the Pfizer vaccine. During this time, six waves will happen from the beginning of the immunization where the case fatality and recovery rates will be 1.697% and 98.30%, respectively. However, according to the considered model, this period might be extended to mid-2024 when vaccines with lower efficacy rates are used. The more effective a vaccine, the fewer people suffer from this malignant infection. Although specific groups of people are prioritized initially, mass vaccination is needed to control the spread of the disease.
2007.13462 | Aaron Voelker | Aaron R. Voelker | A short letter on the dot product between rotated Fourier transforms | 4 pages, 3 figures | null | null | null | q-bio.NC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatial Semantic Pointers (SSPs) have recently emerged as a powerful tool for
representing and transforming continuous space, with numerous applications to
cognitive modelling and deep learning. Fundamental to SSPs is the notion of
"similarity" between vectors representing different points in $n$-dimensional
space -- typically the dot product or cosine similarity between vectors with
rotated unit-length complex coefficients in the Fourier domain. The similarity
measure has previously been conjectured to be a Gaussian function of Euclidean
distance. Contrary to this conjecture, we derive a simple trigonometric formula
relating spatial displacement to similarity, and prove that, in the case where
the Fourier coefficients are uniform i.i.d., the expected similarity is a
product of normalized sinc functions: $\prod_{k=1}^{n} \operatorname{sinc}
\left( a_k \right)$, where $\mathbf{a} \in \mathbb{R}^n$ is the spatial
displacement between the two $n$-dimensional points. This establishes a direct
link between space and the similarity of SSPs, which in turn helps bolster a
useful mathematical framework for architecting neural networks that manipulate
spatial structures.
| [
{
"created": "Fri, 24 Jul 2020 13:53:04 GMT",
"version": "v1"
}
] | 2020-07-28 | [
[
"Voelker",
"Aaron R.",
""
]
] | Spatial Semantic Pointers (SSPs) have recently emerged as a powerful tool for representing and transforming continuous space, with numerous applications to cognitive modelling and deep learning. Fundamental to SSPs is the notion of "similarity" between vectors representing different points in $n$-dimensional space -- typically the dot product or cosine similarity between vectors with rotated unit-length complex coefficients in the Fourier domain. The similarity measure has previously been conjectured to be a Gaussian function of Euclidean distance. Contrary to this conjecture, we derive a simple trigonometric formula relating spatial displacement to similarity, and prove that, in the case where the Fourier coefficients are uniform i.i.d., the expected similarity is a product of normalized sinc functions: $\prod_{k=1}^{n} \operatorname{sinc} \left( a_k \right)$, where $\mathbf{a} \in \mathbb{R}^n$ is the spatial displacement between the two $n$-dimensional points. This establishes a direct link between space and the similarity of SSPs, which in turn helps bolster a useful mathematical framework for architecting neural networks that manipulate spatial structures. |
2405.00530 | Vladimir Reukov | Seyedali Mirmohammadsadeghi, Davis Juhas, Mikhail Parker, Kristina
Peranidze, Dwight Austin Van Horn, Aayushi Sharma, Dhruvi Patel, Tatyana A.
Sysoeva, Vladislav Klepov, and Vladimir Reukov | The Highly Durable Antibacterial Gel-like Coatings for Textiles | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hospital-acquired infections are considered a priority for public health
systems, as they pose a significant burden on society. High-touch surfaces of
healthcare centers, including textiles, provide a suitable environment for
pathogenic bacteria to grow, necessitating the incorporation of effective
antibacterial agents into textiles. This paper introduces a highly durable
antibacterial gel-like solution, Silver Shell finish, which contains
chitosan-bound silver chloride microparticles. The study investigates the
coating's environmental impact, health risks, and durability during repeated
washing. The structure of the Silver Shell finish was studied using
Transmission Electron Microscopy (TEM) and Energy-Dispersive X-ray Spectroscopy
(EDX). TEM images showed a core-shell structure, with chitosan forming a
protective shell around groupings of silver micro-particles. Field Emission
Scanning Electron Microscopy (FESEM) demonstrated the uniform deposition of
Silver Shell on the surface of fabrics. AATCC Test Method 100 was employed to
quantitatively analyze the antibacterial properties of fabrics coated with
silver microparticles. Two types of bacteria, Staphylococcus aureus (S. aureus)
and Escherichia coli (E. coli) were used in this study. The antibacterial
results showed that after 75 wash cycles, a 100% reduction for both S. aureus
and E. coli in the coated samples using crosslinking agents was observed. The
coated samples without a crosslinking agent exhibited a 99.88% and 99.81%
reduction for S. aureus and E. coli after 50 washing cycles. AATCC-147 was
performed to investigate the coated samples' leaching properties and the
crosslinking agent's effect against S. aureus and E. coli. All coated samples
demonstrated remarkable antibacterial efficacy even after 75 wash cycles.
| [
{
"created": "Wed, 1 May 2024 14:02:08 GMT",
"version": "v1"
}
] | 2024-05-02 | [
[
"Mirmohammadsadeghi",
"Seyedali",
""
],
[
"Juhas",
"Davis",
""
],
[
"Parker",
"Mikhail",
""
],
[
"Peranidze",
"Kristina",
""
],
[
"Van Horn",
"Dwight Austin",
""
],
[
"Sharma",
"Aayushi",
""
],
[
"Patel",
"Dhruvi",
""
],
[
"Sysoeva",
"Tatyana A.",
""
],
[
"Klepov",
"Vladislav",
""
],
[
"Reukov",
"Vladimir",
""
]
] | Hospital-acquired infections are considered a priority for public health systems, as they pose a significant burden on society. High-touch surfaces of healthcare centers, including textiles, provide a suitable environment for pathogenic bacteria to grow, necessitating the incorporation of effective antibacterial agents into textiles. This paper introduces a highly durable antibacterial gel-like solution, Silver Shell finish, which contains chitosan-bound silver chloride microparticles. The study investigates the coating's environmental impact, health risks, and durability during repeated washing. The structure of the Silver Shell finish was studied using Transmission Electron Microscopy (TEM) and Energy-Dispersive X-ray Spectroscopy (EDX). TEM images showed a core-shell structure, with chitosan forming a protective shell around groupings of silver micro-particles. Field Emission Scanning Electron Microscopy (FESEM) demonstrated the uniform deposition of Silver Shell on the surface of fabrics. AATCC Test Method 100 was employed to quantitatively analyze the antibacterial properties of fabrics coated with silver microparticles. Two types of bacteria, Staphylococcus aureus (S. aureus) and Escherichia coli (E. coli) were used in this study. The antibacterial results showed that after 75 wash cycles, a 100% reduction for both S. aureus and E. coli in the coated samples using crosslinking agents was observed. The coated samples without a crosslinking agent exhibited a 99.88% and 99.81% reduction for S. aureus and E. coli after 50 washing cycles. AATCC-147 was performed to investigate the coated samples' leaching properties and the crosslinking agent's effect against S. aureus and E. coli. All coated samples demonstrated remarkable antibacterial efficacy even after 75 wash cycles.
1204.6539 | Michael Small | Xiumin Li and Michael Small | Neuronal avalanches of a self-organized neural network with
active-neuron-dominant structure | Non-final version submitted to Chaos | Chaos 22, 023104 (2012) | 10.1063/1.3701946 | null | q-bio.NC nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A neuronal avalanche is spontaneous neuronal activity that obeys a power-law
distribution of population event sizes with an exponent of -3/2. It has been
observed in the superficial layers of cortex both \emph{in vivo} and \emph{in
vitro}. In this paper we analyze the information transmission of a novel
self-organized neural network with active-neuron-dominant structure. Neuronal
avalanches can be observed in this network with appropriate input intensity. We
find that the process of network learning via spike-timing dependent plasticity
dramatically increases the complexity of network structure, which is finally
self-organized to be active-neuron-dominant connectivity. Both the entropy of
activity patterns and the complexity of their resulting post-synaptic inputs
are maximized when the network dynamics are propagated as neuronal avalanches.
This emergent topology is beneficial for information transmission with high
efficiency and also could be responsible for the large information capacity of
this network compared with alternative archetypal networks with different
neural connectivity.
| [
{
"created": "Mon, 30 Apr 2012 03:01:25 GMT",
"version": "v1"
}
] | 2015-06-04 | [
[
"Li",
"Xiumin",
""
],
[
"Small",
"Michael",
""
]
] | A neuronal avalanche is spontaneous neuronal activity that obeys a power-law distribution of population event sizes with an exponent of -3/2. It has been observed in the superficial layers of cortex both \emph{in vivo} and \emph{in vitro}. In this paper we analyze the information transmission of a novel self-organized neural network with active-neuron-dominant structure. Neuronal avalanches can be observed in this network with appropriate input intensity. We find that the process of network learning via spike-timing dependent plasticity dramatically increases the complexity of network structure, which is finally self-organized to be active-neuron-dominant connectivity. Both the entropy of activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network dynamics are propagated as neuronal avalanches. This emergent topology is beneficial for information transmission with high efficiency and also could be responsible for the large information capacity of this network compared with alternative archetypal networks with different neural connectivity.
1912.08755 | Marc-Andre Schulz | Marc-Andre Schulz, Matt Chapman-Rounds, Manisha Verma, Danilo Bzdok,
Konstantinos Georgatzis | Clusters in Explanation Space: Inferring disease subtypes from model
explanations | null | null | null | null | q-bio.QM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identification of disease subtypes and corresponding biomarkers can
substantially improve clinical diagnosis and treatment selection. Discovering
these subtypes in noisy, high dimensional biomedical data is often impossible
for humans and challenging for machines. We introduce a new approach to
facilitate the discovery of disease subtypes: Instead of analyzing the original
data, we train a diagnostic classifier (healthy vs. diseased) and extract
instance-wise explanations for the classifier's decisions. The distribution of
instances in the explanation space of our diagnostic classifier amplifies the
different reasons for belonging to the same class - resulting in a
representation that is uniquely useful for discovering latent subtypes. We
compare our ability to recover subtypes via cluster analysis on model
explanations to classical cluster analysis on the original data. In multiple
datasets with known ground-truth subclasses, most compellingly on UK Biobank
brain imaging data and transcriptome data from the Cancer Genome Atlas, we show
that cluster analysis on model explanations substantially outperforms the
classical approach. While we believe clustering in explanation space to be
particularly valuable for inferring disease subtypes, the method is more
general and applicable to any kind of sub-type identification.
| [
{
"created": "Wed, 18 Dec 2019 17:39:56 GMT",
"version": "v1"
},
{
"created": "Thu, 14 May 2020 23:05:20 GMT",
"version": "v2"
}
] | 2020-05-18 | [
[
"Schulz",
"Marc-Andre",
""
],
[
"Chapman-Rounds",
"Matt",
""
],
[
"Verma",
"Manisha",
""
],
[
"Bzdok",
"Danilo",
""
],
[
"Georgatzis",
"Konstantinos",
""
]
] | Identification of disease subtypes and corresponding biomarkers can substantially improve clinical diagnosis and treatment selection. Discovering these subtypes in noisy, high dimensional biomedical data is often impossible for humans and challenging for machines. We introduce a new approach to facilitate the discovery of disease subtypes: Instead of analyzing the original data, we train a diagnostic classifier (healthy vs. diseased) and extract instance-wise explanations for the classifier's decisions. The distribution of instances in the explanation space of our diagnostic classifier amplifies the different reasons for belonging to the same class - resulting in a representation that is uniquely useful for discovering latent subtypes. We compare our ability to recover subtypes via cluster analysis on model explanations to classical cluster analysis on the original data. In multiple datasets with known ground-truth subclasses, most compellingly on UK Biobank brain imaging data and transcriptome data from the Cancer Genome Atlas, we show that cluster analysis on model explanations substantially outperforms the classical approach. While we believe clustering in explanation space to be particularly valuable for inferring disease subtypes, the method is more general and applicable to any kind of sub-type identification. |
1503.06583 | Petter Holme | Petter Holme | Information content of contact-pattern representations and
predictability of epidemic outbreaks | Supplementary material not included | Sci. Rep. 5, 14462 (2015) | 10.1038/srep14462 | null | q-bio.PE cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To understand the contact patterns of a population -- who is in contact with
whom, and when the contacts happen -- is crucial for modeling outbreaks of
infectious disease. Traditional theoretical epidemiology assumes that any
individual can meet any other with equal probability. A more modern approach, network
epidemiology, assumes people are connected into a static network over which the
disease spreads. Newer still, temporal network epidemiology includes time in
the contact representations. In this paper, we investigate the effect of these
successive inclusions of more information. Using empirical proximity data, we
study both outbreak sizes from unknown sources, and from known states of
ongoing outbreaks. In the first case, there are large differences going from a
fully mixed simulation to a network, and from a network to a temporal network.
In the second case, differences are smaller. We interpret these observations in
terms of the temporal network structure of the data sets. For example, a fast
turnover of nodes and links seems to make the temporal information more
important.
| [
{
"created": "Mon, 23 Mar 2015 10:11:50 GMT",
"version": "v1"
}
] | 2015-10-22 | [
[
"Holme",
"Petter",
""
]
] | To understand the contact patterns of a population -- who is in contact with whom, and when the contacts happen -- is crucial for modeling outbreaks of infectious disease. Traditional theoretical epidemiology assumes that any individual can meet any other with equal probability. A more modern approach, network epidemiology, assumes people are connected into a static network over which the disease spreads. Newer still, temporal network epidemiology includes time in the contact representations. In this paper, we investigate the effect of these successive inclusions of more information. Using empirical proximity data, we study both outbreak sizes from unknown sources, and from known states of ongoing outbreaks. In the first case, there are large differences going from a fully mixed simulation to a network, and from a network to a temporal network. In the second case, differences are smaller. We interpret these observations in terms of the temporal network structure of the data sets. For example, a fast turnover of nodes and links seems to make the temporal information more important.
1303.2170 | John Capra | John A. Capra, Melissa J. Hubisz, Dennis Kostka, Katherine S. Pollard
and Adam Siepel | A Model-Based Analysis of GC-Biased Gene Conversion in the Human and
Chimpanzee Genomes | 40 pages, 17 figures | null | null | null | q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | GC-biased gene conversion (gBGC) is a recombination-associated process that
favors the fixation of G/C alleles over A/T alleles. In mammals, gBGC is
hypothesized to contribute to variation in GC content, rapidly evolving
sequences, and the fixation of deleterious mutations, but its prevalence and
general functional consequences remain poorly understood. gBGC is difficult to
incorporate into models of molecular evolution and so far has primarily been
studied using summary statistics from genomic comparisons. Here, we introduce a
new probabilistic model that captures the joint effects of natural selection
and gBGC on nucleotide substitution patterns, while allowing for correlations
along the genome in these effects. We implemented our model in a computer
program, called phastBias, that can accurately detect gBGC tracts ~1 kilobase
or longer in simulated sequence alignments. When applied to real primate genome
sequences, phastBias predicts gBGC tracts that cover roughly 0.3% of the human
and chimpanzee genomes and account for 1.2% of human-chimpanzee nucleotide
differences. These tracts fall in clusters, particularly in subtelomeric
regions; they are enriched for recombination hotspots and fast-evolving
sequences; and they display an ongoing fixation preference for G and C alleles.
We also find some evidence that they contribute to the fixation of deleterious
alleles, including an enrichment for disease-associated polymorphisms. These
tracts provide a unique window into historical recombination processes along
the human and chimpanzee lineages; they supply additional evidence of long-term
conservation of megabase-scale recombination rates accompanied by rapid
turnover of hotspots. Together, these findings shed new light on the
evolutionary, functional, and disease implications of gBGC. The phastBias
program and our predicted tracts are freely available.
| [
{
"created": "Sat, 9 Mar 2013 05:27:07 GMT",
"version": "v1"
}
] | 2013-03-12 | [
[
"Capra",
"John A.",
""
],
[
"Hubisz",
"Melissa J.",
""
],
[
"Kostka",
"Dennis",
""
],
[
"Pollard",
"Katherine S.",
""
],
[
"Siepel",
"Adam",
""
]
] | GC-biased gene conversion (gBGC) is a recombination-associated process that favors the fixation of G/C alleles over A/T alleles. In mammals, gBGC is hypothesized to contribute to variation in GC content, rapidly evolving sequences, and the fixation of deleterious mutations, but its prevalence and general functional consequences remain poorly understood. gBGC is difficult to incorporate into models of molecular evolution and so far has primarily been studied using summary statistics from genomic comparisons. Here, we introduce a new probabilistic model that captures the joint effects of natural selection and gBGC on nucleotide substitution patterns, while allowing for correlations along the genome in these effects. We implemented our model in a computer program, called phastBias, that can accurately detect gBGC tracts ~1 kilobase or longer in simulated sequence alignments. When applied to real primate genome sequences, phastBias predicts gBGC tracts that cover roughly 0.3% of the human and chimpanzee genomes and account for 1.2% of human-chimpanzee nucleotide differences. These tracts fall in clusters, particularly in subtelomeric regions; they are enriched for recombination hotspots and fast-evolving sequences; and they display an ongoing fixation preference for G and C alleles. We also find some evidence that they contribute to the fixation of deleterious alleles, including an enrichment for disease-associated polymorphisms. These tracts provide a unique window into historical recombination processes along the human and chimpanzee lineages; they supply additional evidence of long-term conservation of megabase-scale recombination rates accompanied by rapid turnover of hotspots. Together, these findings shed new light on the evolutionary, functional, and disease implications of gBGC. The phastBias program and our predicted tracts are freely available. |
1309.5211 | Nico Goernitz | Georg Zeller, Nico Goernitz, Andre Kahles, Jonas Behr, Pramod
Mudrakarta, Soeren Sonnenburg, Gunnar Raetsch | mTim: Rapid and accurate transcript reconstruction from RNA-Seq data | null | null | null | null | q-bio.GN stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in high-throughput cDNA sequencing (RNA-Seq) technology have
revolutionized transcriptome studies. A major motivation for RNA-Seq is to map
the structure of expressed transcripts at nucleotide resolution. With accurate
computational tools for transcript reconstruction, this technology may also
become useful for genome (re-)annotation, which has mostly relied on de novo
gene finding where gene structures are primarily inferred from the genome
sequence. We developed a machine-learning method, called mTim (margin-based
transcript inference method) for transcript reconstruction from RNA-Seq read
alignments that is based on discriminatively trained hidden Markov support
vector machines. In addition to features derived from read alignments, it
utilizes characteristic genomic sequences, e.g. around splice sites, to improve
transcript predictions. mTim inferred transcripts that were highly accurate and
relatively robust to alignment errors in comparison to those from Cufflinks, a
widely used transcript assembly method.
| [
{
"created": "Fri, 20 Sep 2013 08:53:52 GMT",
"version": "v1"
}
] | 2013-09-23 | [
[
"Zeller",
"Georg",
""
],
[
"Goernitz",
"Nico",
""
],
[
"Kahles",
"Andre",
""
],
[
"Behr",
"Jonas",
""
],
[
"Mudrakarta",
"Pramod",
""
],
[
"Sonnenburg",
"Soeren",
""
],
[
"Raetsch",
"Gunnar",
""
]
] | Recent advances in high-throughput cDNA sequencing (RNA-Seq) technology have revolutionized transcriptome studies. A major motivation for RNA-Seq is to map the structure of expressed transcripts at nucleotide resolution. With accurate computational tools for transcript reconstruction, this technology may also become useful for genome (re-)annotation, which has mostly relied on de novo gene finding where gene structures are primarily inferred from the genome sequence. We developed a machine-learning method, called mTim (margin-based transcript inference method) for transcript reconstruction from RNA-Seq read alignments that is based on discriminatively trained hidden Markov support vector machines. In addition to features derived from read alignments, it utilizes characteristic genomic sequences, e.g. around splice sites, to improve transcript predictions. mTim inferred transcripts that were highly accurate and relatively robust to alignment errors in comparison to those from Cufflinks, a widely used transcript assembly method. |
1903.09628 | Artem Novozhilov | Ivan Yegorov, Artem S. Novozhilov, and Alexander S. Bratus | Open Quasispecies Models: Stability, Optimization, and Distributed
Extension | 33 pages | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We suggest a natural approach that leads to a modification of classical
quasispecies models and incorporates the possibility of population extinction
in addition to growth. The resulting modified models are called open. Their
essential properties, regarding in particular equilibrium behavior, are
investigated both analytically and numerically. The hallmarks of the
quasispecies dynamics, viz. the heterogeneous quasispecies distribution itself
and the error threshold phenomenon, can be observed in our models, along with
extinction. In order to demonstrate the flexibility of the introduced
framework, we study the inverse problem of fitness allocation under the
biologically motivated criterion of steady-state fitness maximization. Having
in mind the complexity of numerical investigation of high-dimensional
quasispecies problems and the fact that the actual number of genotypes or
alleles involved in a studied process can be extremely large, we also build
continuous-time distributed open quasispecies models. The obtained results may
serve as an initial step to developing mathematical models that involve
directed therapy against various pathogens.
| [
{
"created": "Fri, 22 Mar 2019 17:50:20 GMT",
"version": "v1"
}
] | 2019-03-25 | [
[
"Yegorov",
"Ivan",
""
],
[
"Novozhilov",
"Artem S.",
""
],
[
"Bratus",
"Alexander S.",
""
]
] | We suggest a natural approach that leads to a modification of classical quasispecies models and incorporates the possibility of population extinction in addition to growth. The resulting modified models are called open. Their essential properties, regarding in particular equilibrium behavior, are investigated both analytically and numerically. The hallmarks of the quasispecies dynamics, viz. the heterogeneous quasispecies distribution itself and the error threshold phenomenon, can be observed in our models, along with extinction. In order to demonstrate the flexibility of the introduced framework, we study the inverse problem of fitness allocation under the biologically motivated criterion of steady-state fitness maximization. Having in mind the complexity of numerical investigation of high-dimensional quasispecies problems and the fact that the actual number of genotypes or alleles involved in a studied process can be extremely large, we also build continuous-time distributed open quasispecies models. The obtained results may serve as an initial step to developing mathematical models that involve directed therapy against various pathogens. |
2111.06999 | Ian Leifer | Paolo Boldi, Ian Leifer, Hern\'an A. Makse | Quasifibrations of Graphs to Find Symmetries in Biological Networks | null | null | 10.1088/1742-5468/ac99d1 | null | q-bio.QM cs.IT math.IT math.OC physics.bio-ph physics.data-an | http://creativecommons.org/licenses/by/4.0/ | A fibration of graphs is a homomorphism that is a local isomorphism of
in-neighbourhoods, much in the same way a covering projection is a local
isomorphism of neighbourhoods. Recently, it has been shown that graph
fibrations are useful tools to uncover symmetries and synchronization patterns
in biological networks ranging from gene, protein, and metabolic networks to the
brain. However, the inherent incompleteness and disordered nature of biological
data precludes the application of the definition of fibration as it stands; as a
consequence, the currently known algorithms to identify fibrations also fail in
these domains. In this paper, we systematically introduce and develop the
theory of quasifibrations which attempts to capture more realistic patterns of
almost-synchronization of units in biological networks. We provide an
algorithmic solution to the problem of finding quasifibrations in networks
where the existence of missing links and variability across samples preclude
the identification of perfect symmetries in the connectivity structure. We test
the algorithm against other strategies to repair missing links in incomplete
networks using real connectome data and synthetic networks. Quasifibrations can
be applied to reconstruct any incomplete network structure characterized by
underlying symmetries and almost synchronized clusters.
| [
{
"created": "Sat, 13 Nov 2021 00:16:01 GMT",
"version": "v1"
}
] | 2022-11-23 | [
[
"Boldi",
"Paolo",
""
],
[
"Leifer",
"Ian",
""
],
[
"Makse",
"Hernán A.",
""
]
] | A fibration of graphs is a homomorphism that is a local isomorphism of in-neighbourhoods, much in the same way a covering projection is a local isomorphism of neighbourhoods. Recently, it has been shown that graph fibrations are useful tools to uncover symmetries and synchronization patterns in biological networks ranging from gene, protein, and metabolic networks to the brain. However, the inherent incompleteness and disordered nature of biological data precludes the application of the definition of fibration as it stands; as a consequence, the currently known algorithms to identify fibrations also fail in these domains. In this paper, we systematically introduce and develop the theory of quasifibrations which attempts to capture more realistic patterns of almost-synchronization of units in biological networks. We provide an algorithmic solution to the problem of finding quasifibrations in networks where the existence of missing links and variability across samples preclude the identification of perfect symmetries in the connectivity structure. We test the algorithm against other strategies to repair missing links in incomplete networks using real connectome data and synthetic networks. Quasifibrations can be applied to reconstruct any incomplete network structure characterized by underlying symmetries and almost synchronized clusters. |
1603.02916 | Zachary Roth | Zachary Roth | Analysis of neuronal sequences using pairwise biases | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequences of neuronal activation have long been implicated in a variety of
brain functions. In particular, these sequences have been tied to memory
formation and spatial navigation in the hippocampus, a region of mammalian
brains. Traditionally, neuronal sequences have been interpreted as noisy
manifestations of neuronal templates (i.e., orderings), ignoring much richer
structure contained in the sequences. This paper introduces a new tool for
understanding neuronal sequences: the bias matrix. The bias matrix captures the
probabilistic tendency of each neuron to fire before or after each other
neuron. Despite considering only pairs of neurons, the bias matrix captures the
best total ordering of neurons for a sequence (Proposition 3.3) and, thus,
generalizes the concept of a neuronal template.
We establish basic mathematical properties of bias matrices, in particular
describing the fundamental polytope in which all biases reside (Theorem 3.25).
We show how the underlying simple digraph of a bias matrix, which we term the
bias network, serves as a combinatorial generalization of neuronal templates.
Surprisingly, every simple digraph is realizable as the bias network of some
sequence (Theorem 3.34). The bias-matrix representation leads us to a natural
method for sequence correlation, which then leads to a probabilistic framework
for determining the similarity of one set of sequences to another. Using data
from rat hippocampus, we describe events of interest and also
sequence-detection techniques. Finally, the bias matrix and the similarity
measure are applied to this real-world data using code developed by us.
| [
{
"created": "Wed, 9 Mar 2016 15:18:11 GMT",
"version": "v1"
}
] | 2016-03-10 | [
[
"Roth",
"Zachary",
""
]
] | Sequences of neuronal activation have long been implicated in a variety of brain functions. In particular, these sequences have been tied to memory formation and spatial navigation in the hippocampus, a region of mammalian brains. Traditionally, neuronal sequences have been interpreted as noisy manifestations of neuronal templates (i.e., orderings), ignoring much richer structure contained in the sequences. This paper introduces a new tool for understanding neuronal sequences: the bias matrix. The bias matrix captures the probabilistic tendency of each neuron to fire before or after each other neuron. Despite considering only pairs of neurons, the bias matrix captures the best total ordering of neurons for a sequence (Proposition 3.3) and, thus, generalizes the concept of a neuronal template. We establish basic mathematical properties of bias matrices, in particular describing the fundamental polytope in which all biases reside (Theorem 3.25). We show how the underlying simple digraph of a bias matrix, which we term the bias network, serves as a combinatorial generalization of neuronal templates. Surprisingly, every simple digraph is realizable as the bias network of some sequence (Theorem 3.34). The bias-matrix representation leads us to a natural method for sequence correlation, which then leads to a probabilistic framework for determining the similarity of one set of sequences to another. Using data from rat hippocampus, we describe events of interest and also sequence-detection techniques. Finally, the bias matrix and the similarity measure are applied to this real-world data using code developed by us. |
1010.0904 | Jaewook Joo | Jaewook Joo, Steven J. Plimpton, Jean-Loup Faulon | Novel statistical ensemble analysis for simulating extrinsic
noise-driven response in NF-{\kappa}B signaling network | 39 pages, 12 figures | BMC Systems Biology 7(1):45 (2013) | 10.1186/1752-0509-7-45 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cellular responses in single cells are known to be highly heterogeneous
and individualistic due to the strong influence of extrinsic and intrinsic
noise. Here, we are concerned with how to model the extrinsic noise-induced
heterogeneous response in single cells under the constraints of
experimentally obtained population-averaged response, but without much detailed
kinetic information. We propose a novel statistical ensemble scheme where
extrinsic noise is regarded as fluctuations in the values of kinetic parameters
and such fluctuations are modeled by randomly sampling the kinetic rate
constants from a uniform distribution. We consider a large number of signaling
system replicates, each of which has the same network topology, but a uniquely
different set of kinetic rate constants. A protein dynamic response from each
replicate should represent the dynamics in a single cell and the statistical
ensemble average should be regarded as a population-level response averaged
over a population of the cells. We devise an optimization algorithm to find the
correct uniform distribution of the network parameters, which produces the
correct statistical distribution of the response whose ensemble average and
distribution agree well with the population-level experimental data and the
experimentally observed heterogeneity. We apply this statistical ensemble
analysis to an NF-{\kappa}B signaling system and (1) predict the distributions
of the heterogeneous NF-{\kappa}B (either oscillatory or non-oscillatory)
dynamic patterns and of the dynamic features (e.g., period), (2) predict that
both the distribution and the statistical ensemble average of the NF-{\kappa}B
dynamic response depends sensitively on the dosage of stimulant, and lastly (3)
demonstrate the sigmoidally shaped dose-response from the statistical ensemble
average and the individual replicates.
| [
{
"created": "Tue, 5 Oct 2010 14:53:36 GMT",
"version": "v1"
}
] | 2016-10-31 | [
[
"Joo",
"Jaewook",
""
],
[
"Plimpton",
"Steven J.",
""
],
[
"Faulon",
"Jean-Loup",
""
]
] | Cellular responses in single cells are known to be highly heterogeneous and individualistic due to the strong influence of extrinsic and intrinsic noise. Here, we are concerned with how to model the extrinsic noise-induced heterogeneous response in single cells under the constraints of experimentally obtained population-averaged response, but without much detailed kinetic information. We propose a novel statistical ensemble scheme where extrinsic noise is regarded as fluctuations in the values of kinetic parameters and such fluctuations are modeled by randomly sampling the kinetic rate constants from a uniform distribution. We consider a large number of signaling system replicates, each of which has the same network topology, but a uniquely different set of kinetic rate constants. A protein dynamic response from each replicate should represent the dynamics in a single cell and the statistical ensemble average should be regarded as a population-level response averaged over a population of the cells. We devise an optimization algorithm to find the correct uniform distribution of the network parameters, which produces the correct statistical distribution of the response whose ensemble average and distribution agree well with the population-level experimental data and the experimentally observed heterogeneity. We apply this statistical ensemble analysis to an NF-{\kappa}B signaling system and (1) predict the distributions of the heterogeneous NF-{\kappa}B (either oscillatory or non-oscillatory) dynamic patterns and of the dynamic features (e.g., period), (2) predict that both the distribution and the statistical ensemble average of the NF-{\kappa}B dynamic response depend sensitively on the dosage of stimulant, and lastly (3) demonstrate the sigmoidally shaped dose-response from the statistical ensemble average and the individual replicates. |
1910.14650 | Duc Nguyen | Christopher Grow, Kaifu Gao, Duc Duy Nguyen, and Guo-Wei Wei | Generative network complex (GNC) for drug discovery | 22 pages, 12 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It remains a challenging task to generate a vast variety of novel compounds
with desirable pharmacological properties. In this work, a generative network
complex (GNC) is proposed as a new platform for designing novel compounds,
predicting their physical and chemical properties, and selecting potential drug
candidates that fulfill various druggable criteria such as binding affinity,
solubility, partition coefficient, etc. We combine a SMILES string generator,
which consists of an encoder, a drug-property controlled or regulated latent
space, and a decoder, with verification deep neural networks, a target-specific
three-dimensional (3D) pose generator, and mathematical deep learning networks
to generate new compounds, predict their drug properties, construct 3D poses
associated with target proteins, and reevaluate druggability, respectively. New
compounds were generated in the latent space by either randomized output,
controlled output, or optimized output. In our demonstration, 2.08 million and
2.8 million novel compounds are generated respectively for Cathepsin S and BACE
targets. These new compounds are very different from the seeds and cover a
larger chemical space. For potentially active compounds, their 3D poses are
generated using a state-of-the-art method. The resulting 3D complexes are
further evaluated for druggability by a championing deep learning algorithm
based on algebraic topology, differential geometry, and algebraic graph
theories. Performed on supercomputers, the whole process took less than one
week. Therefore, our GNC is an efficient new paradigm for discovering new drug
candidates.
| [
{
"created": "Thu, 31 Oct 2019 17:38:30 GMT",
"version": "v1"
}
] | 2019-11-01 | [
[
"Grow",
"Christopher",
""
],
[
"Gao",
"Kaifu",
""
],
[
"Nguyen",
"Duc Duy",
""
],
[
"Wei",
"Guo-Wei",
""
]
] | It remains a challenging task to generate a vast variety of novel compounds with desirable pharmacological properties. In this work, a generative network complex (GNC) is proposed as a new platform for designing novel compounds, predicting their physical and chemical properties, and selecting potential drug candidates that fulfill various druggable criteria such as binding affinity, solubility, partition coefficient, etc. We combine a SMILES string generator, which consists of an encoder, a drug-property controlled or regulated latent space, and a decoder, with verification deep neural networks, a target-specific three-dimensional (3D) pose generator, and mathematical deep learning networks to generate new compounds, predict their drug properties, construct 3D poses associated with target proteins, and reevaluate druggability, respectively. New compounds were generated in the latent space by either randomized output, controlled output, or optimized output. In our demonstration, 2.08 million and 2.8 million novel compounds are generated respectively for Cathepsin S and BACE targets. These new compounds are very different from the seeds and cover a larger chemical space. For potentially active compounds, their 3D poses are generated using a state-of-the-art method. The resulting 3D complexes are further evaluated for druggability by a championing deep learning algorithm based on algebraic topology, differential geometry, and algebraic graph theories. Performed on supercomputers, the whole process took less than one week. Therefore, our GNC is an efficient new paradigm for discovering new drug candidates. |
2005.14127 | Marcus Aguiar de | Vitor M. Marquioni and Marcus A.M. de Aguiar | Quantifying the effects of quarantine using an IBM SEIR model on
scale-free networks | 14 pages, 6 figures | Chaos, Solitons & Fractals, Vol. 138, 2020, 109999 | 10.1016/j.chaos.2020.109999 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The COVID-19 pandemic led several countries to resort to social distancing,
the only known way to slow down the spread of the virus and keep the health
system under control. Here we use an individual based model (IBM) to study how
the duration, start date and intensity of quarantine affect the height and
position of the peak of the infection curve. We show that stochastic effects,
inherent to the model dynamics, lead to variable outcomes for the same set of
parameters, making it crucial to compute the probability of each result. To
simplify the analysis we divide the outcomes into only two categories, which we
call best and worst scenarios. Although long and intense quarantine is the
best way to end the epidemic, it is very hard to implement in practice. Here we
show that relatively short and intense quarantine periods can also be very
effective in flattening the infection curve and even killing the virus, but the
likelihood of such outcomes is low. Long quarantines of relatively low
intensity, on the other hand, can delay the infection peak and reduce its size
considerably with more than 50% probability, being a more effective policy than
complete lockdown for short periods.
| [
{
"created": "Thu, 28 May 2020 16:23:11 GMT",
"version": "v1"
}
] | 2020-08-18 | [
[
"Marquioni",
"Vitor M.",
""
],
[
"de Aguiar",
"Marcus A. M.",
""
]
] | The COVID-19 pandemic led several countries to resort to social distancing, the only known way to slow down the spread of the virus and keep the health system under control. Here we use an individual based model (IBM) to study how the duration, start date and intensity of quarantine affect the height and position of the peak of the infection curve. We show that stochastic effects, inherent to the model dynamics, lead to variable outcomes for the same set of parameters, making it crucial to compute the probability of each result. To simplify the analysis we divide the outcomes into only two categories, which we call best and worst scenarios. Although long and intense quarantine is the best way to end the epidemic, it is very hard to implement in practice. Here we show that relatively short and intense quarantine periods can also be very effective in flattening the infection curve and even killing the virus, but the likelihood of such outcomes is low. Long quarantines of relatively low intensity, on the other hand, can delay the infection peak and reduce its size considerably with more than 50% probability, being a more effective policy than complete lockdown for short periods. |
1202.6027 | Michael Raghib | Michael Raghib, Simon A. Levin and Ioannis G. Kevrekidis | Multiscale analysis of collective motion and decision-making in swarms:
An advection-diffusion equation with memory approach | 47 pages, 16 figures | Journal of Theoretical Biology 264, 893-913 (2010) | null | null | q-bio.PE nlin.AO | http://creativecommons.org/licenses/publicdomain/ | We propose a (time) multiscale method for the coarse-grained analysis of
self--propelled particle models of swarms comprising a mixture of `na\"{i}ve'
and `informed' individuals, used to address questions related to collective
motion and collective decision--making in animal groups. The method is based on
projecting the particle configuration onto a single `meta-particle' that
consists of the group elongation and the mean group velocity and position. The
collective states of the configuration can be associated with the transient and
asymptotic transport properties of the random walk followed by the
meta-particle. These properties can be accurately predicted by an
advection-diffusion equation with memory (ADEM) whose parameters are obtained
from a mean group velocity time series obtained from a single simulation run of
the individual-based model.
| [
{
"created": "Mon, 27 Feb 2012 19:03:54 GMT",
"version": "v1"
}
] | 2012-02-28 | [
[
"Raghib",
"Michael",
""
],
[
"Levin",
"Simon A.",
""
],
[
"Kevrekidis",
"Ioannis G.",
""
]
] | We propose a (time) multiscale method for the coarse-grained analysis of self--propelled particle models of swarms comprising a mixture of `na\"{i}ve' and `informed' individuals, used to address questions related to collective motion and collective decision--making in animal groups. The method is based on projecting the particle configuration onto a single `meta-particle' that consists of the group elongation and the mean group velocity and position. The collective states of the configuration can be associated with the transient and asymptotic transport properties of the random walk followed by the meta-particle. These properties can be accurately predicted by an advection-diffusion equation with memory (ADEM) whose parameters are obtained from a mean group velocity time series obtained from a single simulation run of the individual-based model. |
1209.5353 | Dante Chialvo | Ariel Haimovici, Enzo Tagliazucchi, Pablo Balenzuela and Dante R.
Chialvo | Brain organization into resting state networks emerges at criticality on
a model of the human connectome | Physical Review Letters, (2013 in press) | Phys. Rev. Lett. (110) 17, 178101 (2013) | 10.1103/PhysRevLett.110.178101 | null | q-bio.NC cond-mat.dis-nn physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The relation between large-scale brain structure and function is an
outstanding open problem in neuroscience. We approach this problem by studying
the dynamical regime under which realistic spatio-temporal patterns of brain
activity emerge from the empirically derived network of human brain
neuroanatomical connections. The results show that critical dynamics unfolding
on the structural connectivity of the human brain allow the recovery of many
key experimental findings obtained with functional Magnetic Resonance Imaging
(fMRI), such as divergence of the correlation length, anomalous scaling of
correlation fluctuations, and the emergence of large-scale resting state
networks.
| [
{
"created": "Mon, 24 Sep 2012 18:18:09 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Feb 2013 11:08:57 GMT",
"version": "v2"
}
] | 2014-05-27 | [
[
"Haimovici",
"Ariel",
""
],
[
"Tagliazucchi",
"Enzo",
""
],
[
"Balenzuela",
"Pablo",
""
],
[
"Chialvo",
"Dante R.",
""
]
] | The relation between large-scale brain structure and function is an outstanding open problem in neuroscience. We approach this problem by studying the dynamical regime under which realistic spatio-temporal patterns of brain activity emerge from the empirically derived network of human brain neuroanatomical connections. The results show that critical dynamics unfolding on the structural connectivity of the human brain allow the recovery of many key experimental findings obtained with functional Magnetic Resonance Imaging (fMRI), such as divergence of the correlation length, anomalous scaling of correlation fluctuations, and the emergence of large-scale resting state networks. |
1410.0936 | Ralph Brinks | Ralph Brinks | On the error of incidence estimation from prevalence data | 15 pages, 6 figures | null | null | null | q-bio.PE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes types of errors arising in a recently proposed method of
incidence estimation from prevalence data. The errors are illustrated by a
simulation study about a hypothetical irreversible disease. In addition, a way
of obtaining error bounds in practical applications of the method is proposed.
| [
{
"created": "Fri, 3 Oct 2014 18:39:39 GMT",
"version": "v1"
}
] | 2014-10-06 | [
[
"Brinks",
"Ralph",
""
]
] | This paper describes types of errors arising in a recently proposed method of incidence estimation from prevalence data. The errors are illustrated by a simulation study about a hypothetical irreversible disease. In addition, a way of obtaining error bounds in practical applications of the method is proposed. |
1305.7125 | Ahmed El Hady | Andreas Neef, Ahmed El Hady, Jatin Nagpal, Kai Br\"oking, Ghazaleh
Afshar, Oliver M Schl\"uter, Theo Geisel, Ernst Bamberg, Ragnar Fleischmann,
Walter St\"uhmer, Fred Wolf | Continuous Dynamic Photostimulation - inducing in-vivo-like fluctuating
conductances with Channelrhodopsins | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Central neurons operate in a regime of constantly fluctuating conductances,
induced by thousands of presynaptic cells. Channelrhodopsins have been almost
exclusively used to imprint a fixed spike pattern by sequences of brief
depolarizations. Here we introduce continuous dynamic photostimulation
(CoDyPs), a novel approach to mimic in-vivo-like input fluctuations
noninvasively in cells transfected with the weakly inactivating
channelrhodopsin variant ChIEF. Even during long-term experiments, cultured
neurons subjected to CoDyPs generate seemingly random, but reproducible spike
patterns. In voltage-clamped cells, CoDyPs induced highly reproducible current
waveforms that could be precisely predicted from the light-conductance transfer
function of ChIEF. CoDyPs can replace the conventional, flash-evoked imprinting
of spike patterns in in-vivo and in-vitro studies, preserving natural activity.
When combined with non-invasive spike-detection, CoDyPs allows the acquisition
of orders of magnitude larger data sets than previously possible, for studies
of dynamical response properties of many individual neurons.
| [
{
"created": "Thu, 30 May 2013 14:48:53 GMT",
"version": "v1"
},
{
"created": "Fri, 31 May 2013 09:04:34 GMT",
"version": "v2"
}
] | 2013-06-03 | [
[
"Neef",
"Andreas",
""
],
[
"Hady",
"Ahmed El",
""
],
[
"Nagpal",
"Jatin",
""
],
[
"Bröking",
"Kai",
""
],
[
"Afshar",
"Ghazaleh",
""
],
[
"Schlüter",
"Oliver M",
""
],
[
"Geisel",
"Theo",
""
],
[
"Bamberg",
"Ernst",
""
],
[
"Fleischmann",
"Ragnar",
""
],
[
"Stühmer",
"Walter",
""
],
[
"Wolf",
"Fred",
""
]
] | Central neurons operate in a regime of constantly fluctuating conductances, induced by thousands of presynaptic cells. Channelrhodopsins have been almost exclusively used to imprint a fixed spike pattern by sequences of brief depolarizations. Here we introduce continuous dynamic photostimulation (CoDyPs), a novel approach to mimic in-vivo-like input fluctuations noninvasively in cells transfected with the weakly inactivating channelrhodopsin variant ChIEF. Even during long-term experiments, cultured neurons subjected to CoDyPs generate seemingly random, but reproducible spike patterns. In voltage-clamped cells, CoDyPs induced highly reproducible current waveforms that could be precisely predicted from the light-conductance transfer function of ChIEF. CoDyPs can replace the conventional, flash-evoked imprinting of spike patterns in in-vivo and in-vitro studies, preserving natural activity. When combined with non-invasive spike-detection, CoDyPs allows the acquisition of orders of magnitude larger data sets than previously possible, for studies of dynamical response properties of many individual neurons.
1810.00611 | Bernard Lorber | Bernard Lorber | Analytical light scattering methods in molecular and structural biology:
Experimental aspects and results | with supplementary material, 7 figures, 11 supplementary figures and
42 references | null | null | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-invasive light scattering methods provide data on biological
macromolecules (i.e. proteins, nucleic acids, as well as assemblies and larger
entities composed of them) that are complementary with those of size exclusion
chromatography, gel electrophoresis, analytical ultracentrifugation and mass
spectrometry methods. Static light scattering measurements are useful to
determine the mass of macromolecules and to monitor aggregation phenomena.
Dynamic light scattering measurements are suitable for quality control and can
be used to assess sample homogeneity, determine particle size, examine the
effect of physical and chemical treatments, probe the binding of ligands, and
study interactions between macromolecules.
| [
{
"created": "Mon, 1 Oct 2018 10:43:06 GMT",
"version": "v1"
}
] | 2018-10-02 | [
[
"Lorber",
"Bernard",
""
]
] | Non-invasive light scattering methods provide data on biological macromolecules (i.e. proteins, nucleic acids, as well as assemblies and larger entities composed of them) that are complementary with those of size exclusion chromatography, gel electrophoresis, analytical ultracentrifugation and mass spectrometry methods. Static light scattering measurements are useful to determine the mass of macromolecules and to monitor aggregation phenomena. Dynamic light scattering measurements are suitable for quality control and can be used to assess sample homogeneity, determine particle size, examine the effect of physical and chemical treatments, probe the binding of ligands, and study interactions between macromolecules.
2004.05513 | Marc Timme | Andreas Bossert, Moritz Kersting, Marc Timme, Malte Schr\"oder, Azza
Feki, Justin Coetzee and Jan Schl\"uter | Limited containment options of COVID-19 outbreak revealed by regional
agent-based simulations for South Africa | including 3 Figures and Supplementary Material | null | null | null | q-bio.PE physics.soc-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | COVID-19 has spread from China across Europe and the United States and has
become a global pandemic. In countries of the Global South, due to often weaker
socioeconomic options and health care systems, effective local countermeasures
remain debated. We combine large-scale socioeconomic and traffic survey data
with detailed agent-based simulations of local transportation to analyze
COVID-19 spreading in a regional model for the Nelson Mandela Bay Municipality
in South Africa under a range of countermeasure scenarios. The simulations
indicate that any realistic containment strategy, including those similar to
the one ongoing in South Africa, may yield a manifold overload of available
intensive care units (ICUs). Only immediate and the most severe
countermeasures, up to a complete lock-down that essentially inhibits all joint
human activities, can contain the epidemic effectively.
As South Africa exhibits rather favorable conditions compared to many other
countries of the Global South, our findings constitute rough conservative
estimates and may support identifying strategies towards containing COVID-19 as
well as any major future pandemics in these countries.
| [
{
"created": "Sun, 12 Apr 2020 00:47:40 GMT",
"version": "v1"
}
] | 2020-04-28 | [
[
"Bossert",
"Andreas",
""
],
[
"Kersting",
"Moritz",
""
],
[
"Timme",
"Marc",
""
],
[
"Schröder",
"Malte",
""
],
[
"Feki",
"Azza",
""
],
[
"Coetzee",
"Justin",
""
],
[
"Schlüter",
"Jan",
""
]
] | COVID-19 has spread from China across Europe and the United States and has become a global pandemic. In countries of the Global South, due to often weaker socioeconomic options and health care systems, effective local countermeasures remain debated. We combine large-scale socioeconomic and traffic survey data with detailed agent-based simulations of local transportation to analyze COVID-19 spreading in a regional model for the Nelson Mandela Bay Municipality in South Africa under a range of countermeasure scenarios. The simulations indicate that any realistic containment strategy, including those similar to the one ongoing in South Africa, may yield a manifold overload of available intensive care units (ICUs). Only immediate and the most severe countermeasures, up to a complete lock-down that essentially inhibits all joint human activities, can contain the epidemic effectively. As South Africa exhibits rather favorable conditions compared to many other countries of the Global South, our findings constitute rough conservative estimates and may support identifying strategies towards containing COVID-19 as well as any major future pandemics in these countries. |
1809.09210 | Evan Weissburg | Evan Weissburg, Ian Bulovic | Predicting protein secondary structure with Neural Machine Translation | 9 pages, 9 figures, 2 tables | null | null | null | q-bio.QM q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present analysis of a novel tool for protein secondary structure
prediction using the recently-investigated Neural Machine Translation
framework. The tool provides a fast and accurate folding prediction based on
primary structure with subsecond prediction time even for batched inputs. We
hypothesize that Neural Machine Translation can improve upon current predictive
accuracy by better encoding complex relationships between nearby but
non-adjacent amino acids. We overview our modifications to the framework in
order to improve accuracy on protein sequences. We report 65.9% Q3 accuracy and
analyze the strengths and weaknesses of our predictive model.
| [
{
"created": "Mon, 24 Sep 2018 20:36:58 GMT",
"version": "v1"
},
{
"created": "Sat, 8 May 2021 08:47:24 GMT",
"version": "v2"
}
] | 2021-05-11 | [
[
"Weissburg",
"Evan",
""
],
[
"Bulovic",
"Ian",
""
]
] | We present analysis of a novel tool for protein secondary structure prediction using the recently-investigated Neural Machine Translation framework. The tool provides a fast and accurate folding prediction based on primary structure with subsecond prediction time even for batched inputs. We hypothesize that Neural Machine Translation can improve upon current predictive accuracy by better encoding complex relationships between nearby but non-adjacent amino acids. We overview our modifications to the framework in order to improve accuracy on protein sequences. We report 65.9% Q3 accuracy and analyze the strengths and weaknesses of our predictive model. |
q-bio/0411022 | Trinh Xuan Hoang | Marek Cieplak, Annalisa Pastore, Trinh Xuan Hoang | Mechanical properties of the domains of titin in a Go-like model | 11 pages, 19 figures, to appear in J. Chem. Phys | null | 10.1063/1.1839572 | null | q-bio.BM cond-mat.soft | null | Comparison of properties of three domains of titin, I1, I27 and I28, in a
simple geometry-based model shows that despite a high structural homology
between their native states, different domains show similar but distinguishable
mechanical properties. Folding properties of the separate domains are predicted
to be diversified, which reflects the sensitivity of the kinetics to the details of
native structures. The Go-like model corresponding to the experimentally
resolved native structure of the I1 domain is found to provide the biggest
thermodynamic and mechanical stability compared to the other domains studied
here. We analyze elastic, thermodynamic and kinetic properties of several
structures corresponding to the I28 domain as obtained through homology-based
modeling. We discuss the ability of the models of the I28 domain to reproduce
experimental results qualitatively. A strengthening of contacts that involve
hydrophobic amino acids does not affect theoretical comparisons of the domains.
Tandem linkages of up to five identical or different domains unravel in a
serial fashion at low temperatures. We study the nature of the intermediate
state that arises in the early stages of the serial unraveling and find it to
qualitatively agree with the results of Marszalek et al.
| [
{
"created": "Sun, 7 Nov 2004 07:04:12 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Cieplak",
"Marek",
""
],
[
"Pastore",
"Annalisa",
""
],
[
"Hoang",
"Trinh Xuan",
""
]
] | Comparison of properties of three domains of titin, I1, I27 and I28, in a simple geometry-based model shows that despite a high structural homology between their native states different domains show similar but distinguishable mechanical properties. Folding properties of the separate domains are predicted to be diversified which reflects sensitivity of the kinetics to the details of native structures. The Go-like model corresponding to the experimentally resolved native structure of the I1 domain is found to provide the biggest thermodynamic and mechanical stability compared to the other domains studied here. We analyze elastic, thermodynamic and kinetic properties of several structures corresponding to the I28 domain as obtained through homology-based modeling. We discuss the ability of the models of the I28 domain to reproduce experimental results qualitatively. A strengthening of contacts that involve hydrophobic amino acids does not affect theoretical comparisons of the domains. Tandem linkages of up to five identical or different domains unravel in a serial fashion at low temperatures. We study the nature of the intermediate state that arises in the early stages of the serial unraveling and find it to qualitatively agree with the results of Marszalek et al. |
1609.00959 | Ciaran Evans | Ciaran Evans, Johanna Hardin, Daniel Stoebel | Selecting between-sample RNA-Seq normalization methods from the
perspective of their assumptions | 20 pages, 6 figures, 1 table. Supplementary information contains 9
pages, 1 table. For associated simulation code, see
https://github.com/ciaranlevans/rnaSeqAssumptions | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RNA-Seq is a widely-used method for studying the behavior of genes under
different biological conditions. An essential step in an RNA-Seq study is
normalization, in which raw data are adjusted to account for factors that
prevent direct comparison of expression measures. Errors in normalization can
have a significant impact on downstream analysis, such as inflated false
positives in differential expression analysis. An under-emphasized feature of
normalization is the assumptions upon which the methods rely and how the
validity of these assumptions can have a substantial impact on the performance
of the methods. In this paper, we explain how assumptions provide the link
between raw RNA-Seq read counts and meaningful measures of gene expression. We
examine normalization methods from the perspective of their assumptions, as an
understanding of methodological assumptions is necessary for choosing methods
appropriate for the data at hand. Furthermore, we discuss why normalization
methods perform poorly when their assumptions are violated and how this causes
problems in subsequent analysis. To analyze a biological experiment,
researchers must select a normalization method with assumptions that are met
and that produces a meaningful measure of expression for the given experiment.
| [
{
"created": "Sun, 4 Sep 2016 17:03:06 GMT",
"version": "v1"
}
] | 2016-09-06 | [
[
"Evans",
"Ciaran",
""
],
[
"Hardin",
"Johanna",
""
],
[
"Stoebel",
"Daniel",
""
]
] | RNA-Seq is a widely-used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An under-emphasized feature of normalization is the assumptions upon which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this paper, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment. |
0807.4344 | Oren Elrad | Oren M. Elrad and Michael F. Hagan | Mechanisms of Size Control and Polymorphism in Viral Capsid Assembly | null | Elrad, OM and Hagan MF: Nano Letters 8 : 3850 2008 | 10.1021/nl802269a | null | q-bio.BM cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We simulate the assembly dynamics of icosahedral capsids from subunits that
interconvert between different conformations (or quasi-equivalent states). The
simulations identify mechanisms by which subunits form empty capsids with only
one morphology but adaptively assemble into different icosahedral morphologies
around nanoparticle cargoes with varying sizes, as seen in recent experiments
with brome mosaic virus (BMV) capsid proteins. Adaptive cargo encapsidation
requires moderate cargo-subunit interaction strengths; stronger interactions
frustrate assembly by stabilizing intermediates with incommensurate curvature.
We compare simulation results to experiments with cowpea chlorotic mottle virus
empty capsids and BMV capsids assembled on functionalized nanoparticles and
suggest new cargo encapsidation experiments. Finally, we find that both empty
and templated capsids maintain the precise spatial ordering of subunit
conformations seen in the crystal structure even if interactions that preserve
this arrangement are favored by as little as the thermal energy, consistent
with experimental observations that different subunit conformations are highly
similar.
| [
{
"created": "Mon, 28 Jul 2008 03:22:22 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Oct 2008 19:20:30 GMT",
"version": "v2"
},
{
"created": "Mon, 29 Jun 2009 22:40:33 GMT",
"version": "v3"
}
] | 2009-09-29 | [
[
"Elrad",
"Oren M.",
""
],
[
"Hagan",
"Michael F.",
""
]
] | We simulate the assembly dynamics of icosahedral capsids from subunits that interconvert between different conformations (or quasi-equivalent states). The simulations identify mechanisms by which subunits form empty capsids with only one morphology but adaptively assemble into different icosahedral morphologies around nanoparticle cargoes with varying sizes, as seen in recent experiments with brome mosaic virus (BMV) capsid proteins. Adaptive cargo encapsidation requires moderate cargo-subunit interaction strengths; stronger interactions frustrate assembly by stabilizing intermediates with incommensurate curvature. We compare simulation results to experiments with cowpea chlorotic mottle virus empty capsids and BMV capsids assembled on functionalized nanoparticles and suggest new cargo encapsidation experiments. Finally, we find that both empty and templated capsids maintain the precise spatial ordering of subunit conformations seen in the crystal structure even if interactions that preserve this arrangement are favored by as little as the thermal energy, consistent with experimental observations that different subunit conformations are highly similar. |
1610.00181 | Jing Xu | Chai Lor, Joseph D. Lopes, Michelle K. Mattson-Hoss, Jing Xu, and
Linda S. Hirst | A Simple Experimental Model to Investigate Force Range for Membrane
Nanotube Formation | null | Frontiers in Materials, 3, 6 (2016) | 10.3389/fmats.2016.00006 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The presence of membrane tubules in living cells is essential to many
biological processes. In cells, one mechanism to form nanosized lipid tubules
is via molecular motor induced bilayer extraction. In this paper, we describe a
simple experimental model to investigate the forces required for lipid tube
formation using kinesin motors anchored to DOPC vesicles. Previous related
studies have used molecular motors actively pulling on the membrane to extract
a nanotube. Here, we invert the system geometry; molecular motors are used as
static anchors linking DOPC vesicles to a two-dimensional microtubule network
and an external flow is introduced to generate nanotubes facilitated by the
drag force. We found that a drag force of ~7 pN was sufficient for tubule
extraction for vesicles ranging from 1 to 2 um in radius. By our method, we
found that the force generated by a single molecular motor was sufficient for
membrane tubule extraction from a spherical lipid vesicle.
| [
{
"created": "Sat, 1 Oct 2016 20:23:10 GMT",
"version": "v1"
}
] | 2016-10-04 | [
[
"Lor",
"Chai",
""
],
[
"Lopes",
"Joseph D.",
""
],
[
"Mattson-Hoss",
"Michelle K.",
""
],
[
"Xu",
"Jing",
""
],
[
"Hirst",
"Linda S.",
""
]
] | The presence of membrane tubules in living cells is essential to many biological processes. In cells, one mechanism to form nanosized lipid tubules is via molecular motor induced bilayer extraction. In this paper, we describe a simple experimental model to investigate the forces required for lipid tube formation using kinesin motors anchored to DOPC vesicles. Previous related studies have used molecular motors actively pulling on the membrane to extract a nanotube. Here, we invert the system geometry; molecular motors are used as static anchors linking DOPC vesicles to a two-dimensional microtubule network and an external flow is introduced to generate nanotubes facilitated by the drag force. We found that a drag force of ~7 pN was sufficient for tubule extraction for vesicles ranging from 1 to 2 um in radius. By our method, we found that the force generated by a single molecular motor was sufficient for membrane tubule extraction from a spherical lipid vesicle. |
1801.02769 | Jungwoo Lee | Jungwoo Lee, Sejoon Oh, Lee Sael | GIFT: Guided and Interpretable Factorization for Tensors - An
Application to Large-Scale Multi-platform Cancer Analysis | null | null | 10.1093/bioinformatics/bty490 | null | q-bio.GN cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given multi-platform genome data with prior knowledge of functional gene
sets, how can we extract interpretable latent relationships between patients
and genes? More specifically, how can we devise a tensor factorization method
which produces an interpretable gene factor matrix based on gene set
information while maintaining the decomposition quality and speed? We propose
GIFT, a Guided and Interpretable Factorization for Tensors. GIFT provides
interpretable factor matrices by encoding prior knowledge as a regularization
term in its objective function. Experiment results demonstrate that GIFT
produces interpretable factorizations with high scalability and accuracy, while
other methods lack interpretability. We apply GIFT to the PanCan12 dataset, and
GIFT reveals significant relations between cancers, gene sets, and genes, such
as influential gene sets for a specific cancer (e.g., interferon-gamma response
gene set for ovarian cancer) or relations between cancers and genes (e.g., BRCA
cancer - APOA1 gene and OV, UCEC cancers - BST2 gene).
| [
{
"created": "Tue, 9 Jan 2018 02:59:08 GMT",
"version": "v1"
}
] | 2018-07-09 | [
[
"Lee",
"Jungwoo",
""
],
[
"Oh",
"Sejoon",
""
],
[
"Sael",
"Lee",
""
]
] | Given multi-platform genome data with prior knowledge of functional gene sets, how can we extract interpretable latent relationships between patients and genes? More specifically, how can we devise a tensor factorization method which produces an interpretable gene factor matrix based on gene set information while maintaining the decomposition quality and speed? We propose GIFT, a Guided and Interpretable Factorization for Tensors. GIFT provides interpretable factor matrices by encoding prior knowledge as a regularization term in its objective function. Experiment results demonstrate that GIFT produces interpretable factorizations with high scalability and accuracy, while other methods lack interpretability. We apply GIFT to the PanCan12 dataset, and GIFT reveals significant relations between cancers, gene sets, and genes, such as influential gene sets for specific cancer (e.g., interferon-gamma response gene set for ovarian cancer) or relations between cancers and genes (e.g., BRCA cancer - APOA1 gene and OV, UCEC cancers - BST2 gene). |
1408.5371 | Robert Leech | Gregory Scott, Peter J. Hellyer, Adam Hampshire, Robert Leech | Exploring spatiotemporal network transitions in task functional MRI | 43 pages, 7 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A critical question for cognitive neuroscience regards how transitions
between cognitive states emerge from the dynamic activity of functional brain
networks. However, current methodologies cannot easily evaluate both the
spatial and temporal changes in brain networks with cognitive state. Here we
combine a simple data reorganization with spatial ICA, enabling a
spatiotemporal ICA (stICA) which captures the consistent evolution of networks
during onset and offset of a task. The technique was applied to fMRI datasets
involving alternating between rest and task and to simple synthetic data.
Starting and finishing time-points of periods of interest (anchors) were
defined at task block onsets and offsets. For each subject, the ten volumes
following each anchor were extracted and concatenated spatially, producing a
single 3D sample. Samples for all anchors and subjects were concatenated along
the fourth dimension. This 4D dataset was decomposed using ICA into
spatiotemporal components. One component exhibited the transition with task
onset from a default mode network (DMN) becoming less active to a
fronto-parietal control network (FPCN) becoming more active. We observed other
changes with relevance to understanding network dynamics, e.g., the DMN showed
a changing spatial distribution, shifting to an anterior/superior pattern of
deactivation during task from a posterior/inferior pattern during rest. By
anchoring analyses to periods associated with the onsets and offsets of task,
our approach reveals novel aspects of the dynamics of network activity
accompanying these transitions. Importantly, these findings were observed
without specifying a priori either the spatial networks or the task time
courses.
| [
{
"created": "Fri, 22 Aug 2014 17:58:42 GMT",
"version": "v1"
}
] | 2014-08-25 | [
[
"Scott",
"Gregory",
""
],
[
"Hellyer",
"Peter J.",
""
],
[
"Hampshire",
"Adam",
""
],
[
"Leech",
"Robert",
""
]
] | A critical question for cognitive neuroscience regards how transitions between cognitive states emerge from the dynamic activity of functional brain networks. However, current methodologies cannot easily evaluate both the spatial and temporal changes in brain networks with cognitive state. Here we combine a simple data reorganization with spatial ICA, enabling a spatiotemporal ICA (stICA) which captures the consistent evolution of networks during onset and offset of a task. The technique was applied to FMRI datasets involving alternating between rest and task and to simple synthetic data. Starting and finishing time-points of periods of interest (anchors) were defined at task block onsets and offsets. For each subject, the ten volumes following each anchor were extracted and concatenated spatially, producing a single 3D sample. Samples for all anchors and subjects were concatenated along the fourth dimension. This 4D dataset was decomposed using ICA into spatiotemporal components. One component exhibited the transition with task onset from a default mode network (DMN) becoming less active to a fronto-parietal control network (FPCN) becoming more active. We observed other changes with relevance to understanding network dynamics, e.g., the DMN showed a changing spatial distribution, shifting to an anterior/superior pattern of deactivation during task from a posterior/inferior pattern during rest. By anchoring analyses to periods associated with the onsets and offsets of task, our approach reveals novel aspects of the dynamics of network activity accompanying these transitions. Importantly, these findings were observed without specifying a priori either the spatial networks or the task time courses. |
2010.05079 | Indrajit Ghosh | Tanujit Chakraborty, Indrajit Ghosh, Tirna Mahajan and Tejasvi Arora | Nowcasting of COVID-19 confirmed cases: Foundations, trends, and
challenges | null | null | null | null | q-bio.PE stat.AP | http://creativecommons.org/licenses/by/4.0/ | The coronavirus disease 2019 (COVID-19) has become a public health emergency
of international concern affecting more than 200 countries and territories
worldwide. As of September 30, 2020, it has caused a pandemic outbreak with
more than 33 million confirmed infections and more than 1 million reported
deaths worldwide. Several statistical, machine learning, and hybrid models have
previously tried to forecast COVID-19 confirmed cases for profoundly affected
countries. Due to extreme uncertainty and nonstationarity in the time series
data, forecasting of COVID-19 confirmed cases has become a very challenging
job. For univariate time series forecasting, there are various statistical and
machine learning models available in the literature. But, epidemic forecasting
has a dubious track record. Its failures became more prominent due to
insufficient data input, flaws in modeling assumptions, high sensitivity of
estimates, lack of incorporation of epidemiological features, inadequate past
evidence on effects of available interventions, lack of transparency, errors,
lack of determinacy, and lack of expertise in crucial disciplines. This chapter
focuses on assessing different short-term forecasting models that can forecast
the daily COVID-19 cases for various countries. In the form of an empirical
study on forecasting accuracy, this chapter provides evidence to show that
there is no universal method available that can accurately forecast pandemic
data. Still, forecasters' predictions are useful for the effective allocation
of healthcare resources and will act as an early-warning system for government
policymakers.
| [
{
"created": "Sat, 10 Oct 2020 19:51:45 GMT",
"version": "v1"
}
] | 2020-10-13 | [
[
"Chakraborty",
"Tanujit",
""
],
[
"Ghosh",
"Indrajit",
""
],
[
"Mahajan",
"Tirna",
""
],
[
"Arora",
"Tejasvi",
""
]
] | The coronavirus disease 2019 (COVID-19) has become a public health emergency of international concern affecting more than 200 countries and territories worldwide. As of September 30, 2020, it has caused a pandemic outbreak with more than 33 million confirmed infections and more than 1 million reported deaths worldwide. Several statistical, machine learning, and hybrid models have previously tried to forecast COVID-19 confirmed cases for profoundly affected countries. Due to extreme uncertainty and nonstationarity in the time series data, forecasting of COVID-19 confirmed cases has become a very challenging job. For univariate time series forecasting, there are various statistical and machine learning models available in the literature. But, epidemic forecasting has a dubious track record. Its failures became more prominent due to insufficient data input, flaws in modeling assumptions, high sensitivity of estimates, lack of incorporation of epidemiological features, inadequate past evidence on effects of available interventions, lack of transparency, errors, lack of determinacy, and lack of expertise in crucial disciplines. This chapter focuses on assessing different short-term forecasting models that can forecast the daily COVID-19 cases for various countries. In the form of an empirical study on forecasting accuracy, this chapter provides evidence to show that there is no universal method available that can accurately forecast pandemic data. Still, forecasters' predictions are useful for the effective allocation of healthcare resources and will act as an early-warning system for government policymakers. |
2309.12435 | Nishchal Dwivedi | Akshita Patil and Nishchal Dwivedi | A comparative data study on dinosaur, bird and human bone attributes --
A supporting study for convergent evolution | 13 pages, 14 figures, 7 tables | null | null | null | q-bio.PE physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For over 165 million years, dinosaurs reigned on this planet. Their entire
existence saw variations in their body size and mass. Understanding the
relationship between various attributes such as femur length, breadth; humerus
length, breadth; tibia length, breadth and body mass of dinosaurs contributes
to our understanding of the Jurassic era and further provides reasoning for
bone and body size evolution of modern day descendants of those from the
Dinosauria clade. The following work consists of statistical evidence derived
from an encyclopedic data set consisting of a wide variety of measurements
pertaining to discovered fossils of a particular taxon of dinosaur. Our study
establishes linearly regressive correspondence between femur and humerus length
and radii. Furthermore, there is also a comparison with terrestrial bird bone
lengths, to verify the claim that birds are the closest living species to dinosaurs.
An analysis of bone ratios of early humans shows that terrestrial birds are
closer to humans than to dinosaurs. On the one hand, this challenges the
closeness of birds to dinosaurs; on the other, it makes a case for convergent
evolution between birds and humans, given the closeness of their regressive
fits.
A correlation between bone ratios of dinosaurs and early humans also advances
understanding of the structural and physical distinctions between the two
species. Overall, the work contains an evaluation of dinosaur skeletons and
promotes further exploration and research in the paleontological field to
strengthen the conclusions drawn thus far.
| [
{
"created": "Thu, 21 Sep 2023 19:06:22 GMT",
"version": "v1"
}
] | 2023-09-25 | [
[
"Patil",
"Akshita",
""
],
[
"Dwivedi",
"Nishchal",
""
]
] | For over 165 million years, dinosaurs reigned on this planet. Their entire existence saw variations in their body size and mass. Understanding the relationship between various attributes such as femur length, breadth; humerus length, breadth; tibia length, breadth and body mass of dinosaurs contributes to our understanding of the Jurassic era and further provides reasoning for bone and body size evolution of modern day descendants of those from the Dinosauria clade. The following work consists of statistical evidence derived from an encyclopedic data set consisting of a wide variety of measurements pertaining to discovered fossils of a particular taxon of dinosaur. Our study establishes linearly regressive correspondence between femur and humerus length and radii. Furthermore, there is also a comparison with terrestrial bird bone lengths, to verify the claim that birds are the closest living species to dinosaurs. An analysis of bone ratios of early humans shows that terrestrial birds are closer to humans than to dinosaurs. On the one hand, this challenges the closeness of birds to dinosaurs; on the other, it makes a case for convergent evolution between birds and humans, given the closeness of their regressive fits. A correlation between bone ratios of dinosaurs and early humans also advances understanding of the structural and physical distinctions between the two species. Overall, the work contains an evaluation of dinosaur skeletons and promotes further exploration and research in the paleontological field to strengthen the conclusions drawn thus far. |
2007.01902 | Aanchal Mongia | Aanchal Mongia, Sanjay Kr. Saha, Emilie Chouzenoux and Angshul
Majumdar | A computational approach to aid clinicians in selecting anti-viral drugs
for COVID-19 trials | null | null | null | null | q-bio.QM | http://creativecommons.org/publicdomain/zero/1.0/ | COVID-19 has fast-paced drug re-positioning for its treatment. This work
builds computational models for the same. The aim is to assist clinicians with
a tool for selecting prospective antiviral treatments. Since the virus is known
to mutate fast, the tool is likely to help clinicians in selecting the right
set of antivirals for the mutated isolate.
The main contribution of this work is a manually curated, publicly shared
database comprising existing associations between viruses and their
corresponding antivirals. The database gathers similarity information using the
chemical structure of drugs and the genomic structure of viruses. Along with
this database, we make available a set of state-of-the-art computational drug
re-positioning tools based on matrix completion. The tools are first analysed
on a standard set of experimental protocols for drug target interactions. The
best performing ones are applied for the task of re-positioning antivirals for
COVID-19. These tools select six drugs out of which four are currently under
various stages of trial, namely Remdesivir (as a cure), Ribavirin (in
combination with others for cure), Umifenovir (as a prophylactic and cure) and
Sofosbuvir (as a cure). Another unanimous prediction is Tenofovir alafenamide,
which is a novel tenofovir prodrug developed in order to improve renal safety
when compared to the counterpart tenofovir disoproxil. Both are under trial,
the former as a cure and the latter as a prophylactic. These results establish
that the computational methods are in sync with the state-of-practice. We also
demonstrate how the selected drugs change as SARS-CoV-2 mutates over time,
suggesting the importance of such a tool in drug prediction.
The dataset and software are publicly available at
https://github.com/aanchalMongia/DVA and the prediction tool with a
user-friendly interface is available at http://dva.salsa.iiitd.edu.in.
| [
{
"created": "Fri, 3 Jul 2020 18:30:14 GMT",
"version": "v1"
},
{
"created": "Fri, 31 Jul 2020 14:54:33 GMT",
"version": "v2"
}
] | 2020-08-03 | [
[
"Mongia",
"Aanchal",
""
],
[
"Saha",
"Sanjay Kr.",
""
],
[
"Chouzenoux",
"Emilie",
""
],
[
"Majumdar",
"Angshul",
""
]
] | COVID-19 has fast-paced drug re-positioning for its treatment. This work builds computational models for the same. The aim is to assist clinicians with a tool for selecting prospective antiviral treatments. Since the virus is known to mutate fast, the tool is likely to help clinicians in selecting the right set of antivirals for the mutated isolate. The main contribution of this work is a manually curated, publicly shared database comprising existing associations between viruses and their corresponding antivirals. The database gathers similarity information using the chemical structure of drugs and the genomic structure of viruses. Along with this database, we make available a set of state-of-the-art computational drug re-positioning tools based on matrix completion. The tools are first analysed on a standard set of experimental protocols for drug target interactions. The best performing ones are applied for the task of re-positioning antivirals for COVID-19. These tools select six drugs out of which four are currently under various stages of trial, namely Remdesivir (as a cure), Ribavirin (in combination with others for cure), Umifenovir (as a prophylactic and cure) and Sofosbuvir (as a cure). Another unanimous prediction is Tenofovir alafenamide, which is a novel tenofovir prodrug developed in order to improve renal safety when compared to the counterpart tenofovir disoproxil. Both are under trial, the former as a cure and the latter as a prophylactic. These results establish that the computational methods are in sync with the state-of-practice. We also demonstrate how the selected drugs change as SARS-CoV-2 mutates over time, suggesting the importance of such a tool in drug prediction. The dataset and software are publicly available at https://github.com/aanchalMongia/DVA and the prediction tool with a user-friendly interface is available at http://dva.salsa.iiitd.edu.in. |
2302.08985 | David Clark | David G. Clark, L.F. Abbott | Theory of coupled neuronal-synaptic dynamics | 20 pages, 9 figures | null | null | null | q-bio.NC cond-mat.dis-nn cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In neural circuits, synaptic strengths influence neuronal activity by shaping
network dynamics, and neuronal activity influences synaptic strengths through
activity-dependent plasticity. Motivated by this fact, we study a
recurrent-network model in which neuronal units and synaptic couplings are
interacting dynamic variables, with couplings subject to Hebbian modification
with decay around quenched random strengths. Rather than assigning a specific
role to the plasticity, we use dynamical mean-field theory and other techniques
to systematically characterize the neuronal-synaptic dynamics, revealing a rich
phase diagram. Adding Hebbian plasticity slows activity in chaotic networks and
can induce chaos in otherwise quiescent networks. Anti-Hebbian plasticity
quickens activity and produces an oscillatory component. Analysis of the
Jacobian shows that Hebbian and anti-Hebbian plasticity push locally unstable
modes toward the real and imaginary axes, explaining these behaviors. Both
random-matrix and Lyapunov analysis show that strong Hebbian plasticity
segregates network timescales into two bands with a slow, synapse-dominated
band driving the dynamics, suggesting a flipped view of the network as synapses
connected by neurons. For increasing strength, Hebbian plasticity initially
raises the complexity of the dynamics, measured by the maximum Lyapunov
exponent and attractor dimension, but then decreases these metrics, likely due
to the proliferation of stable fixed points. We compute the marginally stable
spectra of such fixed points as well as their number, showing exponential
growth with network size. In chaotic states with strong Hebbian plasticity, a
stable fixed point of neuronal dynamics is destabilized by synaptic dynamics,
allowing any neuronal state to be stored as a stable fixed point by halting the
plasticity. This phase of freezable chaos offers a new mechanism for working
memory.
| [
{
"created": "Fri, 17 Feb 2023 16:42:59 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Jan 2024 22:41:29 GMT",
"version": "v2"
}
] | 2024-01-12 | [
[
"Clark",
"David G.",
""
],
[
"Abbott",
"L. F.",
""
]
] | In neural circuits, synaptic strengths influence neuronal activity by shaping network dynamics, and neuronal activity influences synaptic strengths through activity-dependent plasticity. Motivated by this fact, we study a recurrent-network model in which neuronal units and synaptic couplings are interacting dynamic variables, with couplings subject to Hebbian modification with decay around quenched random strengths. Rather than assigning a specific role to the plasticity, we use dynamical mean-field theory and other techniques to systematically characterize the neuronal-synaptic dynamics, revealing a rich phase diagram. Adding Hebbian plasticity slows activity in chaotic networks and can induce chaos in otherwise quiescent networks. Anti-Hebbian plasticity quickens activity and produces an oscillatory component. Analysis of the Jacobian shows that Hebbian and anti-Hebbian plasticity push locally unstable modes toward the real and imaginary axes, explaining these behaviors. Both random-matrix and Lyapunov analysis show that strong Hebbian plasticity segregates network timescales into two bands with a slow, synapse-dominated band driving the dynamics, suggesting a flipped view of the network as synapses connected by neurons. For increasing strength, Hebbian plasticity initially raises the complexity of the dynamics, measured by the maximum Lyapunov exponent and attractor dimension, but then decreases these metrics, likely due to the proliferation of stable fixed points. We compute the marginally stable spectra of such fixed points as well as their number, showing exponential growth with network size. In chaotic states with strong Hebbian plasticity, a stable fixed point of neuronal dynamics is destabilized by synaptic dynamics, allowing any neuronal state to be stored as a stable fixed point by halting the plasticity. This phase of freezable chaos offers a new mechanism for working memory. |
1309.3531 | Giovanna De Palo | Giovanna De Palo and Robert G. Endres | Unraveling Adaptation in Eukaryotic Pathways: Lessons from Protocells | accepted for publication in PLoS Computational Biology; 19 pages, 8
figures | null | 10.1371/journal.pcbi.1003300 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Eukaryotic adaptation pathways operate within wide-ranging environmental
conditions without stimulus saturation. Despite numerous differences in the
adaptation mechanisms employed by bacteria and eukaryotes, all require energy
consumption. Here, we present two minimal models showing that expenditure of
energy by the cell is not essential for adaptation. Both models share important
features with large eukaryotic cells: they employ small diffusible molecules
and involve receptor subunits resembling highly conserved G-protein cascades.
Analyzing the drawbacks of these models helps us understand the benefits of
energy consumption, in terms of adjustability of response and adaptation times
as well as separation of cell-external sensing and cell-internal signaling. Our
work thus sheds new light on the evolution of adaptation mechanisms in complex
systems.
| [
{
"created": "Fri, 13 Sep 2013 18:16:58 GMT",
"version": "v1"
}
] | 2014-03-05 | [
[
"De Palo",
"Giovanna",
""
],
[
"Endres",
"Robert G.",
""
]
] | Eukaryotic adaptation pathways operate within wide-ranging environmental conditions without stimulus saturation. Despite numerous differences in the adaptation mechanisms employed by bacteria and eukaryotes, all require energy consumption. Here, we present two minimal models showing that expenditure of energy by the cell is not essential for adaptation. Both models share important features with large eukaryotic cells: they employ small diffusible molecules and involve receptor subunits resembling highly conserved G-protein cascades. Analyzing the drawbacks of these models helps us understand the benefits of energy consumption, in terms of adjustability of response and adaptation times as well as separation of cell-external sensing and cell-internal signaling. Our work thus sheds new light on the evolution of adaptation mechanisms in complex systems. |
2403.15274 | Gangqing Hu | Jinge Wang, Zien Cheng, Qiuming Yao, Li Liu, Dong Xu, Gangqing Hu | Bioinformatics and Biomedical Informatics with ChatGPT: Year One Review | Peer-reviewed and accepted by Quantitative Biology | null | null | null | q-bio.OT cs.AI | http://creativecommons.org/licenses/by/4.0/ | The year 2023 marked a significant surge in the exploration of applying large
language model (LLM) chatbots, notably ChatGPT, across various disciplines. We
surveyed the applications of ChatGPT in bioinformatics and biomedical
informatics throughout the year, covering omics, genetics, biomedical text
mining, drug discovery, biomedical image understanding, bioinformatics
programming, and bioinformatics education. Our survey delineates the current
strengths and limitations of this chatbot in bioinformatics and offers insights
into potential avenues for future developments.
| [
{
"created": "Fri, 22 Mar 2024 15:16:23 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Jun 2024 15:50:31 GMT",
"version": "v2"
}
] | 2024-06-13 | [
[
"Wang",
"Jinge",
""
],
[
"Cheng",
"Zien",
""
],
[
"Yao",
"Qiuming",
""
],
[
"Liu",
"Li",
""
],
[
"Xu",
"Dong",
""
],
[
"Hu",
"Gangqing",
""
]
] | The year 2023 marked a significant surge in the exploration of applying large language model (LLM) chatbots, notably ChatGPT, across various disciplines. We surveyed the applications of ChatGPT in bioinformatics and biomedical informatics throughout the year, covering omics, genetics, biomedical text mining, drug discovery, biomedical image understanding, bioinformatics programming, and bioinformatics education. Our survey delineates the current strengths and limitations of this chatbot in bioinformatics and offers insights into potential avenues for future developments. |
1108.4841 | Alain Barrat | Juliette Stehl\'e, Nicolas Voirin, Alain Barrat, Ciro Cattuto,
Vittoria Colizza, Lorenzo Isella, Corinne R\'egis, Jean-Fran\c{c}ois Pinton,
Nagham Khanafer, Wouter Van den Broeck, Philippe Vanhems | Simulation of an SEIR infectious disease model on the dynamic contact
network of conference attendees | null | BMC Medicine 2011, 9:87 | 10.1186/1741-7015-9-87 | null | q-bio.QM physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The spread of infectious diseases crucially depends on the pattern of
contacts among individuals. Knowledge of these patterns is thus essential to
inform models and computational efforts. Few empirical studies are however
available that provide estimates of the number and duration of contacts among
social groups. Moreover, their space and time resolution are limited, so that
data is not explicit at the person-to-person level, and the dynamical aspect of
the contacts is disregarded. Here, we want to assess the role of data-driven
dynamic contact patterns among individuals, and in particular of their temporal
aspects, in shaping the spread of a simulated epidemic in the population.
We consider high resolution data of face-to-face interactions between the
attendees of a conference, obtained from the deployment of an infrastructure
based on Radio Frequency Identification (RFID) devices that assess mutual
face-to-face proximity. The spread of epidemics along these interactions is
simulated through an SEIR model, using both the dynamical network of contacts
defined by the collected data, and two aggregated versions of such network, in
order to assess the role of the data temporal aspects.
We show that, on the timescales considered, an aggregated network taking into
account the daily duration of contacts is a good approximation to the full
resolution network, whereas a homogeneous representation which retains only the
topology of the contact network fails in reproducing the size of the epidemic.
These results have important implications in understanding the level of
detail needed to correctly inform computational models for the study and
management of real epidemics.
| [
{
"created": "Wed, 24 Aug 2011 13:52:28 GMT",
"version": "v1"
}
] | 2011-08-25 | [
[
"Stehlé",
"Juliette",
""
],
[
"Voirin",
"Nicolas",
""
],
[
"Barrat",
"Alain",
""
],
[
"Cattuto",
"Ciro",
""
],
[
"Colizza",
"Vittoria",
""
],
[
"Isella",
"Lorenzo",
""
],
[
"Régis",
"Corinne",
""
],
[
"Pinton",
"Jean-François",
""
],
[
"Khanafer",
"Nagham",
""
],
[
"Broeck",
"Wouter Van den",
""
],
[
"Vanhems",
"Philippe",
""
]
] | The spread of infectious diseases crucially depends on the pattern of contacts among individuals. Knowledge of these patterns is thus essential to inform models and computational efforts. Few empirical studies are however available that provide estimates of the number and duration of contacts among social groups. Moreover, their space and time resolution are limited, so that data is not explicit at the person-to-person level, and the dynamical aspect of the contacts is disregarded. Here, we want to assess the role of data-driven dynamic contact patterns among individuals, and in particular of their temporal aspects, in shaping the spread of a simulated epidemic in the population. We consider high resolution data of face-to-face interactions between the attendees of a conference, obtained from the deployment of an infrastructure based on Radio Frequency Identification (RFID) devices that assess mutual face-to-face proximity. The spread of epidemics along these interactions is simulated through an SEIR model, using both the dynamical network of contacts defined by the collected data, and two aggregated versions of such network, in order to assess the role of the data temporal aspects. We show that, on the timescales considered, an aggregated network taking into account the daily duration of contacts is a good approximation to the full resolution network, whereas a homogeneous representation which retains only the topology of the contact network fails in reproducing the size of the epidemic. These results have important implications in understanding the level of detail needed to correctly inform computational models for the study and management of real epidemics. |
2101.00950 | Irene Buvat | AS Dirand, F Frouin, I Buvat | A downsampling strategy to assess the predictive value of radiomic
features | null | null | null | null | q-bio.QM eess.IV physics.data-an | http://creativecommons.org/licenses/by/4.0/ | Many studies are devoted to the design of radiomic models for a prediction
task. When no effective model is found, it is often difficult to know whether
this is because the radiomic features do not include information relevant to
the task or because of insufficient data. We propose a downsampling method to answer that
question when considering a classification task into two groups. Using two
large patient cohorts, several experimental configurations involving different
numbers of patients were created. Univariate or multivariate radiomic models
were designed from each configuration. Their performance as reflected by the
Youden index (YI) and Area Under the receiver operating characteristic Curve
(AUC) was compared to the stable performance obtained with the highest number
of patients. A downsampling method is described to predict the YI and AUC
achievable with a large number of patients. Using the multivariate models
involving machine learning, YI and AUC increased with the number of patients
while they decreased for univariate models. The downsampling method better
estimated YI and AUC obtained with the largest number of patients than the YI
and AUC obtained using the number of available patients and identifies the lack
of information relevant to the classification task when no such information
exists.
| [
{
"created": "Mon, 28 Dec 2020 10:33:57 GMT",
"version": "v1"
}
] | 2021-01-05 | [
[
"Dirand",
"AS",
""
],
[
"Frouin",
"F",
""
],
[
"Buvat",
"I",
""
]
] | Many studies are devoted to the design of radiomic models for a prediction task. When no effective model is found, it is often difficult to know whether this is because the radiomic features do not include information relevant to the task or because of insufficient data. We propose a downsampling method to answer that question when considering a classification task into two groups. Using two large patient cohorts, several experimental configurations involving different numbers of patients were created. Univariate or multivariate radiomic models were designed from each configuration. Their performance as reflected by the Youden index (YI) and Area Under the receiver operating characteristic Curve (AUC) was compared to the stable performance obtained with the highest number of patients. A downsampling method is described to predict the YI and AUC achievable with a large number of patients. Using the multivariate models involving machine learning, YI and AUC increased with the number of patients while they decreased for univariate models. The downsampling method better estimated YI and AUC obtained with the largest number of patients than the YI and AUC obtained using the number of available patients and identifies the lack of information relevant to the classification task when no such information exists. |
1805.01277 | Gerrit Ecke | Gerrit A. Ecke, Fabian A. Mikulasch, Sebastian A. Bruijns, Thede
Witschel, Aristides B. Arrenberg and Hanspeter A. Mallot | Sparse Coding Predicts Optic Flow Specificities of Zebrafish Pretectal
Neurons | Published Conference Paper from ICANN 2018, Rhodes | Artificial Neural Networks and Machine Learning - ICANN 2018.
ICANN 2018. Lecture Notes in Computer Science, vol 11141. Springer, Cham | 10.1007/978-3-030-01424-7_64 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Zebrafish pretectal neurons exhibit specificities for large-field optic flow
patterns associated with rotatory or translatory body motion. We investigate
the hypothesis that these specificities reflect the input statistics of natural
optic flow. Realistic motion sequences were generated using computer graphics
simulating self-motion in an underwater scene. Local retinal motion was
estimated with a motion detector and encoded in four populations of
directionally tuned retinal ganglion cells, represented as two signed input
variables. This activity was then used as input into one of two learning
networks: a sparse coding network (competitive learning) and backpropagation
network (supervised learning). Both simulations develop specificities for optic
flow which are comparable to those found in a neurophysiological study (Kubo et
al. 2014), and relative frequencies of the various neuronal responses are best
modeled by the sparse coding approach. We conclude that the optic flow neurons
in the zebrafish pretectum do reflect the optic flow statistics. The predicted
vectorial receptive fields show typical optic flow fields but also "Gabor" and
dipole-shaped patterns that likely reflect difference fields needed for
reconstruction by linear superposition.
| [
{
"created": "Thu, 3 May 2018 13:10:50 GMT",
"version": "v1"
},
{
"created": "Thu, 24 May 2018 12:02:08 GMT",
"version": "v2"
},
{
"created": "Thu, 26 Jul 2018 08:45:02 GMT",
"version": "v3"
},
{
"created": "Tue, 9 Oct 2018 14:30:36 GMT",
"version": "v4"
}
] | 2018-10-10 | [
[
"Ecke",
"Gerrit A.",
""
],
[
"Mikulasch",
"Fabian A.",
""
],
[
"Bruijns",
"Sebastian A.",
""
],
[
"Witschel",
"Thede",
""
],
[
"Arrenberg",
"Aristides B.",
""
],
[
"Mallot",
"Hanspeter A.",
""
]
] | Zebrafish pretectal neurons exhibit specificities for large-field optic flow patterns associated with rotatory or translatory body motion. We investigate the hypothesis that these specificities reflect the input statistics of natural optic flow. Realistic motion sequences were generated using computer graphics simulating self-motion in an underwater scene. Local retinal motion was estimated with a motion detector and encoded in four populations of directionally tuned retinal ganglion cells, represented as two signed input variables. This activity was then used as input into one of two learning networks: a sparse coding network (competitive learning) and a backpropagation network (supervised learning). Both simulations develop specificities for optic flow which are comparable to those found in a neurophysiological study (Kubo et al. 2014), and relative frequencies of the various neuronal responses are best modeled by the sparse coding approach. We conclude that the optic flow neurons in the zebrafish pretectum do reflect the optic flow statistics. The predicted vectorial receptive fields show typical optic flow fields but also "Gabor" and dipole-shaped patterns that likely reflect difference fields needed for reconstruction by linear superposition. |
1412.2788 | Brian Williams Dr | Brian G. Williams | Fitting and projecting HIV epidemics: Data, structure and parsimony | 12 pages | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding historical trends in the epidemic of HIV is important for
assessing current and projecting future trends in prevalence, incidence and
mortality and for evaluating the impact and cost-effectiveness of control
measures. In generalized epidemics the available data are of variable quality
among countries and limited mainly to trends in the prevalence of HIV among
women attending ante-natal clinics. In concentrated epidemics one needs, at the
very least, time trends in the prevalence of HIV among different risk groups,
including intravenous drug users, men-who-have-sex-with-men, and commercial sex
workers as well as the size of each group and the degree of overlap between
them. Here we focus on the comparatively straightforward problems presented by
generalized epidemics. We fit data from Kenya to a susceptible-infected model
and then successively add structure to the model, drawing on our knowledge of
the natural history of HIV, to explore the effect that different structural
aspects of the model have on the fits and the projections.
Both heterogeneity in risk and changes in behaviour over time are important
but easily confounded. Using a Weibull rather than exponential survival
function for people infected with HIV, in the absence of treatment, makes a
significant difference to the estimated trends in incidence and mortality and
to the projected trends. Allowing for population growth has a small effect on
the fits and the projections but is easy to include. Including details of the
demography adds substantially to the complexity of the model, increases the run
time by several orders of magnitude, but changes the fits and projections only
slightly and to an extent that is less than the uncertainty inherent in the
data. We make specific recommendations for the kind of model that would be
suitable for understanding and managing HIV epidemics in east and southern
Africa.
| [
{
"created": "Sat, 22 Nov 2014 20:55:59 GMT",
"version": "v1"
}
] | 2014-12-10 | [
[
"Williams",
"Brian G.",
""
]
] | Understanding historical trends in the epidemic of HIV is important for assessing current and projecting future trends in prevalence, incidence and mortality and for evaluating the impact and cost-effectiveness of control measures. In generalized epidemics the available data are of variable quality among countries and limited mainly to trends in the prevalence of HIV among women attending ante-natal clinics. In concentrated epidemics one needs, at the very least, time trends in the prevalence of HIV among different risk groups, including intravenous drug users, men-who-have-sex-with-men, and commercial sex workers as well as the size of each group and the degree of overlap between them. Here we focus on the comparatively straightforward problems presented by generalized epidemics. We fit data from Kenya to a susceptible-infected model and then successively add structure to the model, drawing on our knowledge of the natural history of HIV, to explore the effect that different structural aspects of the model have on the fits and the projections. Both heterogeneity in risk and changes in behaviour over time are important but easily confounded. Using a Weibull rather than exponential survival function for people infected with HIV, in the absence of treatment, makes a significant difference to the estimated trends in incidence and mortality and to the projected trends. Allowing for population growth has a small effect on the fits and the projections but is easy to include. Including details of the demography adds substantially to the complexity of the model, increases the run time by several orders of magnitude, but changes the fits and projections only slightly and to an extent that is less than the uncertainty inherent in the data. We make specific recommendations for the kind of model that would be suitable for understanding and managing HIV epidemics in east and southern Africa. |
1405.5944 | Julio Augusto Freyre-Gonz\'alez | Julio A. Freyre-Gonz\'alez, Jos\'e A. Alonso-Pav\'on, Luis G.
Trevi\~no-Quintanilla, Julio Collado-Vides | Functional architecture of Escherichia coli: new insights provided by a
natural decomposition approach | 12 pages, 4 figures, 1 table | Genome Biology 9:R154 (2008) | 10.1186/gb-2008-9-10-r154 | null | q-bio.MN q-bio.GN | http://creativecommons.org/licenses/by/3.0/ | Background: Previous studies have used different methods in an effort to
extract the modular organization of transcriptional regulatory networks (TRNs).
However, these approaches are not natural, as they try to cluster highly
connected genes into a module or locate known pleiotropic transcription factors
in lower hierarchical layers. Here, we unravel the TRN of Escherichia coli by
separating it into its key elements, thus revealing its natural organization.
We also present a mathematical criterion, based on the topological features of
the TRN, to classify the network elements into one of two possible classes:
hierarchical or modular genes.
Results: We found that modular genes are clustered into physiologically
correlated groups validated by a statistical analysis of the enrichment of the
functional classes. Hierarchical genes encode transcription factors responsible
for coordinating module responses based on general interest signals.
Hierarchical elements correlate highly with the previously studied global
regulators, suggesting that this could be the first mathematical method to
identify global regulators. We identified a new element in TRNs never described
before: intermodular genes. These are structural genes that integrate, at the
promoter level, signals coming from different modules, and therefore from
different physiological responses. Using the concept of pleiotropy, we have
reconstructed the hierarchy of the network and discuss the role of feedforward
motifs in shaping the hierarchical backbone of the TRN.
Conclusions: This study sheds new light on the design principles underpinning
the organization of TRNs, showing a novel nonpyramidal architecture composed of
independent modules globally governed by hierarchical transcription factors,
whose responses are integrated by intermodular genes.
| [
{
"created": "Fri, 23 May 2014 01:20:15 GMT",
"version": "v1"
}
] | 2014-05-26 | [
[
"Freyre-González",
"Julio A.",
""
],
[
"Alonso-Pavón",
"José A.",
""
],
[
"Treviño-Quintanilla",
"Luis G.",
""
],
[
"Collado-Vides",
"Julio",
""
]
] | Background: Previous studies have used different methods in an effort to extract the modular organization of transcriptional regulatory networks (TRNs). However, these approaches are not natural, as they try to cluster highly connected genes into a module or locate known pleiotropic transcription factors in lower hierarchical layers. Here, we unravel the TRN of Escherichia coli by separating it into its key elements, thus revealing its natural organization. We also present a mathematical criterion, based on the topological features of the TRN, to classify the network elements into one of two possible classes: hierarchical or modular genes. Results: We found that modular genes are clustered into physiologically correlated groups validated by a statistical analysis of the enrichment of the functional classes. Hierarchical genes encode transcription factors responsible for coordinating module responses based on general interest signals. Hierarchical elements correlate highly with the previously studied global regulators, suggesting that this could be the first mathematical method to identify global regulators. We identified a new element in TRNs never described before: intermodular genes. These are structural genes that integrate, at the promoter level, signals coming from different modules, and therefore from different physiological responses. Using the concept of pleiotropy, we have reconstructed the hierarchy of the network and discuss the role of feedforward motifs in shaping the hierarchical backbone of the TRN. Conclusions: This study sheds new light on the design principles underpinning the organization of TRNs, showing a novel nonpyramidal architecture composed of independent modules globally governed by hierarchical transcription factors, whose responses are integrated by intermodular genes. |
1510.07296 | Emmanuel Quansah Mr | Suman Bhowmick, Emmanuel Quansah, Aladeen Basheer, Rana D. Parshad and
Ranjit Kumar Upadhyay | Predator interference effects on biological control: The "paradox" of
the generalist predator revisited | null | null | null | null | q-bio.PE math.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An interesting conundrum in biological control questions the efficiency of
generalist predators as biological control agents. Theory suggests generalist
predators are poor agents for biological control, primarily due to mutual
interference. However, field evidence shows they are actually quite effective in
regulating pest densities. In this work we provide a plausible answer to this
paradox. We analyze a three species model, where a generalist top predator is
introduced into an ecosystem as a biological control, to check the population
of a middle predator that in turn is depredating on a prey species. We show
that the inclusion of predator interference alone can cause the solution of
the top predator equation to blow-up in finite time, while there is global
existence in the no interference case. This result shows that interference
could actually cause a population explosion of the top predator, enabling it to
control the target species, thus corroborating recent field evidence. Our
results might also partially explain the population explosion of certain
species, introduced originally for biological control purposes, such as the
cane toad (\emph{Bufo marinus}) in Australia, which now functions as a
generalist top predator. We also show both Turing instability and
spatio-temporal chaos in the model. Lastly we investigate time delay effects.
| [
{
"created": "Sun, 25 Oct 2015 20:25:24 GMT",
"version": "v1"
}
] | 2015-10-27 | [
[
"Bhowmick",
"Suman",
""
],
[
"Quansah",
"Emmanuel",
""
],
[
"Basheer",
"Aladeen",
""
],
[
"Parshad",
"Rana D.",
""
],
[
"Upadhyay",
"Ranjit Kumar",
""
]
] | An interesting conundrum in biological control questions the efficiency of generalist predators as biological control agents. Theory suggests generalist predators are poor agents for biological control, primarily due to mutual interference. However, field evidence shows they are actually quite effective in regulating pest densities. In this work we provide a plausible answer to this paradox. We analyze a three species model, where a generalist top predator is introduced into an ecosystem as a biological control, to check the population of a middle predator that in turn is depredating on a prey species. We show that the inclusion of predator interference alone can cause the solution of the top predator equation to blow-up in finite time, while there is global existence in the no interference case. This result shows that interference could actually cause a population explosion of the top predator, enabling it to control the target species, thus corroborating recent field evidence. Our results might also partially explain the population explosion of certain species, introduced originally for biological control purposes, such as the cane toad (\emph{Bufo marinus}) in Australia, which now functions as a generalist top predator. We also show both Turing instability and spatio-temporal chaos in the model. Lastly we investigate time delay effects. |
2306.16173 | Sedigheh Behrouzifar | Sedigheh Behrouzifar | Identifying downregulated hub genes and key pathways in HBV-related
hepatocellular carcinoma using systems biology approach | 28 pages,5 figures, 4 tables | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Chronic Hepatitis B (CHB) is an independent risk factor for hepatocellular
carcinoma (HCC) initiation without cirrhosis occurrence. Apart from the
favorable effects of some antiviral drugs following tumor resection on the
survival of HCC patients, the use of these agents is essential lifelong. Thus,
designing target-oriented therapeutic strategies to increase life
expectancy in HCC patients would be very important. The present study aimed to
identify downregulated hub genes and enriched pathways in HB-related HCC using
a systems biology-based approach. Microarray data of GSE121248 were downloaded
from the Gene Expression Omnibus (GEO) database. The differentially expressed genes
(DEGs) with the cut-off criteria of adjusted p < 0.05 and Log Fold-change (FC)
< -1.5 were selected. Then, the genes with the highest centrality were
detected. Finally, the prognostic values of the hub genes were assessed. Six
under-expressed hub genes with the highest interaction degree, Betweenness and
Eigenvector centrality were identified: IGF-1, PTGS2, PLG, HGF, ESR-1 and CYP2B6.
Among genes with high centrality, several genes, including CYP2C9, ESR-1, CXCL2,
CYP2C8, IGF-1, CYP3A4, CYP2E1, SERPINE-1 and PXR, were prognostic in HCC. The
important repressed pathways included metabolic pathways and the PI3K-Akt and
chemokine signaling pathways. The under-expression of several genes implicated
in metabolism, differentiation and chemotaxis might be a hallmark of the
progression of HCC that can be considered as diagnostic and therapeutic
targets.
| [
{
"created": "Wed, 28 Jun 2023 12:52:10 GMT",
"version": "v1"
}
] | 2023-06-29 | [
[
"Behrouzifar",
"Sedigheh",
""
]
] | Chronic Hepatitis B (CHB) is an independent risk factor for hepatocellular carcinoma (HCC) initiation without cirrhosis occurrence. Apart from the favorable effects of some antiviral drugs following tumor resection on the survival of HCC patients, the use of these agents is essential lifelong. Thus, designing target-oriented therapeutic strategies to increase life expectancy in HCC patients would be very important. The present study aimed to identify downregulated hub genes and enriched pathways in HB-related HCC using a systems biology-based approach. Microarray data of GSE121248 were downloaded from the Gene Expression Omnibus (GEO) database. The differentially expressed genes (DEGs) with the cut-off criteria of adjusted p < 0.05 and Log Fold-change (FC) < -1.5 were selected. Then, the genes with the highest centrality were detected. Finally, the prognostic values of the hub genes were assessed. Six under-expressed hub genes with the highest interaction degree, Betweenness and Eigenvector centrality were identified: IGF-1, PTGS2, PLG, HGF, ESR-1 and CYP2B6. Among genes with high centrality, several genes, including CYP2C9, ESR-1, CXCL2, CYP2C8, IGF-1, CYP3A4, CYP2E1, SERPINE-1 and PXR, were prognostic in HCC. The important repressed pathways included metabolic pathways and the PI3K-Akt and chemokine signaling pathways. The under-expression of several genes implicated in metabolism, differentiation and chemotaxis might be a hallmark of the progression of HCC that can be considered as diagnostic and therapeutic targets. |
0806.2636 | Tibor Antal | Tibor Antal, Hisashi Ohtsuki, John Wakeley, Peter D. Taylor, Martin A.
Nowak | Evolutionary game dynamics in phenotype space | version 2: minor changes; equivalent to final published version | PNAS 106, 8597 (2009) | 10.1073/pnas.0902528106 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evolutionary dynamics can be studied in well-mixed or structured populations.
Population structure typically arises from the heterogeneous distribution of
individuals in physical space or on social networks. Here we introduce a new
type of space to evolutionary game dynamics: phenotype space. The population is
well-mixed in the sense that everyone is equally likely to interact with
everyone else, but the behavioral strategies depend on distance in phenotype
space. Individuals might behave differently towards those who look similar or
dissimilar. Individuals mutate to nearby phenotypes. We study the `phenotypic
space walk' of populations. We present analytic calculations that bring
together ideas from coalescence theory and evolutionary game dynamics. As a
particular example, we investigate the evolution of cooperation in phenotype
space. We obtain a precise condition for natural selection to favor cooperators
over defectors: for a one-dimensional phenotype space and large population size
the critical benefit-to-cost ratio is given by b/c=1+2/sqrt{3}. We derive the
fundamental condition for any evolutionary game and explore higher dimensional
phenotype spaces.
| [
{
"created": "Mon, 16 Jun 2008 18:17:03 GMT",
"version": "v1"
},
{
"created": "Sat, 2 May 2009 15:31:22 GMT",
"version": "v2"
}
] | 2009-06-04 | [
[
"Antal",
"Tibor",
""
],
[
"Ohtsuki",
"Hisashi",
""
],
[
"Wakeley",
"John",
""
],
[
"Taylor",
"Peter D.",
""
],
[
"Nowak",
"Martin A.",
""
]
] | Evolutionary dynamics can be studied in well-mixed or structured populations. Population structure typically arises from the heterogeneous distribution of individuals in physical space or on social networks. Here we introduce a new type of space to evolutionary game dynamics: phenotype space. The population is well-mixed in the sense that everyone is equally likely to interact with everyone else, but the behavioral strategies depend on distance in phenotype space. Individuals might behave differently towards those who look similar or dissimilar. Individuals mutate to nearby phenotypes. We study the `phenotypic space walk' of populations. We present analytic calculations that bring together ideas from coalescence theory and evolutionary game dynamics. As a particular example, we investigate the evolution of cooperation in phenotype space. We obtain a precise condition for natural selection to favor cooperators over defectors: for a one-dimensional phenotype space and large population size the critical benefit-to-cost ratio is given by b/c=1+2/sqrt{3}. We derive the fundamental condition for any evolutionary game and explore higher dimensional phenotype spaces. |
1805.09101 | Viktor Stojkoski MSc | Viktor Stojkoski, Zoran Utkovski, Elisabeth Andre, Ljupco Kocarev | The Role of Multiplex Network Structure in Cooperation through
Generalized Reciprocity | Part of this work was presented at the 17th International Conference
on Autonomous Agents and Multi-Agent Systems (AAMAS 2018) | null | 10.1016/j.physa.2019.121805 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies suggest that the emergence of cooperative behavior can be
explained by generalized reciprocity, a behavioral mechanism based on the
principle of "help anyone if helped by someone". In complex systems, the
cooperative dynamics is largely determined by the network structure which
dictates the interactions among neighboring individuals. These interactions
often exhibit multidimensional features, either as relationships of different
types or temporal dynamics, both of which may be modeled as a "multiplex"
network. Against this background, here we advance the research on cooperation
models inspired by generalized reciprocity by considering a multidimensional
networked society. Our results reveal that a multiplex network structure may
enhance the role of generalized reciprocity in promoting cooperation, whereby
some of the network dimensions act as a latent support for the others. As a
result, generalized reciprocity forces the cooperative contributions of the
individuals to concentrate in the dimension which is most favorable for the
existence of cooperation.
| [
{
"created": "Wed, 23 May 2018 12:57:37 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Feb 2019 14:19:31 GMT",
"version": "v2"
},
{
"created": "Thu, 13 Jun 2019 14:54:31 GMT",
"version": "v3"
}
] | 2019-09-04 | [
[
"Stojkoski",
"Viktor",
""
],
[
"Utkovski",
"Zoran",
""
],
[
"Andre",
"Elisabeth",
""
],
[
"Kocarev",
"Ljupco",
""
]
] | Recent studies suggest that the emergence of cooperative behavior can be explained by generalized reciprocity, a behavioral mechanism based on the principle of "help anyone if helped by someone". In complex systems, the cooperative dynamics is largely determined by the network structure which dictates the interactions among neighboring individuals. These interactions often exhibit multidimensional features, either as relationships of different types or temporal dynamics, both of which may be modeled as a "multiplex" network. Against this background, here we advance the research on cooperation models inspired by generalized reciprocity by considering a multidimensional networked society. Our results reveal that a multiplex network structure may enhance the role of generalized reciprocity in promoting cooperation, whereby some of the network dimensions act as a latent support for the others. As a result, generalized reciprocity forces the cooperative contributions of the individuals to concentrate in the dimension which is most favorable for the existence of cooperation. |
q-bio/0311022 | Antonio Lamura | Antonio Lamura, Massimo Ladisa, Giovanni Nico, Dritan Siliqi | Solvent content of protein crystals from diffraction intensities by
Independent Component Analysis | 9 pages, 2 figures, 1 table | Physica A: Statistical Mechanics and its Applications, Volume 349,
Issues 3-4, 15 April 2005, Pages 571-581 | 10.1016/j.physa.2004.11.005 | null | q-bio.QM | null | An analysis of the protein content of several crystal forms of proteins has
been performed. We apply a new numerical technique, the Independent Component
Analysis (ICA), to determine the volume fraction of the asymmetric unit
occupied by the protein. This technique requires only the crystallographic data
of structure factors as input.
| [
{
"created": "Mon, 17 Nov 2003 12:07:39 GMT",
"version": "v1"
}
] | 2008-12-02 | [
[
"Lamura",
"Antonio",
""
],
[
"Ladisa",
"Massimo",
""
],
[
"Nico",
"Giovanni",
""
],
[
"Siliqi",
"Dritan",
""
]
] | An analysis of the protein content of several crystal forms of proteins has been performed. We apply a new numerical technique, the Independent Component Analysis (ICA), to determine the volume fraction of the asymmetric unit occupied by the protein. This technique requires only the crystallographic data of structure factors as input. |
1901.02883 | Sudeepto Bhattacharya Dr | Shashankaditya Upadhyay and Sudeepto Bhattacharya | A spectral graph theoretic study of predator-prey networks | arXiv admin note: substantial text overlap with arXiv:1811.01935 | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predator-prey networks originating from different aqueous and terrestrial
environments are compared to assess if the difference in environments of these
networks produces any significant difference in the structure of such
predator-prey networks. Spectral graph theory is used firstly to discriminate
between the structure of such predator-prey networks originating from aqueous
and terrestrial environments and secondly to establish that the difference
observed in the structure of networks originating from these two environments
is precisely due to the way edges are oriented in these networks and is not a
property of random networks. We use random projections in $\mathbb{R}^2$ and
$\mathbb{R}^3$ of the weighted spectral distribution (WSD) of the networks
belonging to the two classes, viz. aqueous and terrestrial, to differentiate
between the structure of these networks. The spectral theory of graph
non-randomness and relative non-randomness is used to establish the deviation
of structure of these networks from having a topology similar to random
networks. We thus establish the absence of a universal structural pattern across
predator-prey networks originating from different environments.
| [
{
"created": "Wed, 9 Jan 2019 11:39:15 GMT",
"version": "v1"
}
] | 2019-01-11 | [
[
"Upadhyay",
"Shashankaditya",
""
],
[
"Bhattacharya",
"Sudeepto",
""
]
] | Predator-prey networks originating from different aqueous and terrestrial environments are compared to assess if the difference in environments of these networks produces any significant difference in the structure of such predator-prey networks. Spectral graph theory is used firstly to discriminate between the structure of such predator-prey networks originating from aqueous and terrestrial environments and secondly to establish that the difference observed in the structure of networks originating from these two environments is precisely due to the way edges are oriented in these networks and is not a property of random networks. We use random projections in $\mathbb{R}^2$ and $\mathbb{R}^3$ of the weighted spectral distribution (WSD) of the networks belonging to the two classes, viz. aqueous and terrestrial, to differentiate between the structure of these networks. The spectral theory of graph non-randomness and relative non-randomness is used to establish the deviation of structure of these networks from having a topology similar to random networks. We thus establish the absence of a universal structural pattern across predator-prey networks originating from different environments. |
2208.01439 | Rohitash Chandra | Rohitash Chandra, Chaarvi Bansal, Mingyue Kang, Tom Blau, Vinti
Agarwal, Pranjal Singh, Laurence O. W. Wilson, Seshadri Vasan | Unsupervised machine learning framework for discriminating major
variants of concern during COVID-19 | null | PLOS ONE, 2023 | 10.1371/journal.pone.0285719 | null | q-bio.OT cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Due to the high mutation rate of the virus, the COVID-19 pandemic evolved
rapidly. Certain variants of the virus, such as Delta and Omicron, emerged with
altered viral properties leading to severe transmission and death rates. These
variants burdened the medical systems worldwide with a major impact on travel,
productivity, and the world economy. Unsupervised machine learning methods have
the ability to compress, characterize, and visualize unlabelled data. This
paper presents a framework that utilizes unsupervised machine learning methods
to discriminate and visualize the associations between major COVID-19 variants
based on their genome sequences. These methods comprise a combination of
selected dimensionality reduction and clustering techniques. The framework
processes the RNA sequences by performing a k-mer analysis on the data and
further visualizes and compares the results using selected dimensionality
reduction methods that include principal component analysis (PCA),
t-distributed stochastic neighbour embedding (t-SNE), and uniform manifold
approximation and projection (UMAP). Our framework also employs agglomerative
hierarchical clustering to visualize the mutational differences among major
variants of concern and country-wise mutational differences for selected
variants (Delta and Omicron) using dendrograms. We find that the
proposed framework can effectively distinguish between the major variants and
has the potential to identify emerging variants in the future.
| [
{
"created": "Mon, 1 Aug 2022 13:02:28 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Oct 2022 01:59:58 GMT",
"version": "v2"
},
{
"created": "Thu, 25 May 2023 22:28:28 GMT",
"version": "v3"
}
] | 2023-05-29 | [
[
"Chandra",
"Rohitash",
""
],
[
"Bansal",
"Chaarvi",
""
],
[
"Kang",
"Mingyue",
""
],
[
"Blau",
"Tom",
""
],
[
"Agarwal",
"Vinti",
""
],
[
"Singh",
"Pranjal",
""
],
[
"Wilson",
"Laurence O. W.",
""
],
[
"Vasan",
"Seshadri",
""
]
] | Due to the high mutation rate of the virus, the COVID-19 pandemic evolved rapidly. Certain variants of the virus, such as Delta and Omicron, emerged with altered viral properties leading to severe transmission and death rates. These variants burdened the medical systems worldwide with a major impact on travel, productivity, and the world economy. Unsupervised machine learning methods have the ability to compress, characterize, and visualize unlabelled data. This paper presents a framework that utilizes unsupervised machine learning methods to discriminate and visualize the associations between major COVID-19 variants based on their genome sequences. These methods comprise a combination of selected dimensionality reduction and clustering techniques. The framework processes the RNA sequences by performing a k-mer analysis on the data and further visualizes and compares the results using selected dimensionality reduction methods that include principal component analysis (PCA), t-distributed stochastic neighbour embedding (t-SNE), and uniform manifold approximation and projection (UMAP). Our framework also employs agglomerative hierarchical clustering to visualize the mutational differences among major variants of concern and country-wise mutational differences for selected variants (Delta and Omicron) using dendrograms. We find that the proposed framework can effectively distinguish between the major variants and has the potential to identify emerging variants in the future. |
1308.5778 | Manoj Gopalakrishnan | Jemseena V. and Manoj Gopalakrishnan (Department of Physics, IIT
Madras) | Microtubule catastrophe from protofilament dynamics | 17 pages, minor changes in text and fig3, accepted version in Phys.
Rev. E | Phys. Rev. E 88, 032717 (2013) | 10.1103/PhysRevE.88.032717 | null | q-bio.SC cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The disappearance of the guanosine triphosphate (GTP)-tubulin cap is widely
believed to be the forerunner event for the growth-shrinkage transition
(`catastrophe') in microtubule filaments in eukaryotic cells. We study a
discrete version of a stochastic model of the GTP cap dynamics, originally
proposed by Flyvbjerg, Holy and Leibler (Flyvbjerg, Holy and Leibler, Phys.
Rev. Lett. 73, 2372, 1994). Our model includes both spontaneous and vectorial
hydrolysis, as well as dissociation of a non-hydrolyzed dimer from the filament
after incorporation. In the first part of the paper, we apply this model to a
single protofilament of a microtubule. A catastrophe transition is defined for
each protofilament, similar to the earlier one-dimensional models, the
frequency of occurrence of which is then calculated under various conditions,
but without explicit assumption of steady state conditions. Using a
perturbative approach, we show that the leading asymptotic behavior of the
protofilament catastrophe in the limit of large growth velocities is remarkably
similar across different models. In the second part of the paper, we extend our
analysis to the entire filament by making a conjecture that a minimum number of
such transitions are required to occur for the onset of microtubule
catastrophe. The frequency of microtubule catastrophe is then determined using
numerical simulations, and compared with analytical/semi-analytical estimates
made under steady state/quasi-steady state assumptions respectively for the
protofilament dynamics. A few relevant experimental results are analyzed in
detail, and compared with predictions from the model. Our results indicate that
loss of GTP cap in 2-3 protofilaments is necessary to trigger catastrophe in a
microtubule.
| [
{
"created": "Tue, 27 Aug 2013 07:34:59 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Sep 2013 04:26:28 GMT",
"version": "v2"
}
] | 2015-06-17 | [
[
"V.",
"Jemseena",
"",
"Department of Physics, IIT\n Madras"
],
[
"Gopalakrishnan",
"Manoj",
"",
"Department of Physics, IIT\n Madras"
]
] | The disappearance of the guanosine triphosphate (GTP)-tubulin cap is widely believed to be the forerunner event for the growth-shrinkage transition (`catastrophe') in microtubule filaments in eukaryotic cells. We study a discrete version of a stochastic model of the GTP cap dynamics, originally proposed by Flyvbjerg, Holy and Leibler (Flyvbjerg, Holy and Leibler, Phys. Rev. Lett. 73, 2372, 1994). Our model includes both spontaneous and vectorial hydrolysis, as well as dissociation of a non-hydrolyzed dimer from the filament after incorporation. In the first part of the paper, we apply this model to a single protofilament of a microtubule. A catastrophe transition is defined for each protofilament, similar to the earlier one-dimensional models, the frequency of occurrence of which is then calculated under various conditions, but without explicit assumption of steady state conditions. Using a perturbative approach, we show that the leading asymptotic behavior of the protofilament catastrophe in the limit of large growth velocities is remarkably similar across different models. In the second part of the paper, we extend our analysis to the entire filament by making a conjecture that a minimum number of such transitions are required to occur for the onset of microtubule catastrophe. The frequency of microtubule catastrophe is then determined using numerical simulations, and compared with analytical/semi-analytical estimates made under steady state/quasi-steady state assumptions respectively for the protofilament dynamics. A few relevant experimental results are analyzed in detail, and compared with predictions from the model. Our results indicate that loss of GTP cap in 2-3 protofilaments is necessary to trigger catastrophe in a microtubule. |
2003.12454 | Ezequiel Alvarez | Ezequiel Alvarez (ICAS, Argentina), Federico Lamagna (CAB, Argentina)
and Manuel Szewc (ICAS, Argentina) | A Machine Learning alternative to placebo-controlled clinical trials
upon new diseases: A primer | Work originally aimed for COVID-19 outbreak. All scripts are open
source in github.com/ManuelSzewc/ML4DT/ | null | null | ICAS 048 | q-bio.QM cs.LG q-bio.PE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The appearance of a new dangerous and contagious disease requires the
development of a drug therapy faster than what is foreseen by usual mechanisms.
Many drug therapy developments consist in investigating through different
clinical trials the effects of different specific drug combinations by
delivering it into a test group of ill patients, meanwhile a placebo treatment
is delivered to the remaining ill patients, known as the control group. We
compare the above technique to a new technique in which all patients receive a
different and reasonable combination of drugs and use this outcome to feed a
Neural Network. By averaging out fluctuations and recognizing different patient
features, the Neural Network learns the pattern that connects the patients
initial state to the outcome of the treatments and therefore can predict the
best drug therapy better than the above method. In contrast to many available
works, we do not study any details of drug composition or interaction, but
instead pose and solve the problem from a phenomenological point of view, which
allows us to compare both methods. Although the conclusion is reached through
mathematical modeling and is stable under any reasonable model, this is a
proof-of-concept that should be examined by other areas of expertise before
confronting a real scenario. All calculations, tools and scripts have been made
open source for the community to test, modify or expand it. Finally, it should
be mentioned that, although the results presented here are in the context of a
new disease in medical sciences, these are useful for any field that requires
an experimental technique with a control group.
| [
{
"created": "Thu, 26 Mar 2020 17:53:10 GMT",
"version": "v1"
}
] | 2020-03-31 | [
[
"Alvarez",
"Ezequiel",
"",
"ICAS, Argentina"
],
[
"Lamagna",
"Federico",
"",
"CAB, Argentina"
],
[
"Szewc",
"Manuel",
"",
"ICAS, Argentina"
]
] | The appearance of a new dangerous and contagious disease requires the development of a drug therapy faster than what is foreseen by usual mechanisms. Many drug therapy developments consist of investigating, through different clinical trials, the effects of different specific drug combinations by delivering them to a test group of ill patients, while a placebo treatment is delivered to the remaining ill patients, known as the control group. We compare the above technique to a new technique in which all patients receive a different and reasonable combination of drugs and use this outcome to feed a Neural Network. By averaging out fluctuations and recognizing different patient features, the Neural Network learns the pattern that connects the patients' initial state to the outcome of the treatments and therefore can predict the best drug therapy better than the above method. In contrast to many available works, we do not study any details of drug composition or interaction, but instead pose and solve the problem from a phenomenological point of view, which allows us to compare both methods. Although the conclusion is reached through mathematical modeling and is stable under any reasonable model, this is a proof-of-concept that should be examined by other areas of expertise before confronting a real scenario. All calculations, tools and scripts have been made open source for the community to test, modify or expand it. Finally, it should be mentioned that, although the results presented here are in the context of a new disease in medical sciences, these are useful for any field that requires an experimental technique with a control group. |
2008.00053 | Jacob Adamczyk | Jacob Adamczyk | Neural Network Degeneration and its Relationship to the Brain | null | null | null | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This report discusses the application of neural networks (NNs) as small
segments of the brain. The networks representing the biological connectome are
altered both spatially and temporally. The degradation techniques applied here
are "weight degradation", "weight scrambling", and variable activation
function. These methods aim to shed light on the study of neurodegenerative
diseases such as Alzheimer's, Huntington's and Parkinson's disease as well as
strokes and brain tumors disrupting the flow of information in the brain's
network. Fundamental insights into memory loss and generalized learning
dysfunction are gained by monitoring the network's error function during
network degradation. The biological significance of each facet is also
discussed.
| [
{
"created": "Fri, 31 Jul 2020 19:42:23 GMT",
"version": "v1"
}
] | 2020-08-04 | [
[
"Adamczyk",
"Jacob",
""
]
] | This report discusses the application of neural networks (NNs) as small segments of the brain. The networks representing the biological connectome are altered both spatially and temporally. The degradation techniques applied here are "weight degradation", "weight scrambling", and variable activation function. These methods aim to shine light on the study of neurodegenerative diseases such as Alzheimer's, Huntington's and Parkinson's disease as well as strokes and brain tumors disrupting the flow of information in the brain's network. Fundamental insights to memory loss and generalized learning dysfunction are gained by monitoring the network's error function during network degradation. The biological significance of each facet is also discussed. |
2209.09122 | Khayrul Islam | Khayrul Islam, Meghdad Razizadeh and Yaling Liu | Coarse-Grained Molecular Simulation of Extracellular Vesicles Squeezing
for Drug Loading | null | https://pubs.rsc.org/en/content/articlelanding/2023/cp/d3cp00387f | 10.1039/D3CP00387F | null | q-bio.BM | http://creativecommons.org/licenses/by-sa/4.0/ | In recent years, extracellular vesicles (EVs) have become promising
carriers for next-generation drug delivery platforms. Effective loading of
exogenous cargoes without compromising the EV membrane is a major challenge.
Rapid squeezing through nanofluidic channels is a widely used approach to load
exogenous cargoes into the EV through nanopores generated temporarily on the
membrane. However, the exact mechanism and dynamics of nanopore opening, as
well as cargo loading through the nanopores during the squeezing process,
remain unknown and are impossible to visualize or quantify experimentally due
to the small size of the EV and the fast transient process. This paper
develops a systematic algorithm to simulate nanopore formation and predict
drug loading during EV squeezing by leveraging the power of coarse-grained
(CG) molecular dynamics simulations combined with fluid dynamics. The EV CG
beads are coupled with an implicit fluctuating lattice Boltzmann solvent.
Effects of EV properties and various squeezing-test parameters, such as EV
size, flow velocity, and channel width and length, on pore formation and
drug-loading efficiency are analyzed. Based on the simulation results, a phase
diagram is provided as design guidance for nanochannel geometry and squeezing
velocity to generate pores on the membrane without damaging the EV. This
method can be utilized to optimize the nanofluidic device configuration and
flow setup to obtain the desired drug loading into EVs.
| [
{
"created": "Mon, 19 Sep 2022 15:42:32 GMT",
"version": "v1"
},
{
"created": "Sun, 26 Mar 2023 17:16:20 GMT",
"version": "v2"
},
{
"created": "Thu, 13 Apr 2023 17:12:50 GMT",
"version": "v3"
}
] | 2023-04-14 | [
[
"Islam",
"Khayrul",
""
],
[
"Razizadeh",
"Meghdad",
""
],
[
"Liu",
"Yaling",
""
]
] | In recent years, extracellular vesicles such have become promising carriers as the next-generation drug delivery platforms. Effective loading of exogenous cargos without compromising the extracellular vesicle membrane is a major challenge. Rapid squeezing through nanofluidic channels is a widely used approach to load exogenous cargoes into the EV through the nanopores generated temporarily on the membrane. However, the exact mechanism and dynamics of nanopores opening, as well as cargo loading through nanopores during the squeezing process remains unknown and is impossible to be visualized or quantified experimentally due to the small size of the EV and the fast transient process. This paper developed a systemic algorithm to simulate nanopore formation and predict drug loading during extracellular vesicle (EV) squeezing by leveraging the power of coarse-grain (CG) molecular dynamics simulations with fluid dynamics. The EV CG beads are coupled with implicit Fluctuating Lattice Boltzmann solvent. Effects of EV property and various squeezing test parameters, such as EV size, flow velocity, channel width, and length, on pore formation and drug loading efficiency are analyzed. Based on the simulation results, a phase diagram is provided as a design guidance for nanochannel geometry and squeezing velocity to generate pores on membrane without damaging the EV. This method can be utilized to optimize the nanofluidic device configuration and flow setup to obtain desired drug loading into EVs |
1310.3217 | Jesus M Cortes | Ver\'onica M\"aki-Marttunen, Ibai Diez, Jesus M. Cortes, Dante R.
Chialvo and Mirta Villarreal | Disruption of transfer entropy and inter-hemispheric brain functional
connectivity in patients with disorder of consciousness | 25 pages; 4 figures; 3 tables; 1 supplementary figure; 4
supplementary tables; accepted for publication in Frontiers in
Neuroinformatics | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Severe traumatic brain injury can lead to disorders of consciousness (DOC)
characterized by deficits in conscious awareness and cognitive impairment,
including coma, vegetative state, the minimally conscious state, and locked-in
syndrome. It is of crucial importance to find objective markers that can account
for the large-scale disturbances of brain function to help the diagnosis and
prognosis of DOC patients and eventually the prediction of the coma outcome.
Following recent studies suggesting that the functional organization of brain
networks can be altered in comatose patients, this work analyzes brain
functional connectivity (FC) networks obtained from resting-state functional
magnetic resonance imaging (rs-fMRI). Two approaches are used to estimate the
FC: the Partial Correlation (PC) and the Transfer Entropy (TE). Both the PC and
the TE show significant statistical differences between the group of patients
and control subjects; in brief, the inter-hemispheric PC and the
intra-hemispheric TE account for such differences. Overall, these results
suggest two possible rs-fMRI markers useful to design new strategies for the
management and neuropsychological rehabilitation of DOC patients.
| [
{
"created": "Fri, 11 Oct 2013 17:39:55 GMT",
"version": "v1"
}
] | 2013-10-14 | [
[
"Mäki-Marttunen",
"Verónica",
""
],
[
"Diez",
"Ibai",
""
],
[
"Cortes",
"Jesus M.",
""
],
[
"Chialvo",
"Dante R.",
""
],
[
"Villarreal",
"Mirta",
""
]
] | Severe traumatic brain injury can lead to disorders of consciousness (DOC) characterized by deficit in conscious awareness and cognitive impairment including coma, vegetative state, minimally consciousness, and lock-in syndrome. Of crucial importance is to find objective markers that can account for the large-scale disturbances of brain function to help the diagnosis and prognosis of DOC patients and eventually the prediction of the coma outcome. Following recent studies suggesting that the functional organization of brain networks can be altered in comatose patients, this work analyzes brain functional connectivity (FC) networks obtained from resting-state functional magnetic resonance imaging (rs-fMRI). Two approaches are used to estimate the FC: the Partial Correlation (PC) and the Transfer Entropy (TE). Both the PC and the TE show significant statistical differences between the group of patients and control subjects; in brief, the inter-hemispheric PC and the intra-hemispheric TE account for such differences. Overall, these results suggest two possible rs-fMRI markers useful to design new strategies for the management and neuropsychological rehabilitation of DOC patients. |
2301.02660 | Yuxin Yang | Ruyi Qu, Qiuji Yang, Yingying Bi, Jiajing Cheng, Mengna He, Xin Wei,
Yiqi Yuan, Yuxin Yang and Jinlong Qin | Decreased serum vitamin D level as a prognostic marker in patients with
COVID-19 | null | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Background: The corona virus disease 2019 (COVID-19) pandemic, which is
caused by severe acute respiratory syndrome coronavirus 2, still causes
localized outbreaks and has resulted in a high rate of infection and severe
disease in
older patients with comorbidities. The vitamin D status of the population has
been found to be an important factor that could influence the outcome of COVID-19.
However, whether vitamin D can lessen the symptoms or severity of COVID-19
still remains controversial. Methods: A total of 719 patients with confirmed
COVID-19 were enrolled retrospectively in this study from April 13 to June 6,
2022 at Shanghai Forth People's Hospital. The circulating levels of 25(OH)D3,
inflammatory factors, and clinical parameters were assayed. Time to viral RNA
clearance (TVRC), classification and prognosis of COVID-19 were used to
evaluate the severity of COVID-19 infection. Results: The median age was 76
years (interquartile range, IQR, 64.5-84.6), 44.1% of patients were male, and
the TVRC was 11 days (IQR, 7-16) in this population. The median level of
25(OH)D3 was 27.15 (IQR, 19.31-38.89) nmol/L. Patients with lower serum
25(OH)D3 had prolonged time to viral clearance, more obvious inflammatory
response, more severe respiratory symptoms and higher risks of impaired hepatic
and renal function. Multiple regression analyses revealed that serum 25(OH)D3
level was negatively associated with TVRC independently. ROC curve showed the
serum vitamin D level could predict the severity classification and prognosis
of COVID-19 significantly. Conclusions: Serum 25(OH)D3 level is independently
associated with the severity of COVID-19 in the elderly, and it could be used
as a
predictor of the severity of COVID-19. In addition, supplementation with
vitamin D might provide beneficial effects in old patients with COVID-19.
| [
{
"created": "Sun, 25 Dec 2022 16:16:09 GMT",
"version": "v1"
}
] | 2023-01-10 | [
[
"Qu",
"Ruyi",
""
],
[
"Yang",
"Qiuji",
""
],
[
"Bi",
"Yingying",
""
],
[
"Cheng",
"Jiajing",
""
],
[
"He",
"Mengna",
""
],
[
"Wei",
"Xin",
""
],
[
"Yuan",
"Yiqi",
""
],
[
"Yang",
"Yuxin",
""
],
[
"Qin",
"Jinlong",
""
]
] | Background: The corona virus disease 2019 (COVID-19) pandemic, which is caused by severe acute respiratory syndrome coronavirus 2, is still localized outbreak and has resulted in a high rate of infection and severe disease in older patients with comorbidities. The vitamin D status of the population has been found to be an important factor that could influence outcome of COVID-19. However, whether vitamin D can lessen the symptoms or severity of COVID-19 still remains controversial. Methods: A total of 719 patients with confirmed COVID-19 were enrolled retrospectively in this study from April 13 to June 6, 2022 at Shanghai Forth People's Hospital. The circulating levels of 25(OH)D3, inflammatory factors, and clinical parameters were assayed. Time to viral RNA clearance (TVRC), classification and prognosis of COVID-19 were used to evaluate the severity of COVID-19 infection. Results: The median age was 76 years (interquartile range, IQR, 64.5-84.6), 44.1% of patients were male, and the TVRC was 11 days (IQR, 7-16) in this population. The median level of 25(OH)D3 was 27.15 (IQR, 19.31-38.89) nmol/L. Patients with lower serum 25(OH)D3 had prolonged time to viral clearance, more obvious inflammatory response, more severe respiratory symptoms and higher risks of impaired hepatic and renal function. Multiple regression analyses revealed that serum 25(OH)D3 level was negatively associated with TVRC independently. ROC curve showed the serum vitamin D level could predict the severity classification and prognosis of COVID-19 significantly.Conclusions: Serum 25(OH)D3 level is independently associated with the severity of COVID-19 in elderly, and it could be used as a predictor of the severity of COVID-19. In addition, supplementation with vitamin D might provide beneficial effects in old patients with COVID-19. |
2406.11121 | Li Chen | Anhui Sheng, Jing Zhang, Guozhong Zheng, Jiqiang Zhang, Weiran Cai,
and Li Chen | Catalytic evolution of cooperation in a population with behavioural
bimodality | 11 pages, 12 figure. Comments are appreciated | null | null | null | q-bio.PE cond-mat.dis-nn nlin.AO | http://creativecommons.org/licenses/by/4.0/ | The remarkable adaptability of humans in response to complex environments is
often demonstrated by the context-dependent adoption of different behavioral
modes. However, the existing game-theoretic studies mostly focus on the
single-mode assumption, and the impact of this behavioral multimodality on the
evolution of cooperation remains largely unknown. Here, we study how
cooperation evolves in a population with two behavioral modes. Specifically, we
incorporate Q-learning and Tit-for-Tat (TFT) rules into our toy model, where
the prisoner's dilemma game is played, and we investigate the impact of the mode
mixture on the evolution of cooperation. While players in Q-learning mode aim
to maximize their accumulated payoffs, players within TFT mode repeat what
their neighbors have done to them. In a structured mixing implementation where
the updating rule is fixed for each individual, we find that the mode mixture
greatly promotes the overall cooperation prevalence. The promotion is even more
significant in the probabilistic mixing, where players randomly select one of
the two rules at each step. Finally, this promotion is robust when players are
allowed to adaptively choose the two modes by real-time comparison. In all
three scenarios, players within the Q-learning mode act as catalysts that make
the TFT players more cooperative, and as a result drive the whole
population to be highly cooperative. The analysis of Q-tables explains the
underlying mechanism of cooperation promotion, which captures the
``psychological evolution'' in the players' minds. Our study indicates that
the variety of
behavioral modes is non-negligible, and could be crucial to clarify the
emergence of cooperation in the real world.
| [
{
"created": "Mon, 17 Jun 2024 00:44:27 GMT",
"version": "v1"
}
] | 2024-06-18 | [
[
"Sheng",
"Anhui",
""
],
[
"Zhang",
"Jing",
""
],
[
"Zheng",
"Guozhong",
""
],
[
"Zhang",
"Jiqiang",
""
],
[
"Cai",
"Weiran",
""
],
[
"Chen",
"Li",
""
]
] | The remarkable adaptability of humans in response to complex environments is often demonstrated by the context-dependent adoption of different behavioral modes. However, the existing game-theoretic studies mostly focus on the single-mode assumption, and the impact of this behavioral multimodality on the evolution of cooperation remains largely unknown. Here, we study how cooperation evolves in a population with two behavioral modes. Specifically, we incorporate Q-learning and Tit-for-Tat (TFT) rules into our toy model, where prisoner's dilemma game is played and we investigate the impact of the mode mixture on the evolution of cooperation. While players in Q-learning mode aim to maximize their accumulated payoffs, players within TFT mode repeat what their neighbors have done to them. In a structured mixing implementation where the updating rule is fixed for each individual, we find that the mode mixture greatly promotes the overall cooperation prevalence. The promotion is even more significant in the probabilistic mixing, where players randomly select one of the two rules at each step. Finally, this promotion is robust when players are allowed to adaptively choose the two modes by real-time comparison. In all three scenarios, players within the Q-learning mode act as catalyzer that turns the TFT players to be more cooperative, and as a result drive the whole population to be highly cooperative. The analysis of Q-tables explains the underlying mechanism of cooperation promotion, which captures the ``psychologic evolution" in the players' mind. Our study indicates that the variety of behavioral modes is non-negligible, and could be crucial to clarify the emergence of cooperation in the real world. |
1710.04195 | Philip Greulich | Philip Greulich, Benjamin D. Simons | Extreme value statistics of mutation accumulation in renewing cell
populations | 5 pages, 4 figures, under review at Physical Review Letters | Phys. Rev. E 98, 050401 (2018) | 10.1103/PhysRevE.98.050401 | null | q-bio.TO physics.bio-ph physics.data-an q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The emergence of a predominant phenotype within a cell population is often
triggered by a rare accumulation of DNA mutations in a single cell. For
example, tumors may be initiated by a single cell in which multiple mutations
cooperate to bypass a cell's defense mechanisms. The risk of such an event is
thus determined by the extremal accumulation of mutations across tissue cells.
To address this risk, we study the statistics of the maximum mutation numbers
in a generic, but tested, model of a renewing cell population. By drawing an
analogy between the genealogy of a cell population and the theory of branching
random walks, we obtain analytical estimates for the probability of exceeding a
threshold number of mutations and determine how the statistical distribution of
maximum mutation numbers scales with age and cell population size.
| [
{
"created": "Wed, 11 Oct 2017 17:44:02 GMT",
"version": "v1"
}
] | 2018-11-21 | [
[
"Greulich",
"Philip",
""
],
[
"Simons",
"Benjamin D.",
""
]
] | The emergence of a predominant phenotype within a cell population is often triggered by a rare accumulation of DNA mutations in a single cell. For example, tumors may be initiated by a single cell in which multiple mutations cooperate to bypass a cell's defense mechanisms. The risk of such an event is thus determined by the extremal accumulation of mutations across tissue cells. To address this risk, we study the statistics of the maximum mutation numbers in a generic, but tested, model of a renewing cell population. By drawing an analogy between the genealogy of a cell population and the theory of branching random walks, we obtain analytical estimates for the probability of exceeding a threshold number of mutations and determine how the statistical distribution of maximum mutation numbers scales with age and cell population size. |
1809.06221 | Mustafa Radha | Mustafa Radha, Pedro Fonseca, Marco Ross, Andreas Cerny, Peter
Anderer, Ronald M. Aarts | LSTM knowledge transfer for HRV-based sleep staging | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated sleep stage classification using heart-rate variability is an
active field of research. In this work limitations of the current
state-of-the-art are addressed through the use of deep learning techniques and
their efficacy is demonstrated. First, a temporal model is proposed for the
inference of sleep stages from electrocardiography using a deep long
short-term memory (LSTM) classifier, and it is shown that this model outperforms
previous approaches which were often limited to non-temporal or Markovian
classifiers on a comprehensive benchmark data set (292 participants, 541214
samples) comprising a wide range of ages and pathological profiles, achieving a
Cohen's $\kappa$ of $0.61\pm0.16$ and accuracy of $76.30\pm10.17$ annotated
according to the Rechtschaffen & Kales annotation standard.
Subsequently, it is demonstrated how knowledge learned on this large
benchmark data set can be re-used through transfer learning for the
classification of photoplethysmography (PPG) data. This is done using a smaller
data set (60 participants, 91479 samples) that is annotated with the more
recent American Association of Sleep Medicine annotation standard, achieving a
Cohen's $\kappa$ of $0.63\pm0.13$ and accuracy of $74.65\pm8.63$ for
wrist-mounted PPG-based sleep stage classification, higher than any previously
reported performance using this sensor modality. This demonstrates the
feasibility of knowledge transfer in sleep staging to adapt models for new
sensor modalities as well as different annotation strategies.
| [
{
"created": "Wed, 12 Sep 2018 10:01:38 GMT",
"version": "v1"
}
] | 2018-09-18 | [
[
"Radha",
"Mustafa",
""
],
[
"Fonseca",
"Pedro",
""
],
[
"Ross",
"Marco",
""
],
[
"Cerny",
"Andreas",
""
],
[
"Anderer",
"Peter",
""
],
[
"Aarts",
"Ronald M.",
""
]
] | Automated sleep stage classification using heart-rate variability is an active field of research. In this work limitations of the current state-of-the-art are addressed through the use of deep learning techniques and their efficacy is demonstrated. First, a temporal model is proposed for the inference of sleep stages from electrocardiography using a deep long- and short-term (LSTM) classifier and it is shown that this model outperforms previous approaches which were often limited to non-temporal or Markovian classifiers on a comprehensive benchmark data set (292 participants, 541214 samples) comprising a wide range of ages and pathological profiles, achieving a Cohen's $\kappa$ of $0.61\pm0.16$ and accuracy of $76.30\pm10.17$ annotated according to the Rechtschaffen & Kales annotation standard. Subsequently, it is demonstrated how knowledge learned on this large benchmark data set can be re-used through transfer learning for the classification of photoplethysmography (PPG) data. This is done using a smaller data set (60 participants, 91479 samples) that is annotated with the more recent American Association of Sleep Medicine annotation standard, achieving a Cohen's $\kappa$ of $0.63\pm0.13$ and accuracy of $74.65\pm8.63$ for wrist-mounted PPG-based sleep stage classification, higher than any previously reported performance using this sensor modality. This demonstrates the feasibility of knowledge transfer in sleep staging to adapt models for new sensor modalities as well as different annotation strategies. |
2008.05263 | Adam Svahn | Adam J. Svahn and Mikhail Prokopenko | An Ansatz for computational undecidability in RNA automata | null | null | null | null | q-bio.QM cs.FL q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this Ansatz we consider theoretical constructions of RNA polymers into
automata, a form of computational structure. The basis for transitions in our
automata are plausible RNA enzymes that may perform ligation or cleavage.
Limited to these operations, we construct RNA automata of increasing
complexity; from the Finite Automaton (RNA-FA) to the Turing Machine equivalent
2-stack PDA (RNA-2PDA) and the universal RNA-UPDA. For each automaton we show
how the enzymatic reactions match the logical operations of the RNA automaton.
A critical theme of the Ansatz is the self-reference in RNA automata
configurations which exploits the program-data duality but results in
computational undecidability. We describe how computational undecidability is
exemplified in the self-referential Liar paradox that places a boundary on a
logical system, and by construction, any RNA automata. We argue that an
expansion of the evolutionary space for RNA-2PDA automata can be interpreted as
a hierarchical resolution of computational undecidability by a meta-system
(akin to Turing's oracle), in a continual process analogous to Turing's ordinal
logics and Post's extensible recursively generated logics. On this basis, we
put forward the hypothesis that the resolution of undecidable configurations in
RNA automata represents a novelty generation mechanism and propose avenues for
future investigation of biological automata.
| [
{
"created": "Wed, 12 Aug 2020 12:16:07 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Sep 2020 12:22:25 GMT",
"version": "v2"
},
{
"created": "Wed, 18 Aug 2021 06:08:32 GMT",
"version": "v3"
},
{
"created": "Mon, 23 May 2022 06:54:04 GMT",
"version": "v4"
}
] | 2022-05-24 | [
[
"Svahn",
"Adam J.",
""
],
[
"Prokopenko",
"Mikhail",
""
]
] | In this Ansatz we consider theoretical constructions of RNA polymers into automata, a form of computational structure. The basis for transitions in our automata are plausible RNA enzymes that may perform ligation or cleavage. Limited to these operations, we construct RNA automata of increasing complexity; from the Finite Automaton (RNA-FA) to the Turing Machine equivalent 2-stack PDA (RNA-2PDA) and the universal RNA-UPDA. For each automaton we show how the enzymatic reactions match the logical operations of the RNA automaton. A critical theme of the Ansatz is the self-reference in RNA automata configurations which exploits the program-data duality but results in computational undecidability. We describe how computational undecidability is exemplified in the self-referential Liar paradox that places a boundary on a logical system, and by construction, any RNA automata. We argue that an expansion of the evolutionary space for RNA-2PDA automata can be interpreted as a hierarchical resolution of computational undecidability by a meta-system (akin to Turing's oracle), in a continual process analogous to Turing's ordinal logics and Post's extensible recursively generated logics. On this basis, we put forward the hypothesis that the resolution of undecidable configurations in RNA automata represent a novelty generation mechanism and propose avenues for future investigation of biological automata. |
1505.06603 | Alexander K. Vidybida | A. K. Vidybida | Simulating leaky integrate and fire neuron with integers | 11 pages. The testing program used is included as ancillary material | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The leaky integrate and fire (LIF) neuron represents a standard neuronal model
used for numerical simulations. The leakage is implemented in the model as
exponential decay of trans-membrane voltage towards its resting value. This
makes the use of machine floating point numbers in the course of simulation
inevitable. It is known that machine floating point arithmetic is subject to
small inaccuracies, which prevent exact comparison of floating point
quantities. In particular, one cannot correctly decide whether two temporally
separate states of a simulated system composed of LIF neurons are exactly
identical. However, decisions of this type are necessary, e.g. to identify
periodic dynamical regimes in a reverberating network. Here we offer a simulation
paradigm of a LIF neuron, in which neuronal states are described by whole
numbers. Within this paradigm, the LIF neuron behaves exactly the same way as
does the standard floating point simulated LIF, although exact comparison of
states becomes correctly defined.
| [
{
"created": "Mon, 25 May 2015 12:03:40 GMT",
"version": "v1"
}
] | 2015-05-26 | [
[
"Vidybida",
"A. K.",
""
]
] | The leaky integrate and fire (LIF) neuron represents standard neuronal model used for numerical simulations. The leakage is implemented in the model as exponential decay of trans-membrane voltage towards its resting value. This makes inevitable the usage of machine floating point numbers in the course of simulation. It is known that machine floating point arithmetic is subjected to small inaccuracies, which prevent from exact comparison of floating point quantities. In particular, it is incorrect to decide whether two separate in time states of a simulated system composed of LIF neurons are exactly identical. However, decision of this type is necessary, e.g. to figure periodic dynamical regimes in a reverberating network. Here we offer a simulation paradigm of a LIF neuron, in which neuronal states are described by whole numbers. Within this paradigm, the LIF neuron behaves exactly the same way as does the standard floating point simulated LIF, although exact comparison of states becomes correctly defined. |
1702.03579 | Sayan Biswas | Sayan Biswas | Autonomous line follower robot controlled by cell culture | 6 pages, 10 Figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuro-electronic hybrids promise to provide a model architecture for
computing. Such a computing architecture could help to bring the power of
biological connections and electronic circuits together for a better computing
paradigm. Such paradigms for solving real-world tasks with higher accuracy are
in demand now. A robot, as an autonomous system, is modeled here to navigate
by following a particular line. Sensory inputs from the robot are directed to
the cell culture, in response to which motor commands are generated by the
culture.
| [
{
"created": "Sun, 12 Feb 2017 21:18:10 GMT",
"version": "v1"
}
] | 2017-02-14 | [
[
"Biswas",
"Sayan",
""
]
] | Neuro-electronic hybrid promises to bring up a model architecture for computing. Such computing architecture could help to bring the power of biological connection and electronic circuits together for better computing paradigm. Such paradigms for solving real world tasks with higher accuracy is on demand now. A robot as a autonomous system is modeled here to navigate following a particular line. Sensory inputs from robot is directed as input to the cell culture in response to which motor commands are generated from the culture. |
2204.10811 | Taylor Kessinger | Taylor A. Kessinger, Corina E. Tarnita, and Joshua B. Plotkin | Evolution of social norms for moral judgment | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Reputations provide a powerful mechanism to sustain cooperation, as
individuals cooperate with those of good social standing. But how should moral
reputations be updated as we observe social behavior, and when will a
population converge on a common norm of moral assessment? Here we develop a
mathematical model of cooperation conditioned on reputations, for a population
that is stratified into groups. Each group may subscribe to a different social
norm for assessing reputations, and so norms compete as individuals choose to
move from one group to another. We show that a group initially comprising a
minority of the population may nonetheless overtake the entire
population--especially if it adopts the Stern Judging norm, which assigns a bad
reputation to individuals who cooperate with those of bad standing. When
individuals do not change group membership, stratifying reputation information
into groups tends to destabilize cooperation, unless individuals are strongly
insular and favor in-group social interactions. We discuss the implications of
our results for the structure of information flow in a population and the
evolution of social norms of moral judgment.
| [
{
"created": "Fri, 22 Apr 2022 16:44:58 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Oct 2022 23:15:12 GMT",
"version": "v2"
}
] | 2022-10-26 | [
[
"Kessinger",
"Taylor A.",
""
],
[
"Tarnita",
"Corina E.",
""
],
[
"Plotkin",
"Joshua B.",
""
]
] | Reputations provide a powerful mechanism to sustain cooperation, as individuals cooperate with those of good social standing. But how should moral reputations be updated as we observe social behavior, and when will a population converge on a common norm of moral assessment? Here we develop a mathematical model of cooperation conditioned on reputations, for a population that is stratified into groups. Each group may subscribe to a different social norm for assessing reputations, and so norms compete as individuals choose to move from one group to another. We show that a group initially comprising a minority of the population may nonetheless overtake the entire population--especially if it adopts the Stern Judging norm, which assigns a bad reputation to individuals who cooperate with those of bad standing. When individuals do not change group membership, stratifying reputation information into groups tends to destabilize cooperation, unless individuals are strongly insular and favor in-group social interactions. We discuss the implications of our results for the structure of information flow in a population and the evolution of social norms of moral judgment. |
1810.03855 | Ivan Lazarevich | Ivan Lazarevich, Ilya Prokin, Boris Gutkin and Victor Kazantsev | Spikebench: An open benchmark for spike train time-series classification | null | null | 10.1371/journal.pcbi.1010792 | null | q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern well-performing approaches to neural decoding are based on machine
learning models such as decision tree ensembles and deep neural networks. The
wide range of algorithms that can be utilized to learn from neural spike
trains, which are essentially time-series data, results in the need for diverse
and challenging benchmarks for neural decoding, similar to the ones in the
fields of computer vision and natural language processing. In this work, we
propose a spike train classification benchmark, based on open-access neural
activity datasets and consisting of several learning tasks such as stimulus
type classification, animal's behavioral state prediction, and neuron type
identification. We demonstrate that an approach based on hand-crafted
time-series feature engineering establishes a strong baseline performing on par
with state-of-the-art deep learning-based models for neural decoding. We
release the code that allows reproducing the reported results.
| [
{
"created": "Tue, 9 Oct 2018 08:35:47 GMT",
"version": "v1"
},
{
"created": "Sat, 11 Jan 2020 15:00:52 GMT",
"version": "v2"
},
{
"created": "Fri, 27 Jan 2023 18:42:03 GMT",
"version": "v3"
}
] | 2023-01-30 | [
[
"Lazarevich",
"Ivan",
""
],
[
"Prokin",
"Ilya",
""
],
[
"Gutkin",
"Boris",
""
],
[
"Kazantsev",
"Victor",
""
]
] | Modern well-performing approaches to neural decoding are based on machine learning models such as decision tree ensembles and deep neural networks. The wide range of algorithms that can be utilized to learn from neural spike trains, which are essentially time-series data, results in the need for diverse and challenging benchmarks for neural decoding, similar to the ones in the fields of computer vision and natural language processing. In this work, we propose a spike train classification benchmark, based on open-access neural activity datasets and consisting of several learning tasks such as stimulus type classification, animal's behavioral state prediction, and neuron type identification. We demonstrate that an approach based on hand-crafted time-series feature engineering establishes a strong baseline performing on par with state-of-the-art deep learning-based models for neural decoding. We release the code that allows reproducing the reported results. |
1402.1719 | No\'e Chan | E. Avila-Vales, B. Buonomo, N. Chan-Chi | Analysis of a mosquito-borne epidemic model with vector stages and
saturating forces of infection | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a mosquito-borne epidemic model where the vector population is
distinct in aquatic and adult stages and a saturating effect of disease
transmission is assumed to occur when the number of infectious individuals
(humans and mosquitoes) becomes large enough. Several techniques, including center manifold
analysis and sensitivity analysis, have been used to reveal relevant features
of the model dynamics. We determine the existence of stability-instability
thresholds and the individual role played in such thresholds by the model
parameters.
| [
{
"created": "Fri, 7 Feb 2014 17:50:04 GMT",
"version": "v1"
}
] | 2014-02-10 | [
[
"Avila-Vales",
"E.",
""
],
[
"Buonomo",
"B.",
""
],
[
"Chan-Chi",
"N.",
""
]
] | We study a mosquito-borne epidemic model where the vector population is distinct in aquatic and adult stages and a saturating effect of disease transmission is assumed to occur when the number of infectious individuals (humans and mosquitoes) becomes large enough. Several techniques, including center manifold analysis and sensitivity analysis, have been used to reveal relevant features of the model dynamics. We determine the existence of stability-instability thresholds and the individual role played in such thresholds by the model parameters. |
0708.0987 | Ping Ao | P Ao | Darwinian Dynamics Implies Developmental Ascendency | 3 pages, latex | Biological Theory 2 (1) (2007) 113-115 | null | null | q-bio.PE q-bio.OT | null | A tendency in biological theorizing is to formulate principles above or equal
to Evolution by Variation and Selection of Darwin and Wallace. In this letter I
analyze one such recent proposal which did so for the developmental ascendency.
I show that, though the idea of developmental ascendency is brilliant, it is
in the wrong order in the hierarchical structure of biological theories and can
easily generate confusion. Several other examples are also briefly discussed in
the note added.
| [
{
"created": "Tue, 7 Aug 2007 16:52:40 GMT",
"version": "v1"
}
] | 2007-08-08 | [
[
"Ao",
"P",
""
]
] | A tendency in biological theorizing is to formulate principles above or equal to Evolution by Variation and Selection of Darwin and Wallace. In this letter I analyze one such recent proposal which did so for the developmental ascendency. I show that, though the idea of developmental ascendency is brilliant, it is in the wrong order in the hierarchical structure of biological theories and can easily generate confusion. Several other examples are also briefly discussed in the note added. |
1306.0072 | Genki Ichinose | Genki Ichinose, Masaya Saito, Hiroki Sayama, David Sloan Wilson | Adaptive long-range migration promotes cooperation under tempting
conditions | 7 pages, 9 figures | Scientific Reports 3, 2509 (2013) | 10.1038/srep02509 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Migration is a fundamental trait in humans and animals. Recent studies
investigated the effect of migration on the evolution of cooperation, showing
that contingent migration favors cooperation in spatial structures. In those
studies, only local migration to immediate neighbors was considered, while
long-range migration has not been considered yet, partly because long-range
migration has generally been regarded as harmful to cooperation, as it would
bring the population to a well-mixed state that favors defection. Here, we
studied the effects of adaptive long-range migration on the evolution of
cooperation through agent-based simulations of a spatial Prisoner's Dilemma
game where individuals can jump to a farther site if they are surrounded by
more defectors. Our results show that adaptive long-range migration strongly
promotes cooperation, especially under conditions where the temptation to
defect is considerably high. These findings demonstrate the significance of
adaptive long-range migration for the evolution of cooperation.
| [
{
"created": "Sat, 1 Jun 2013 04:02:23 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Sep 2013 15:35:45 GMT",
"version": "v2"
}
] | 2013-11-05 | [
[
"Ichinose",
"Genki",
""
],
[
"Saito",
"Masaya",
""
],
[
"Sayama",
"Hiroki",
""
],
[
"Wilson",
"David Sloan",
""
]
] | Migration is a fundamental trait in humans and animals. Recent studies investigated the effect of migration on the evolution of cooperation, showing that contingent migration favors cooperation in spatial structures. In those studies, only local migration to immediate neighbors was considered, while long-range migration has not been considered yet, partly because long-range migration has generally been regarded as harmful to cooperation, as it would bring the population to a well-mixed state that favors defection. Here, we studied the effects of adaptive long-range migration on the evolution of cooperation through agent-based simulations of a spatial Prisoner's Dilemma game where individuals can jump to a farther site if they are surrounded by more defectors. Our results show that adaptive long-range migration strongly promotes cooperation, especially under conditions where the temptation to defect is considerably high. These findings demonstrate the significance of adaptive long-range migration for the evolution of cooperation. |
2110.04207 | Nicolas Franco | Nicolas Franco, Pietro Coletti, Lander Willem, Leonardo Angeli, Adrien
Lajot, Steven Abrams, Philippe Beutels, Christel Faes, Niel Hens | Inferring age-specific differences in susceptibility to and
infectiousness upon SARS-CoV-2 infection based on Belgian social contact data | Revised version, 17 pages, supplementary material 15 pages | PLoS Comput Biol 18(3): e1009965 (2022) | 10.1371/journal.pcbi.1009965 | null | q-bio.PE stat.CO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Several important aspects related to SARS-CoV-2 transmission are not well
known due to a lack of appropriate data. However, mathematical and
computational tools can be used to extract part of this information from the
available data, like some hidden age-related characteristics. In this paper, we
present a method to investigate age-specific differences in transmission
parameters related to susceptibility to and infectiousness upon contracting
SARS-CoV-2 infection. More specifically, we use panel-based social contact data
from diary-based surveys conducted in Belgium combined with the next generation
principle to infer the relative incidence and we compare this to real-life
incidence data. Comparing these two allows for the estimation of age-specific
transmission parameters. Our analysis implies the susceptibility in children to
be around half of the susceptibility in adults, and even lower for very young
children (preschoolers). However, the probability that adults and the elderly
contract the infection decreases throughout the vaccination campaign,
thereby modifying the picture over time.
| [
{
"created": "Fri, 8 Oct 2021 15:52:14 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Jan 2022 12:31:58 GMT",
"version": "v2"
}
] | 2022-04-01 | [
[
"Franco",
"Nicolas",
""
],
[
"Coletti",
"Pietro",
""
],
[
"Willem",
"Lander",
""
],
[
"Angeli",
"Leonardo",
""
],
[
"Lajot",
"Adrien",
""
],
[
"Abrams",
"Steven",
""
],
[
"Beutels",
"Philippe",
""
],
[
"Faes",
"Christel",
""
],
[
"Hens",
"Niel",
""
]
] | Several important aspects related to SARS-CoV-2 transmission are not well known due to a lack of appropriate data. However, mathematical and computational tools can be used to extract part of this information from the available data, like some hidden age-related characteristics. In this paper, we present a method to investigate age-specific differences in transmission parameters related to susceptibility to and infectiousness upon contracting SARS-CoV-2 infection. More specifically, we use panel-based social contact data from diary-based surveys conducted in Belgium combined with the next generation principle to infer the relative incidence and we compare this to real-life incidence data. Comparing these two allows for the estimation of age-specific transmission parameters. Our analysis implies the susceptibility in children to be around half of the susceptibility in adults, and even lower for very young children (preschoolers). However, the probability that adults and the elderly contract the infection decreases throughout the vaccination campaign, thereby modifying the picture over time. |
2109.06377 | Lana Garmire | Bing He, Yao Xiao, Haodong Liang, Qianhui Huang, Yuheng Du, Yijun Li,
David Garmire, Duxin Sun, Lana X. Garmire | ASGARD: A Single-cell Guided pipeline to Aid Repurposing of Drugs | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Intercellular heterogeneity is a major obstacle to successful precision
medicine. Single-cell RNA sequencing (scRNA-seq) technology has enabled
in-depth analysis of intercellular heterogeneity in various diseases. However,
its full potential for precision medicine has yet to be reached. Towards this,
we propose a new drug recommendation system called A Single-cell Guided
Pipeline to Aid Repurposing of Drugs (ASGARD). ASGARD defines a novel drug
score predicting drugs by considering all cell clusters to address the
intercellular heterogeneity within each patient. We tested ASGARD on multiple
diseases, including breast cancer, acute lymphoblastic leukemia, and
coronavirus disease 2019 (COVID-19). On single-drug therapy, ASGARD shows
significantly better average accuracy (AUC of 0.92) compared to two other
bulk-cell-based drug repurposing methods (AUC of 0.80 and 0.76). It is also
considerably better (AUC of 0.82) than other cell cluster level predicting
methods (AUC of 0.67 and 0.55). In addition, ASGARD is also validated by the
drug response prediction method TRANSACT with Triple-Negative-Breast-Cancer
patient samples. Many top-ranked drugs are either approved by the FDA or in
clinical trials treating corresponding diseases. In silico cell-type specific
drop-out experiments using triple-negative breast cancers show the importance
of T cells in the tumor microenvironment in affecting drug predictions. In
conclusion, ASGARD is a promising drug repurposing recommendation tool guided
by single-cell RNA-seq for personalized medicine. ASGARD is free for
educational use at https://github.com/lanagarmire/ASGARD.
| [
{
"created": "Tue, 14 Sep 2021 00:45:11 GMT",
"version": "v1"
},
{
"created": "Sun, 26 Dec 2021 16:25:38 GMT",
"version": "v2"
},
{
"created": "Tue, 31 May 2022 17:36:11 GMT",
"version": "v3"
},
{
"created": "Thu, 22 Dec 2022 21:21:58 GMT",
"version": "v4"
}
] | 2022-12-26 | [
[
"He",
"Bing",
""
],
[
"Xiao",
"Yao",
""
],
[
"Liang",
"Haodong",
""
],
[
"Huang",
"Qianhui",
""
],
[
"Du",
"Yuheng",
""
],
[
"Li",
"Yijun",
""
],
[
"Garmire",
"David",
""
],
[
"Sun",
"Duxin",
""
],
[
"Garmire",
"Lana X.",
""
]
] | Intercellular heterogeneity is a major obstacle to successful precision medicine. Single-cell RNA sequencing (scRNA-seq) technology has enabled in-depth analysis of intercellular heterogeneity in various diseases. However, its full potential for precision medicine has yet to be reached. Towards this, we propose a new drug recommendation system called A Single-cell Guided Pipeline to Aid Repurposing of Drugs (ASGARD). ASGARD defines a novel drug score predicting drugs by considering all cell clusters to address the intercellular heterogeneity within each patient. We tested ASGARD on multiple diseases, including breast cancer, acute lymphoblastic leukemia, and coronavirus disease 2019 (COVID-19). On single-drug therapy, ASGARD shows significantly better average accuracy (AUC of 0.92) compared to two other bulk-cell-based drug repurposing methods (AUC of 0.80 and 0.76). It is also considerably better (AUC of 0.82) than other cell cluster level predicting methods (AUC of 0.67 and 0.55). In addition, ASGARD is also validated by the drug response prediction method TRANSACT with Triple-Negative-Breast-Cancer patient samples. Many top-ranked drugs are either approved by the FDA or in clinical trials treating corresponding diseases. In silico cell-type specific drop-out experiments using triple-negative breast cancers show the importance of T cells in the tumor microenvironment in affecting drug predictions. In conclusion, ASGARD is a promising drug repurposing recommendation tool guided by single-cell RNA-seq for personalized medicine. ASGARD is free for educational use at https://github.com/lanagarmire/ASGARD. |
1603.05497 | Guillermo Lazaro | Guillermo R. Lazaro and Michael F. Hagan | Allosteric control in icosahedral capsid assembly | Bill Gelbart's Festschrift | null | null | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During the lifecycle of a virus, viral proteins and other components
self-assemble to form a symmetric protein shell called a capsid. This assembly
process is subject to multiple competing constraints, including the need to
form a thermostable shell while avoiding kinetic traps. It has been proposed
that viral assembly satisfies these constraints through allosteric regulation,
including the interconversion of capsid proteins among conformations with
different propensities for assembly. In this article we use computational and
theoretical modeling to explore how such allostery affects the assembly of
icosahedral shells. We simulate assembly under a wide range of protein
concentrations, protein binding affinities, and two different mechanisms of
allosteric control. We find that, above a threshold strength of allosteric
control, assembly becomes robust over a broad range of subunit binding
affinities and concentrations, allowing the formation of highly thermostable
capsids. Our results suggest that allostery can significantly shift the range
of protein binding affinities that lead to successful assembly, and thus should
be accounted for in models that are used to estimate interaction parameters
from experimental data.
| [
{
"created": "Thu, 17 Mar 2016 14:14:39 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Apr 2016 19:21:12 GMT",
"version": "v2"
}
] | 2016-04-28 | [
[
"Lazaro",
"Guillermo R.",
""
],
[
"Hagan",
"Michael F.",
""
]
] | During the lifecycle of a virus, viral proteins and other components self-assemble to form a symmetric protein shell called a capsid. This assembly process is subject to multiple competing constraints, including the need to form a thermostable shell while avoiding kinetic traps. It has been proposed that viral assembly satisfies these constraints through allosteric regulation, including the interconversion of capsid proteins among conformations with different propensities for assembly. In this article we use computational and theoretical modeling to explore how such allostery affects the assembly of icosahedral shells. We simulate assembly under a wide range of protein concentrations, protein binding affinities, and two different mechanisms of allosteric control. We find that, above a threshold strength of allosteric control, assembly becomes robust over a broad range of subunit binding affinities and concentrations, allowing the formation of highly thermostable capsids. Our results suggest that allostery can significantly shift the range of protein binding affinities that lead to successful assembly, and thus should be accounted for in models that are used to estimate interaction parameters from experimental data. |
2405.20668 | Zhiwei Wang | Zhiwei Wang, Yongkang Wang, Wen Zhang | Improving Paratope and Epitope Prediction by Multi-Modal Contrastive
Learning and Interaction Informativeness Estimation | This paper is accepted by IJCAI 2024 | null | null | null | q-bio.BM cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurately predicting antibody-antigen binding residues, i.e., paratopes and
epitopes, is crucial in antibody design. However, existing methods solely focus
on uni-modal data (either sequence or structure), disregarding the
complementary information present in multi-modal data, and most methods predict
paratopes and epitopes separately, overlooking their specific spatial
interactions. In this paper, we propose a novel Multi-modal contrastive
learning and Interaction informativeness estimation-based method for Paratope
and Epitope prediction, named MIPE, by using both sequence and structure data
of antibodies and antigens. MIPE implements a multi-modal contrastive learning
strategy, which maximizes representations of binding and non-binding residues
within each modality and meanwhile aligns uni-modal representations towards
effective modal representations. To exploit the spatial interaction
information, MIPE also incorporates an interaction informativeness estimation
that computes the estimated interaction matrices between antibodies and
antigens, thereby approximating them to the actual ones. Extensive experiments
demonstrate the superiority of our method compared to baselines. Additionally,
the ablation studies and visualizations demonstrate the superiority of MIPE
owing to the better representations acquired through multi-modal contrastive
learning and the interaction patterns comprehended by the interaction
informativeness estimation.
| [
{
"created": "Fri, 31 May 2024 08:09:36 GMT",
"version": "v1"
}
] | 2024-06-03 | [
[
"Wang",
"Zhiwei",
""
],
[
"Wang",
"Yongkang",
""
],
[
"Zhang",
"Wen",
""
]
] | Accurately predicting antibody-antigen binding residues, i.e., paratopes and epitopes, is crucial in antibody design. However, existing methods solely focus on uni-modal data (either sequence or structure), disregarding the complementary information present in multi-modal data, and most methods predict paratopes and epitopes separately, overlooking their specific spatial interactions. In this paper, we propose a novel Multi-modal contrastive learning and Interaction informativeness estimation-based method for Paratope and Epitope prediction, named MIPE, by using both sequence and structure data of antibodies and antigens. MIPE implements a multi-modal contrastive learning strategy, which maximizes representations of binding and non-binding residues within each modality and meanwhile aligns uni-modal representations towards effective modal representations. To exploit the spatial interaction information, MIPE also incorporates an interaction informativeness estimation that computes the estimated interaction matrices between antibodies and antigens, thereby approximating them to the actual ones. Extensive experiments demonstrate the superiority of our method compared to baselines. Additionally, the ablation studies and visualizations demonstrate the superiority of MIPE owing to the better representations acquired through multi-modal contrastive learning and the interaction patterns comprehended by the interaction informativeness estimation. |
2201.02055 | Abicumaran Uthamacumaran | Abicumaran Uthamacumaran and Hector Zenil | A Review of Mathematical and Computational Methods in Cancer Dynamics | 68 pages, 3 figures, 2 tables | Frontiers in Oncology (Sec. Molecular and Cellular) July 2022 | 10.3389/fonc.2022.850731 | null | q-bio.OT nlin.CD | http://creativecommons.org/licenses/by-sa/4.0/ | Cancers are complex adaptive diseases regulated by the nonlinear feedback
systems between genetic instabilities, environmental signals, cellular protein
flows, and gene regulatory networks. Understanding the cybernetics of cancer
requires the integration of information dynamics across multidimensional
spatiotemporal scales, including genetic, transcriptional, metabolic,
proteomic, epigenetic, and multi-cellular networks. However, the time-series
analysis of these complex networks remains vastly absent in cancer research.
With longitudinal screening and time-series analysis of cellular dynamics,
universally observed causal patterns pertaining to dynamical systems may
self-organize in the signaling or gene expression state-space of cancer
triggering processes. A class of these patterns, strange attractors, may be
mathematical biomarkers of cancer progression. The emergence of intracellular
chaos and chaotic cell population dynamics remains a new paradigm in systems
oncology. As such, chaotic and complex dynamics are discussed as mathematical
hallmarks of cancer cell fate dynamics herein. Given the assumption that
time-resolved single-cell datasets are made available, interdisciplinary
tools and algorithms from complexity theory are hereby surveyed to
investigate critical phenomena and chaotic dynamics in cancer
ecosystems. To conclude, the perspective cultivates an intuition for
computational systems oncology in terms of nonlinear dynamics, information
theory, inverse problems and complexity. We highlight the limitations we see in
the area of statistical machine learning but also the opportunity of combining it
with the symbolic computational power offered by the mathematical tools
explored.
| [
{
"created": "Wed, 5 Jan 2022 05:38:05 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Jan 2022 07:09:11 GMT",
"version": "v2"
},
{
"created": "Thu, 13 Jan 2022 02:19:53 GMT",
"version": "v3"
},
{
"created": "Sun, 23 Jan 2022 21:34:24 GMT",
"version": "v4"
},
{
"created": "Tue, 19 Apr 2022 17:06:55 GMT",
"version": "v5"
},
{
"created": "Sun, 28 Aug 2022 00:22:14 GMT",
"version": "v6"
}
] | 2022-08-30 | [
[
"Uthamacumaran",
"Abicumaran",
""
],
[
"Zenil",
"Hector",
""
]
] | Cancers are complex adaptive diseases regulated by the nonlinear feedback systems between genetic instabilities, environmental signals, cellular protein flows, and gene regulatory networks. Understanding the cybernetics of cancer requires the integration of information dynamics across multidimensional spatiotemporal scales, including genetic, transcriptional, metabolic, proteomic, epigenetic, and multi-cellular networks. However, the time-series analysis of these complex networks remains vastly absent in cancer research. With longitudinal screening and time-series analysis of cellular dynamics, universally observed causal patterns pertaining to dynamical systems may self-organize in the signaling or gene expression state-space of cancer triggering processes. A class of these patterns, strange attractors, may be mathematical biomarkers of cancer progression. The emergence of intracellular chaos and chaotic cell population dynamics remains a new paradigm in systems oncology. As such, chaotic and complex dynamics are discussed as mathematical hallmarks of cancer cell fate dynamics herein. Given the assumption that time-resolved single-cell datasets are made available, interdisciplinary tools and algorithms from complexity theory are hereby surveyed to investigate critical phenomena and chaotic dynamics in cancer ecosystems. To conclude, the perspective cultivates an intuition for computational systems oncology in terms of nonlinear dynamics, information theory, inverse problems and complexity. We highlight the limitations we see in the area of statistical machine learning but also the opportunity of combining it with the symbolic computational power offered by the mathematical tools explored. |
q-bio/0609026 | Subhadip Raychaudhuri | Philippos K. Tsourkas, Nicole Baumgarth, Scott I. Simon, Subhadip
Raychaudhuri | Mechanisms of B cell Synapse Formation Predicted by Stochastic
Simulation | 35 pages, 11 figures; Supplemental Materials added | null | 10.1529/biophysj.106.094995 | null | q-bio.QM q-bio.SC | null | The clustering of B cell receptor (BCR) molecules and the formation of the
protein segregation structure known as the immunological synapse appear to
precede antigen (Ag) uptake by B cells. The mature B cell synapse is
characterized by a central cluster of BCR/Ag molecular complexes surrounded by
a ring of LFA-1/ICAM-1 complexes. Recent experimental evidence shows receptor
clustering in B cells can occur via mechanical or signaling-driven processes.
An alternative mechanism of diffusion and affinity-dependent binding has been
proposed to explain synapse formation in the absence of signaling-driven
processes. In this work, we investigated the biophysical mechanisms that drive
immunological synapse formation in B cells across the physiological range of
BCR affinity (KA~10^6-10^10 M-1) through computational modeling. Our
computational approach is based on stochastic simulation of diffusion and
reaction events with a clearly defined mapping between probabilistic parameters
of our model and their physical equivalents. We show that a
diffusion-and-binding mechanism is sufficient to drive synapse formation only
at low BCR affinity and for a relatively stiff B cell membrane that undergoes
little deformation. We thus predict the need for alternative mechanisms: a
difference in the mechanical properties of BCR/Ag and LFA-1/ICAM-1 bonds and/or
signaling driven processes.
| [
{
"created": "Mon, 18 Sep 2006 00:30:09 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Oct 2006 17:32:11 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Tsourkas",
"Philippos K.",
""
],
[
"Baumgarth",
"Nicole",
""
],
[
"Simon",
"Scott I.",
""
],
[
"Raychaudhuri",
"Subhadip",
""
]
] | The clustering of B cell receptor (BCR) molecules and the formation of the protein segregation structure known as the immunological synapse appear to precede antigen (Ag) uptake by B cells. The mature B cell synapse is characterized by a central cluster of BCR/Ag molecular complexes surrounded by a ring of LFA-1/ICAM-1 complexes. Recent experimental evidence shows receptor clustering in B cells can occur via mechanical or signaling-driven processes. An alternative mechanism of diffusion and affinity-dependent binding has been proposed to explain synapse formation in the absence of signaling-driven processes. In this work, we investigated the biophysical mechanisms that drive immunological synapse formation in B cells across the physiological range of BCR affinity (KA~10^6-10^10 M-1) through computational modeling. Our computational approach is based on stochastic simulation of diffusion and reaction events with a clearly defined mapping between probabilistic parameters of our model and their physical equivalents. We show that a diffusion-and-binding mechanism is sufficient to drive synapse formation only at low BCR affinity and for a relatively stiff B cell membrane that undergoes little deformation. We thus predict the need for alternative mechanisms: a difference in the mechanical properties of BCR/Ag and LFA-1/ICAM-1 bonds and/or signaling driven processes. |
1608.02756 | Abderrahim Chafik Dr. | Abderrahim Chafik | miR-34a-5p and miR-34a-3p contribute to the signaling pathway of p53 by
targeting overlapping sets of genes | 15 pages, 3 figures , 1 table | null | null | null | q-bio.MN | http://creativecommons.org/publicdomain/zero/1.0/ | In contrary to the common belief that only one strand of the pre-miRNA is
active (usually the 5p one that is the more abundant) while the second one
(miRNA*) is discarded, functional 5p and 3p have been observed for many miRNAs.
Among those miRNAs is miR-34a which is a target gene of the tumor suppressor
p53. In this paper we have re-examined the role of miR-34a-5p and miR-34a-3p in
the signaling pathway of p53. We found that they target overlapping sets of
genes (MDM2 and THBS1). By a GO enrichment analysis we found that THBS1 is
involved in cancer- and metastasis-relevant processes. We have also deduced that
p53, MDM2 and miR-34a are linked by a type 1 incoherent FFL that represents a
novel mechanism for accelerating the response of p53 to external stress
signals.
| [
{
"created": "Tue, 9 Aug 2016 10:47:44 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Aug 2016 08:14:23 GMT",
"version": "v2"
}
] | 2016-08-11 | [
[
"Chafik",
"Abderrahim",
""
]
] | Contrary to the common belief that only one strand of the pre-miRNA is active (usually the 5p one that is the more abundant) while the second one (miRNA*) is discarded, functional 5p and 3p have been observed for many miRNAs. Among those miRNAs is miR-34a which is a target gene of the tumor suppressor p53. In this paper we have re-examined the role of miR-34a-5p and miR-34a-3p in the signaling pathway of p53. We found that they target overlapping sets of genes (MDM2 and THBS1). By a GO enrichment analysis we found that THBS1 is involved in cancer- and metastasis-relevant processes. We have also deduced that p53, MDM2 and miR-34a are linked by a type 1 incoherent FFL that represents a novel mechanism for accelerating the response of p53 to external stress signals. |
1605.02809 | Grzegorz A Rempala | Karly A. Jacobsen and Mark G. Burch and Joseph H. Tien and Grzegorz A.
Rempa{\l}a | The large graph limit of a stochastic epidemic model on a dynamic
multilayer network | 33 pages 2 figures | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider an SIR-type (Susceptible $\to$ Infected $\to$ Recovered)
stochastic epidemic process with multiple modes of transmission on a contact
network. The network is given by a random graph following a multilayer
configuration model where edges in different layers correspond to potentially
infectious contacts of different types. We assume that the graph structure
evolves in response to the epidemic via activation or deactivation of edges. We
derive a large graph limit theorem that gives a system of ordinary differential
equations (ODEs) describing the evolution of quantities of interest, such as
the proportions of infected and susceptible vertices, as the number of nodes
tends to infinity. Analysis of the limiting system elucidates how the coupling
of edge activation and deactivation to infection status affects disease
dynamics, as illustrated by a two-layer network example with edge types
corresponding to community and healthcare contacts. Our theorem extends some
earlier results deriving the deterministic limit of stochastic SIR processes on
static, single-layer configuration model graphs. We also describe precisely the
conditions for equivalence between our limiting ODEs and the systems obtained
via pair approximation, which are widely used in the epidemiological and
ecological literature to approximate disease dynamics on networks. Potential
applications include modeling Ebola dynamics in West Africa, which was the
motivation for this study.
| [
{
"created": "Tue, 10 May 2016 00:03:38 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Aug 2018 22:31:19 GMT",
"version": "v2"
}
] | 2018-08-22 | [
[
"Jacobsen",
"Karly A.",
""
],
[
"Burch",
"Mark G.",
""
],
[
"Tien",
"Joseph H.",
""
],
[
"Rempała",
"Grzegorz A.",
""
]
] | We consider an SIR-type (Susceptible $\to$ Infected $\to$ Recovered) stochastic epidemic process with multiple modes of transmission on a contact network. The network is given by a random graph following a multilayer configuration model where edges in different layers correspond to potentially infectious contacts of different types. We assume that the graph structure evolves in response to the epidemic via activation or deactivation of edges. We derive a large graph limit theorem that gives a system of ordinary differential equations (ODEs) describing the evolution of quantities of interest, such as the proportions of infected and susceptible vertices, as the number of nodes tends to infinity. Analysis of the limiting system elucidates how the coupling of edge activation and deactivation to infection status affects disease dynamics, as illustrated by a two-layer network example with edge types corresponding to community and healthcare contacts. Our theorem extends some earlier results deriving the deterministic limit of stochastic SIR processes on static, single-layer configuration model graphs. We also describe precisely the conditions for equivalence between our limiting ODEs and the systems obtained via pair approximation, which are widely used in the epidemiological and ecological literature to approximate disease dynamics on networks. Potential applications include modeling Ebola dynamics in West Africa, which was the motivation for this study. |
2403.11277 | Md. Kamrujjaman | Kazi Mehedi Mohammad, Asma Akter Akhi and Md. Kamrujjaman | Bifurcation Analysis of an Influenza A (H1N1) Model with Treatment and
Vaccination | 61 pages; 41 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | This study focuses on the modeling, mathematical analysis, developing
theories, and numerical simulation of Influenza virus transmission. We have
proved the existence, uniqueness, positivity, and boundedness of the solutions.
We also investigate the qualitative behavior of the models and find the basic
reproduction number $(\mathcal{R}_0)$ that guarantees the asymptotic stability
of the disease-free and endemic equilibrium points. The local and global
asymptotic stability of the disease free state and endemic equilibrium of the
system is analyzed with the Lyapunov method, Routh-Hurwitz, and other criteria
and presented graphically. This study helps to investigate the effectiveness of
control policy and makes suggestions for alternative control policies.
Bifurcation analyses are carried out to determine prevention strategies.
Transcritical, Hopf, and backward bifurcation analyses are displayed
analytically and numerically to show the dynamics of disease transmission in
different cases. Moreover, analyses of contour plots, box plots, relative biases,
and phase portraits are presented to show the influential parameters to curtail the
disease outbreak. We are interested in finding the nature of $\mathcal{R}_0$,
which determines whether the disease dies out or persists in the population.
The findings indicate that the dynamics of the model are determined by the
threshold parameter $\mathcal{R}_0$.
| [
{
"created": "Sun, 17 Mar 2024 17:33:25 GMT",
"version": "v1"
}
] | 2024-03-19 | [
[
"Mohammad",
"Kazi Mehedi",
""
],
[
"Akhi",
"Asma Akter",
""
],
[
"Kamrujjaman",
"Md.",
""
]
] | This study focuses on the modeling, mathematical analysis, developing theories, and numerical simulation of Influenza virus transmission. We have proved the existence, uniqueness, positivity, and boundedness of the solutions. We also investigate the qualitative behavior of the models and find the basic reproduction number $(\mathcal{R}_0)$ that guarantees the asymptotic stability of the disease-free and endemic equilibrium points. The local and global asymptotic stability of the disease free state and endemic equilibrium of the system is analyzed with the Lyapunov method, Routh-Hurwitz, and other criteria and presented graphically. This study helps to investigate the effectiveness of control policy and makes suggestions for alternative control policies. Bifurcation analyses are carried out to determine prevention strategies. Transcritical, Hopf, and backward bifurcation analyses are displayed analytically and numerically to show the dynamics of disease transmission in different cases. Moreover, analyses of contour plots, box plots, relative biases, and phase portraits are presented to show the influential parameters to curtail the disease outbreak. We are interested in finding the nature of $\mathcal{R}_0$, which determines whether the disease dies out or persists in the population. The findings indicate that the dynamics of the model are determined by the threshold parameter $\mathcal{R}_0$. |
1904.12937 | Manuel Baltieri Mr | Manuel Baltieri, Christopher L. Buckley | Generative models as parsimonious descriptions of sensorimotor loops | Commentary on Brette (2019) https://doi.org/10.1017/S0140525X19000049 | Behav Brain Sci 42 (2019) e218 | 10.1017/S0140525X19001353 | null | q-bio.NC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Bayesian brain hypothesis, predictive processing and variational free
energy minimisation are typically used to describe perceptual processes based
on accurate generative models of the world. However, generative models need not
be veridical representations of the environment. We suggest that they can (and
should) be used to describe sensorimotor relationships relevant for behaviour
rather than precise accounts of the world.
| [
{
"created": "Mon, 29 Apr 2019 20:27:38 GMT",
"version": "v1"
}
] | 2019-12-04 | [
[
"Baltieri",
"Manuel",
""
],
[
"Buckley",
"Christopher L.",
""
]
] | The Bayesian brain hypothesis, predictive processing and variational free energy minimisation are typically used to describe perceptual processes based on accurate generative models of the world. However, generative models need not be veridical representations of the environment. We suggest that they can (and should) be used to describe sensorimotor relationships relevant for behaviour rather than precise accounts of the world. |
1805.08777 | Reza Mosayebi | Reza Mosayebi, Arman Ahmadzadeh, Wayan Wicke, Vahid Jamali, Robert
Schober, and Masoumeh Nasiri-Kenari | Early Cancer Detection in Blood Vessels Using Mobile Nanosensors | null | null | null | null | q-bio.TO cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose using mobile nanosensors (MNSs) for early stage
anomaly detection. For concreteness, we focus on the detection of cancer cells
located in a particular region of a blood vessel. These cancer cells produce
and emit special molecules, so-called biomarkers, which are symptomatic for the
presence of anomaly, into the cardiovascular system. Detection of cancer
biomarkers with conventional blood tests is difficult in the early stages of a
cancer due to the very low concentration of the biomarkers in the samples
taken. However, close to the cancer cells, the concentration of the cancer
biomarkers is high. Hence, detection is possible if a sensor with the ability
to detect these biomarkers is placed in the vicinity of the cancer cells.
Therefore, in this paper, we study the use of MNSs that are injected at a
suitable injection site and can move through the blood vessels of the
cardiovascular system, which potentially contain cancer cells. These MNSs can
be activated by the biomarkers close to the cancer cells, where the biomarker
concentration is sufficiently high. Eventually, the MNSs are collected by a
fusion center (FC) where their activation levels are read and exploited to
declare the presence of anomaly. We analytically derive the biomarker
concentration as well as the probability mass function of the MNSs' activation
levels and validate the obtained results via particle-based simulations. Then,
we derive the optimal decision rule for the FC regarding the presence of
anomaly assuming that the entire network is known at the FC. Finally, for the
FC, we propose a simple sum detector that does not require knowledge of the
network topology. Our simulations reveal that while the optimal detector
achieves a higher performance than the sum detector, both proposed detectors
significantly outperform a benchmark scheme that used fixed nanosensors at the
FC.
| [
{
"created": "Tue, 22 May 2018 08:54:49 GMT",
"version": "v1"
}
] | 2018-05-24 | [
[
"Mosayebi",
"Reza",
""
],
[
"Ahmadzadeh",
"Arman",
""
],
[
"Wicke",
"Wayan",
""
],
[
"Jamali",
"Vahid",
""
],
[
"Schober",
"Robert",
""
],
[
"Nasiri-Kenari",
"Masoumeh",
""
]
] | In this paper, we propose using mobile nanosensors (MNSs) for early stage anomaly detection. For concreteness, we focus on the detection of cancer cells located in a particular region of a blood vessel. These cancer cells produce and emit special molecules, so-called biomarkers, which are symptomatic for the presence of anomaly, into the cardiovascular system. Detection of cancer biomarkers with conventional blood tests is difficult in the early stages of a cancer due to the very low concentration of the biomarkers in the samples taken. However, close to the cancer cells, the concentration of the cancer biomarkers is high. Hence, detection is possible if a sensor with the ability to detect these biomarkers is placed in the vicinity of the cancer cells. Therefore, in this paper, we study the use of MNSs that are injected at a suitable injection site and can move through the blood vessels of the cardiovascular system, which potentially contain cancer cells. These MNSs can be activated by the biomarkers close to the cancer cells, where the biomarker concentration is sufficiently high. Eventually, the MNSs are collected by a fusion center (FC) where their activation levels are read and exploited to declare the presence of anomaly. We analytically derive the biomarker concentration as well as the probability mass function of the MNSs' activation levels and validate the obtained results via particle-based simulations. Then, we derive the optimal decision rule for the FC regarding the presence of anomaly assuming that the entire network is known at the FC. Finally, for the FC, we propose a simple sum detector that does not require knowledge of the network topology. Our simulations reveal that while the optimal detector achieves a higher performance than the sum detector, both proposed detectors significantly outperform a benchmark scheme that used fixed nanosensors at the FC. |
1508.05499 | Peter Clote Peter Clote | Juan Antonio Garcia-Martin and Peter Clote | RNA thermodynamic structural entropy | null | null | 10.1371/journal.pone.0137859 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conformational entropy for atomic-level, three dimensional biomolecules is
known experimentally to play an important role in protein-ligand
discrimination, yet reliable computation of entropy remains a difficult
problem. Here we describe the first two accurate and efficient algorithms to
compute the conformational entropy for RNA secondary structures, with respect
to the Turner energy model, where free energy parameters are determined from UV
absorption experiments. An algorithm to compute the derivational entropy for RNA
secondary structures had previously been introduced, using stochastic context
free grammars (SCFGs). However, the numerical value of derivational entropy
depends heavily on the chosen context free grammar and on the training set used
to estimate rule probabilities. Using data from the Rfam database, we determine
that both of our thermodynamic methods, which agree in numerical value, are
substantially faster than the SCFG method. Thermodynamic structural entropy is
much smaller than derivational entropy, and the correlation between
length-normalized thermodynamic entropy and derivational entropy is moderately
weak to poor. In applications, we plot the structural entropy as a function of
temperature for known thermoswitches, determine that the correlation between
hammerhead ribozyme cleavage activity and total free energy is improved by
including an additional free energy term arising from conformational entropy,
and plot the structural entropy of windows of the HIV-1 genome. Our software
RNAentropy can compute structural entropy for any user-specified temperature,
and supports both the Turner'99 and Turner'04 energy parameters. It follows
that RNAentropy is state-of-the-art software to compute RNA secondary structure
conformational entropy. The software is available at
http://bioinformatics.bc.edu/clotelab/RNAentropy.
| [
{
"created": "Sat, 22 Aug 2015 12:11:09 GMT",
"version": "v1"
}
] | 2016-02-17 | [
[
"Garcia-Martin",
"Juan Antonio",
""
],
[
"Clote",
"Peter",
""
]
] | Conformational entropy for atomic-level, three dimensional biomolecules is known experimentally to play an important role in protein-ligand discrimination, yet reliable computation of entropy remains a difficult problem. Here we describe the first two accurate and efficient algorithms to compute the conformational entropy for RNA secondary structures, with respect to the Turner energy model, where free energy parameters are determined from UV absorption experiments. An algorithm to compute the derivational entropy for RNA secondary structures had previously been introduced, using stochastic context free grammars (SCFGs). However, the numerical value of derivational entropy depends heavily on the chosen context free grammar and on the training set used to estimate rule probabilities. Using data from the Rfam database, we determine that both of our thermodynamic methods, which agree in numerical value, are substantially faster than the SCFG method. Thermodynamic structural entropy is much smaller than derivational entropy, and the correlation between length-normalized thermodynamic entropy and derivational entropy is moderately weak to poor. In applications, we plot the structural entropy as a function of temperature for known thermoswitches, determine that the correlation between hammerhead ribozyme cleavage activity and total free energy is improved by including an additional free energy term arising from conformational entropy, and plot the structural entropy of windows of the HIV-1 genome. Our software RNAentropy can compute structural entropy for any user-specified temperature, and supports both the Turner'99 and Turner'04 energy parameters. It follows that RNAentropy is state-of-the-art software to compute RNA secondary structure conformational entropy. The software is available at http://bioinformatics.bc.edu/clotelab/RNAentropy. |