Dataset column summary (field: type, observed range):

id: string, length 9 to 13
submitter: string, length 4 to 48
authors: string, length 4 to 9.62k
title: string, length 4 to 343
comments: string, length 2 to 480
journal-ref: string, length 9 to 309
doi: string, length 12 to 138
report-no: string, 277 distinct values
categories: string, length 8 to 87
license: string, 9 distinct values
orig_abstract: string, length 27 to 3.76k
versions: list, length 1 to 15
update_date: string, length 10 (fixed)
authors_parsed: list, length 1 to 147
abstract: string, length 24 to 3.75k
1911.01321
Daniel Friedman
Daniel A Friedman, Brian R Johnson, Timothy Linksvayer
Decentralized physiology and the molecular basis of social life in eusocial insects
32 pages, 1 Figure
null
null
null
q-bio.PE q-bio.TO
http://creativecommons.org/licenses/by-nc-sa/4.0/
The traditional focus of physiological and functional genomic research is on molecular processes that play out within a single body. In contrast, when social interactions occur, molecular and behavioral responses in interacting individuals can lead to physiological processes that are distributed across multiple individuals. In eusocial insect colonies, such multi-body processes are tightly integrated, involving social communication mechanisms that regulate the physiology of colony members. As a result, conserved physiological mechanisms, for example those related to pheromone detection and neural signaling pathways, are deployed in novel contexts and regulate emergent colony traits during the evolutionary origin and elaboration of social complexity. Here we review conceptual frameworks for organismal and colony physiology, and highlight functional genomic, physiological, and behavioral research exploring how colony-level traits arise from physical and chemical interactions among physiologically specialized nestmates of various developmental stages. We consider similarities and differences between organismal and colony physiology, and make specific predictions based on a decentralized perspective on the function and evolution of colony traits. Integrated models of colony physiological function will be useful to address fundamental questions related to the evolution and ecology of collective behavior in natural systems.
[ { "created": "Mon, 4 Nov 2019 16:32:09 GMT", "version": "v1" } ]
2019-11-05
[ [ "Friedman", "Daniel A", "" ], [ "Johnson", "Brian R", "" ], [ "Linksvayer", "Timothy", "" ] ]
The traditional focus of physiological and functional genomic research is on molecular processes that play out within a single body. In contrast, when social interactions occur, molecular and behavioral responses in interacting individuals can lead to physiological processes that are distributed across multiple individuals. In eusocial insect colonies, such multi-body processes are tightly integrated, involving social communication mechanisms that regulate the physiology of colony members. As a result, conserved physiological mechanisms, for example those related to pheromone detection and neural signaling pathways, are deployed in novel contexts and regulate emergent colony traits during the evolutionary origin and elaboration of social complexity. Here we review conceptual frameworks for organismal and colony physiology, and highlight functional genomic, physiological, and behavioral research exploring how colony-level traits arise from physical and chemical interactions among physiologically specialized nestmates of various developmental stages. We consider similarities and differences between organismal and colony physiology, and make specific predictions based on a decentralized perspective on the function and evolution of colony traits. Integrated models of colony physiological function will be useful to address fundamental questions related to the evolution and ecology of collective behavior in natural systems.
2205.05789
Daniel Hesslow
Daniel Hesslow, Niccol\'o Zanichelli, Pascal Notin, Iacopo Poli and Debora Marks
RITA: a Study on Scaling Up Generative Protein Sequence Models
null
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we introduce RITA: a suite of autoregressive generative models for protein sequences, with up to 1.2 billion parameters, trained on over 280 million protein sequences belonging to the UniRef-100 database. Such generative models hold the promise of greatly accelerating protein design. We conduct the first systematic study of how capabilities evolve with model size for autoregressive transformers in the protein domain: we evaluate RITA models in next amino acid prediction, zero-shot fitness, and enzyme function prediction, showing benefits from increased scale. We release the RITA models openly, to the benefit of the research community.
[ { "created": "Wed, 11 May 2022 22:06:03 GMT", "version": "v1" }, { "created": "Thu, 14 Jul 2022 21:46:47 GMT", "version": "v2" } ]
2022-07-18
[ [ "Hesslow", "Daniel", "" ], [ "Zanichelli", "Niccoló", "" ], [ "Notin", "Pascal", "" ], [ "Poli", "Iacopo", "" ], [ "Marks", "Debora", "" ] ]
In this work we introduce RITA: a suite of autoregressive generative models for protein sequences, with up to 1.2 billion parameters, trained on over 280 million protein sequences belonging to the UniRef-100 database. Such generative models hold the promise of greatly accelerating protein design. We conduct the first systematic study of how capabilities evolve with model size for autoregressive transformers in the protein domain: we evaluate RITA models in next amino acid prediction, zero-shot fitness, and enzyme function prediction, showing benefits from increased scale. We release the RITA models openly, to the benefit of the research community.
2401.00476
Chinwe Anyanwu Dr.
Anyanwu, C. F, Georgewill, O. A. and Obinna, Victoria C
Reproductive outcome in female Wistar rats treated with n-hexane, dichloromethane and aqueous ethanol extracts of Cucurbita pepo seed
5 tables, 2 figures
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
In developing countries, healthcare challenges and expensive infertility treatments have resulted in a resurgent interest in medicinal plants. This study was designed to determine whether Cucurbita pepo seed can enhance female fertility, by assessing the reproductive outcome in female Wistar rats treated with n-hexane (nHE), dichloromethane (DCM) and aqueous ethanol (Aq. Eth) extracts of Cucurbita pepo seed. A total of 48 rats, randomly assigned to 12 groups (n=4), were treated for 21 days by oral gavage as follows: A (control) = 0.5 ml 20% Tween 80 (vehicle); B (positive control) = 10 mg/kg clomiphene citrate; C, D & E = 142.86, 285.71 and 428.57 mg/kg nHE; F, G & H = 142.86, 285.71 and 428.57 mg/kg DCM; and I, J & K = 142.86, 285.71 and 428.57 mg/kg Aq. Eth extracts. Group L (positive control 2) = 10 mg/kg clomiphene citrate for 8 days. Following treatment, the rats were paired with males for mating, designating the confirmation day as gestational day 0 (GD 0). On GD 20, the animals were laparotomised and reproductive outcome was determined by assessing foetal weight, foetal crown-rump length, litter size, and the number of implantation and resorption sites. Results showed that none of the extracts had a significant (p > 0.05) effect on the reproductive outcome indices, whereas clomiphene citrate significantly decreased them. In conclusion, Cucurbita pepo seed did not enhance the reproductive outcome of treated female rats at the doses and duration used in this study. This finding may serve as a springboard for future studies exploring the effect of C. pepo at different doses or durations.
[ { "created": "Sun, 31 Dec 2023 12:32:05 GMT", "version": "v1" } ]
2024-01-02
[ [ "Anyanwu", "C. F.", "" ], [ "Georgewill", "O. A.", "" ], [ "Obinna", "Victoria C.", "" ] ]
In developing countries, healthcare challenges and expensive infertility treatments have resulted in a resurgent interest in medicinal plants. This study was designed to determine whether Cucurbita pepo seed can enhance female fertility, by assessing the reproductive outcome in female Wistar rats treated with n-hexane (nHE), dichloromethane (DCM) and aqueous ethanol (Aq. Eth) extracts of Cucurbita pepo seed. A total of 48 rats, randomly assigned to 12 groups (n=4), were treated for 21 days by oral gavage as follows: A (control) = 0.5 ml 20% Tween 80 (vehicle); B (positive control) = 10 mg/kg clomiphene citrate; C, D & E = 142.86, 285.71 and 428.57 mg/kg nHE; F, G & H = 142.86, 285.71 and 428.57 mg/kg DCM; and I, J & K = 142.86, 285.71 and 428.57 mg/kg Aq. Eth extracts. Group L (positive control 2) = 10 mg/kg clomiphene citrate for 8 days. Following treatment, the rats were paired with males for mating, designating the confirmation day as gestational day 0 (GD 0). On GD 20, the animals were laparotomised and reproductive outcome was determined by assessing foetal weight, foetal crown-rump length, litter size, and the number of implantation and resorption sites. Results showed that none of the extracts had a significant (p > 0.05) effect on the reproductive outcome indices, whereas clomiphene citrate significantly decreased them. In conclusion, Cucurbita pepo seed did not enhance the reproductive outcome of treated female rats at the doses and duration used in this study. This finding may serve as a springboard for future studies exploring the effect of C. pepo at different doses or durations.
q-bio/0608040
Ting Chen
Ting Chen and Sharon C. Glotzer
Simulation Studies of A Phenomenological Model for the Assembly of Elongated Virus Capsids
17 pages and 5 figures
null
null
null
q-bio.BM
null
We extend our previously developed general approach (1) to study a phenomenological model in which the simulated packing of hard, attractive spheres on a prolate spheroid surface with convexity constraints produces structures identical to those of prolate virus capsids. Our simulation approach combines the traditional Monte Carlo method with random sampling on an ellipsoidal surface and a convex-hull search algorithm. Using this approach we study the assembly and structural origin of non-icosahedral, elongated virus capsids, such as two aberrant flock house virus (FHV) particles and the prolate prohead of bacteriophage phi29, and discuss the implications of our simulation results in the context of recent experimental findings.
[ { "created": "Mon, 28 Aug 2006 21:12:32 GMT", "version": "v1" } ]
2007-05-23
[ [ "Chen", "Ting", "" ], [ "Glotzer", "Sharon C.", "" ] ]
We extend our previously developed general approach (1) to study a phenomenological model in which the simulated packing of hard, attractive spheres on a prolate spheroid surface with convexity constraints produces structures identical to those of prolate virus capsids. Our simulation approach combines the traditional Monte Carlo method with random sampling on an ellipsoidal surface and a convex-hull search algorithm. Using this approach we study the assembly and structural origin of non-icosahedral, elongated virus capsids, such as two aberrant flock house virus (FHV) particles and the prolate prohead of bacteriophage phi29, and discuss the implications of our simulation results in the context of recent experimental findings.
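One ingredient of the scheme above, random sampling on an ellipsoidal surface, can be sketched on its own. The following is a minimal illustration (not the authors' code) of rejection sampling of points uniformly distributed on a prolate spheroid, using the standard area-element weighting of points mapped from the unit sphere:

```python
import math
import random

def sample_prolate_surface(a, c):
    """Rejection-sample a point uniformly on the surface of the prolate
    spheroid x^2/a^2 + y^2/a^2 + z^2/c^2 = 1 (c > a for a prolate shape)."""
    while True:
        # Uniform direction on the unit sphere via the Gaussian trick.
        u = [random.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(v * v for v in u))
        u = [v / norm for v in u]
        # Area-element weight of the sphere-to-spheroid map at this point.
        w = math.sqrt(c * c * (u[0] ** 2 + u[1] ** 2) + a * a * u[2] ** 2)
        # Accept proportionally to the weight (max weight is max(a, c)).
        if random.random() * max(a, c) <= w:
            return (a * u[0], a * u[1], c * u[2])
```

Without the acceptance step, simply scaling sphere points would over-represent the poles of the spheroid; the weighting corrects for the non-uniform stretching of surface area.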
2001.02371
Debaldev Jana
N. S. N. V. K. Vyshnavi Devi, Debaldev Jana and M. Lakshmanan
Dynamics of a non-autonomous system with age-structured growth and harvesting of prey and mutually interfering predator with reliance on alternative food
null
null
null
null
q-bio.PE nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We perform a detailed analysis of the behaviour of a non-autonomous prey-predator model in which age-based growth with age-discriminatory harvesting of the prey, and the predator's reliance on alternative food in the absence of that prey, are considered. We begin by deriving sufficient conditions for permanence and positive invariance, and then construct a Lyapunov function to derive constraints for global attractivity. With the help of the continuation theorem we arrive at a best-fit criterion to prove the occurrence of a positive periodic solution. Moreover, using the Arzela-Ascoli theorem we formulate a proof that a unique positive solution is almost periodic, and we carry out numerical simulations to verify the analytical findings. With the aid of graphs and tables we show the response of the prey-predator system to the alternative food and the delays.
[ { "created": "Wed, 8 Jan 2020 04:51:53 GMT", "version": "v1" } ]
2020-01-09
[ [ "Devi", "N. S. N. V. K. Vyshnavi", "" ], [ "Jana", "Debaldev", "" ], [ "Lakshmanan", "M.", "" ] ]
We perform a detailed analysis of the behaviour of a non-autonomous prey-predator model in which age-based growth with age-discriminatory harvesting of the prey, and the predator's reliance on alternative food in the absence of that prey, are considered. We begin by deriving sufficient conditions for permanence and positive invariance, and then construct a Lyapunov function to derive constraints for global attractivity. With the help of the continuation theorem we arrive at a best-fit criterion to prove the occurrence of a positive periodic solution. Moreover, using the Arzela-Ascoli theorem we formulate a proof that a unique positive solution is almost periodic, and we carry out numerical simulations to verify the analytical findings. With the aid of graphs and tables we show the response of the prey-predator system to the alternative food and the delays.
q-bio/0701051
Brigitte Gaillard
C. Gilbert (DEPE-Iphc), G. Robertson, Y. Le Maho (DEPE-Iphc), Y. Naito, A. Ancel (DEPE-Iphc)
Huddling behavior in emperor penguins : dynamics of huddling
null
Physiol. Behav. 88 (2006) 479-488
10.1016/j.physbeh.2006.04.024
null
q-bio.PE
null
Although huddling was shown to be the key by which emperor penguins (Aptenodytes forsteri) save energy and sustain their breeding fast during the Antarctic winter, the intricacies of this social behavior have been poorly studied. We recorded abiotic variables with data loggers glued to the feathers of eight individually marked emperor penguins to investigate their thermoregulatory behavior and to estimate their "huddling time budget" throughout the breeding season (pairing and incubation period). Contrary to the classic view, huddling episodes were discontinuous and of short and variable duration, lasting 1.6+/-1.7 (S.D.) h on average. Despite heterogeneous huddling groups, birds had equal access to the warmth of the huddles. Throughout the breeding season, males huddled for 38+/-18% (S.D.) of their time, which raised the ambient temperature that birds were exposed to above 0 degrees C (at average external temperatures of -17 degrees C). As a consequence of tight huddles, ambient temperatures were above 20 degrees C during 13+/-12% (S.D.) of their huddling time. Ambient temperatures increased up to 37.5 degrees C, close to the birds' body temperature. This complex social behavior therefore enables all breeders to get regular and equal access to an environment which allows them to save energy and successfully incubate their eggs during the Antarctic winter.
[ { "created": "Tue, 30 Jan 2007 15:16:25 GMT", "version": "v1" } ]
2007-05-23
[ [ "Gilbert", "C.", "", "DEPE-Iphc" ], [ "Robertson", "G.", "", "DEPE-Iphc" ], [ "Maho", "Y. Le", "", "DEPE-Iphc" ], [ "Naito", "Y.", "", "DEPE-Iphc" ], [ "Ancel", "A.", "", "DEPE-Iphc" ] ]
Although huddling was shown to be the key by which emperor penguins (Aptenodytes forsteri) save energy and sustain their breeding fast during the Antarctic winter, the intricacies of this social behavior have been poorly studied. We recorded abiotic variables with data loggers glued to the feathers of eight individually marked emperor penguins to investigate their thermoregulatory behavior and to estimate their "huddling time budget" throughout the breeding season (pairing and incubation period). Contrary to the classic view, huddling episodes were discontinuous and of short and variable duration, lasting 1.6+/-1.7 (S.D.) h on average. Despite heterogeneous huddling groups, birds had equal access to the warmth of the huddles. Throughout the breeding season, males huddled for 38+/-18% (S.D.) of their time, which raised the ambient temperature that birds were exposed to above 0 degrees C (at average external temperatures of -17 degrees C). As a consequence of tight huddles, ambient temperatures were above 20 degrees C during 13+/-12% (S.D.) of their huddling time. Ambient temperatures increased up to 37.5 degrees C, close to the birds' body temperature. This complex social behavior therefore enables all breeders to get regular and equal access to an environment which allows them to save energy and successfully incubate their eggs during the Antarctic winter.
1108.1979
Bohdan Kozarzewski
B. Kozarzewski
Similarity of symbolic sequences
null
null
null
null
q-bio.QM physics.bio-ph q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new numerical characterization of symbolic sequences is proposed. The starting point is a partition of the sequence based on the Ke and Tong algorithm, which decomposes the original sequence into a set of distinct subsequences (patterns). The set of subsequences common to two symbolic sequences (their intersection) is proposed as a measure of similarity between them. The new similarity measure works well both for short sequences (tens of letters) and for very long ones (hundreds of thousands of letters). When applied to nucleotide or protein sequences, it may help to trace the possible evolutionary history of species. As an illustration, the similarity of several sets of nucleotide and amino acid sequences is examined.
[ { "created": "Tue, 9 Aug 2011 16:52:36 GMT", "version": "v1" }, { "created": "Wed, 7 Sep 2011 17:26:08 GMT", "version": "v2" } ]
2011-09-08
[ [ "Kozarzewski", "B.", "" ] ]
A new numerical characterization of symbolic sequences is proposed. The starting point is a partition of the sequence based on the Ke and Tong algorithm, which decomposes the original sequence into a set of distinct subsequences (patterns). The set of subsequences common to two symbolic sequences (their intersection) is proposed as a measure of similarity between them. The new similarity measure works well both for short sequences (tens of letters) and for very long ones (hundreds of thousands of letters). When applied to nucleotide or protein sequences, it may help to trace the possible evolutionary history of species. As an illustration, the similarity of several sets of nucleotide and amino acid sequences is examined.
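The intersection-based similarity measure can be illustrated with a small sketch. Since the details of the Ke and Tong partition are not given in the abstract, a Lempel-Ziv-style greedy parse is substituted here as a stand-in decomposition into distinct patterns, and normalizing by the smaller pattern set is likewise an assumption:

```python
def lz_patterns(seq):
    """Greedy Lempel-Ziv-style parse: scan left to right, emitting each
    shortest substring not seen before as a new distinct pattern."""
    patterns = set()
    start = 0
    for end in range(1, len(seq) + 1):
        chunk = seq[start:end]
        if chunk not in patterns:
            patterns.add(chunk)
            start = end
    if start < len(seq):       # trailing chunk may repeat an old pattern
        patterns.add(seq[start:])
    return patterns

def similarity(a, b):
    """Size of the common pattern set, normalized by the smaller set."""
    pa, pb = lz_patterns(a), lz_patterns(b)
    return len(pa & pb) / min(len(pa), len(pb))
```

The measure is 1.0 for sequences built from the same patterns and 0.0 for sequences over disjoint alphabets; a Jaccard-style normalization by the union would be an equally plausible choice.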
2211.09642
Angelina Brilliantova
Angelina Brilliantova, Hannah Miller, Ivona Bez\'akov\'a
GRASMOS: Graph Signage Model Selection for Gene Regulatory Networks
null
null
null
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Signed networks, i.e., networks with positive and negative edges, commonly arise in various domains from social media to epidemiology. Modeling signed networks has many practical applications, including the creation of synthetic data sets for experiments where obtaining real data is difficult. Influential prior works proposed and studied various graph topology models, as well as the problem of selecting the most fitting model for different application domains. However, these topology models are typically unsigned. In this work, we pose a novel Maximum-Likelihood-based optimization problem for modeling signed networks given their topology and showcase it in the context of gene regulation. Regulatory interactions of genes play a key role in organism development, and when broken can lead to serious organism abnormalities and diseases. Our contributions are threefold: First, we design a new class of signage models for a given topology. Based on the parameter setting, we discuss its biological interpretations for gene regulatory networks (GRNs). Second, we design algorithms computing the Maximum Likelihood -- depending on the parameter setting, our algorithms range from closed-form expressions to MCMC sampling. Third, we evaluate our algorithms on synthetic datasets and real-world large GRNs. Our work can lead to the prediction of unknown gene regulations, the generation of biological hypotheses, and realistic GRN benchmark datasets.
[ { "created": "Thu, 17 Nov 2022 16:41:30 GMT", "version": "v1" } ]
2022-11-18
[ [ "Brilliantova", "Angelina", "" ], [ "Miller", "Hannah", "" ], [ "Bezáková", "Ivona", "" ] ]
Signed networks, i.e., networks with positive and negative edges, commonly arise in various domains from social media to epidemiology. Modeling signed networks has many practical applications, including the creation of synthetic data sets for experiments where obtaining real data is difficult. Influential prior works proposed and studied various graph topology models, as well as the problem of selecting the most fitting model for different application domains. However, these topology models are typically unsigned. In this work, we pose a novel Maximum-Likelihood-based optimization problem for modeling signed networks given their topology and showcase it in the context of gene regulation. Regulatory interactions of genes play a key role in organism development, and when broken can lead to serious organism abnormalities and diseases. Our contributions are threefold: First, we design a new class of signage models for a given topology. Based on the parameter setting, we discuss its biological interpretations for gene regulatory networks (GRNs). Second, we design algorithms computing the Maximum Likelihood -- depending on the parameter setting, our algorithms range from closed-form expressions to MCMC sampling. Third, we evaluate our algorithms on synthetic datasets and real-world large GRNs. Our work can lead to the prediction of unknown gene regulations, the generation of biological hypotheses, and realistic GRN benchmark datasets.
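As a toy instance of the Maximum-Likelihood formulation, consider the simplest hypothetical signage model: each edge of a fixed topology carries a '+' sign independently with probability p. The ML estimate is then closed-form, mirroring the closed-form end of the range described above (the actual GRASMOS model classes are richer and may require MCMC):

```python
import math

def sign_mle(signs):
    """Closed-form ML estimate for an i.i.d. Bernoulli signage model:
    each edge is '+' with probability p and '-' with probability 1 - p.
    Returns (p_hat, log-likelihood at p_hat). Assumes both signs occur."""
    n_pos = sum(1 for s in signs if s == "+")
    p = n_pos / len(signs)
    ll = sum(math.log(p if s == "+" else 1.0 - p) for s in signs)
    return p, ll
```

Model selection would then compare such maximized log-likelihoods across candidate signage models, penalizing parameter count as appropriate.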
1612.01571
Konstantin Blyuss
G. Neofytou, Y.N. Kyrychko, K.B. Blyuss
Time-delayed model of RNA interference
31 pages, 11 figures, accepted for publication to Ecological Complexity
Ecol. Compl. 30, 11-25 (2017)
10.1016/j.ecocom.2016.12.003
null
q-bio.QM nlin.CD q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
RNA interference (RNAi) is a fundamental cellular process that inhibits gene expression through cleavage and destruction of target mRNA. It is responsible for a number of important intracellular functions, from being the first line of immune defence against pathogens to regulating development and morphogenesis. In this paper we consider a mathematical model of RNAi with particular emphasis on time delays associated with two aspects of primed amplification: binding of siRNA to aberrant RNA, and binding of siRNA to mRNA, both of which result in the expanded production of dsRNA responsible for RNA silencing. Analytical and numerical stability analyses are performed to identify regions of stability of different steady states and to determine conditions on parameters that lead to instability. Our results suggest that while the original model without time delays exhibits a bi-stability due to the presence of a hysteresis loop, under the influence of time delays, one of the two steady states with the high (default) or small (silenced) concentration of mRNA can actually lose its stability via a Hopf bifurcation. This leads to the co-existence of a stable steady state and a stable periodic orbit, which has a profound effect on the dynamics of the system.
[ { "created": "Mon, 5 Dec 2016 21:55:52 GMT", "version": "v1" } ]
2017-07-12
[ [ "Neofytou", "G.", "" ], [ "Kyrychko", "Y. N.", "" ], [ "Blyuss", "K. B.", "" ] ]
RNA interference (RNAi) is a fundamental cellular process that inhibits gene expression through cleavage and destruction of target mRNA. It is responsible for a number of important intracellular functions, from being the first line of immune defence against pathogens to regulating development and morphogenesis. In this paper we consider a mathematical model of RNAi with particular emphasis on time delays associated with two aspects of primed amplification: binding of siRNA to aberrant RNA, and binding of siRNA to mRNA, both of which result in the expanded production of dsRNA responsible for RNA silencing. Analytical and numerical stability analyses are performed to identify regions of stability of different steady states and to determine conditions on parameters that lead to instability. Our results suggest that while the original model without time delays exhibits a bi-stability due to the presence of a hysteresis loop, under the influence of time delays, one of the two steady states with the high (default) or small (silenced) concentration of mRNA can actually lose its stability via a Hopf bifurcation. This leads to the co-existence of a stable steady state and a stable periodic orbit, which has a profound effect on the dynamics of the system.
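The RNAi model itself is not reproduced here, but the destabilizing effect of a time delay can be illustrated with the classical Hutchinson (delayed logistic) equation, where the positive steady state loses stability via a Hopf bifurcation once r*tau exceeds pi/2. The parameter values below are illustrative only, not taken from the paper:

```python
def hutchinson(r=1.0, tau=2.0, dt=0.01, t_end=100.0, x0=0.5):
    """Forward-Euler integration of x'(t) = r * x(t) * (1 - x(t - tau)),
    with constant history x = x0 on [-tau, 0]. Returns the trajectory."""
    lag = int(round(tau / dt))
    xs = [x0] * (lag + 1)          # history covering [-tau, 0]
    for _ in range(int(t_end / dt)):
        x, x_lag = xs[-1], xs[-1 - lag]
        xs.append(x + dt * r * x * (1.0 - x_lag))
    return xs
```

With r*tau = 2 > pi/2 the trajectory settles onto sustained oscillations around the steady state x = 1, the delayed analogue of the Hopf-induced periodic orbit discussed in the abstract; with r*tau below pi/2 it instead converges to x = 1.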
2206.11416
Sergiy Koshkin
Sergiy Koshkin, Zachary Zalles, Michael F. Tobin, Nicolas Toumbacaris, Cameron Spiess
Optimal allocation in annual plants with density dependent fitness
29 pages, 12 figures
Theory in Biosciences, 140 (2021) no.2, 177-196
10.1007/s12064-021-00343-9
null
q-bio.PE math.OC
http://creativecommons.org/licenses/by/4.0/
We study optimal two-sector (vegetative and reproductive) allocation models of annual plants in temporally variable environments that incorporate the effects of density-dependent lifetime variability and juvenile mortality in a fitness function whose expected value is maximized. Only special cases of arithmetic and geometric mean maximizers have previously been considered in the literature, and we also allow a wider range of production functions with diminishing returns. The model predicts that the time of maturity is pushed to an earlier date as the correlation between individual lifetimes increases, and while optimal schedules are bang-bang at the extremes, the transition is mediated by schedules where vegetative growth is mixed with reproduction for a wide intermediate range. The mixed growth lasts longer when the production function is less concave, allowing for better leveraging of plant size when generating seeds. Analytic estimates are obtained for the power means that interpolate between the arithmetic and geometric means and correspond to partially correlated lifetime distributions.
[ { "created": "Wed, 22 Jun 2022 23:21:38 GMT", "version": "v1" } ]
2022-06-24
[ [ "Koshkin", "Sergiy", "" ], [ "Zalles", "Zachary", "" ], [ "Tobin", "Michael F.", "" ], [ "Toumbacaris", "Nicolas", "" ], [ "Spiess", "Cameron", "" ] ]
We study optimal two-sector (vegetative and reproductive) allocation models of annual plants in temporally variable environments that incorporate the effects of density-dependent lifetime variability and juvenile mortality in a fitness function whose expected value is maximized. Only special cases of arithmetic and geometric mean maximizers have previously been considered in the literature, and we also allow a wider range of production functions with diminishing returns. The model predicts that the time of maturity is pushed to an earlier date as the correlation between individual lifetimes increases, and while optimal schedules are bang-bang at the extremes, the transition is mediated by schedules where vegetative growth is mixed with reproduction for a wide intermediate range. The mixed growth lasts longer when the production function is less concave, allowing for better leveraging of plant size when generating seeds. Analytic estimates are obtained for the power means that interpolate between the arithmetic and geometric means and correspond to partially correlated lifetime distributions.
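The power means mentioned above interpolate between the geometric mean (the limit p -> 0) and the arithmetic mean (p = 1). A minimal sketch, with the p = 0 case handled by its logarithmic limit:

```python
import math

def power_mean(xs, p):
    """Power (Holder) mean of positive values: p = 1 gives the arithmetic
    mean, and the p -> 0 limit gives the geometric mean."""
    if p == 0:
        return math.exp(sum(math.log(x) for x in xs) / len(xs))
    return (sum(x ** p for x in xs) / len(xs)) ** (1.0 / p)
```

The power mean is non-decreasing in p, so intermediate exponents (here loosely corresponding to partially correlated lifetimes) always fall between the geometric and arithmetic values.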
0706.0760
Jing Qin
Emma Y. Jin, Jing Qin and Christian M. Reidys
Neutral Networks of Sequence to Shape Maps
24 pages,4 figures
null
null
null
q-bio.QM math-ph math.CO math.MP q-bio.BM
null
In this paper we present a novel framework for sequence to shape maps. These combinatorial maps realize exponentially many shapes, and have preimages which contain extended connected subgraphs of diameter n (neutral networks). We prove that all basic properties of RNA folding maps also hold for combinatorial maps. Our construction is as follows: suppose we are given a graph $H$ over the vertex set $\{1,\dots,n\}$ and an alphabet of nucleotides together with a symmetric relation $\mathcal{R}$, implied by base pairing rules. Then the shape of a sequence of length n is the maximal subgraph of $H$ in which all pairs of nucleotides incident to $H$-edges satisfy $\mathcal{R}$. Our main result is to prove the existence of at least $\sqrt{2}^{n-1}$ shapes with extended neutral networks, i.e. shapes that have a preimage with diameter $n$ and a connected component of size at least $(\frac{1+\sqrt{5}}{2})^n+(\frac{1-\sqrt{5}}{2})^n$. Furthermore, we show that there exists a certain subset of shapes which carries a natural graph structure. In this graph any two shapes are connected by a path of shapes with respective neutral networks of distance one. We finally discuss our results and provide a comparison with RNA folding maps.
[ { "created": "Wed, 6 Jun 2007 04:21:12 GMT", "version": "v1" }, { "created": "Sat, 4 Aug 2007 07:53:26 GMT", "version": "v2" } ]
2009-09-29
[ [ "Jin", "Emma Y.", "" ], [ "Qin", "Jing", "" ], [ "Reidys", "Christian M.", "" ] ]
In this paper we present a novel framework for sequence to shape maps. These combinatorial maps realize exponentially many shapes, and have preimages which contain extended connected subgraphs of diameter n (neutral networks). We prove that all basic properties of RNA folding maps also hold for combinatorial maps. Our construction is as follows: suppose we are given a graph $H$ over the vertex set $\{1,\dots,n\}$ and an alphabet of nucleotides together with a symmetric relation $\mathcal{R}$, implied by base pairing rules. Then the shape of a sequence of length n is the maximal subgraph of $H$ in which all pairs of nucleotides incident to $H$-edges satisfy $\mathcal{R}$. Our main result is to prove the existence of at least $\sqrt{2}^{n-1}$ shapes with extended neutral networks, i.e. shapes that have a preimage with diameter $n$ and a connected component of size at least $(\frac{1+\sqrt{5}}{2})^n+(\frac{1-\sqrt{5}}{2})^n$. Furthermore, we show that there exists a certain subset of shapes which carries a natural graph structure. In this graph any two shapes are connected by a path of shapes with respective neutral networks of distance one. We finally discuss our results and provide a comparison with RNA folding maps.
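The component-size bound $(\frac{1+\sqrt{5}}{2})^n+(\frac{1-\sqrt{5}}{2})^n$ is exactly the n-th Lucas number $L_n$, which satisfies $L_n = L_{n-1} + L_{n-2}$ with $L_0 = 2$, $L_1 = 1$. A quick numerical check that the closed form agrees with the recurrence:

```python
import math

def lucas(n):
    """n-th Lucas number via the recurrence L_n = L_{n-1} + L_{n-2}."""
    a, b = 2, 1  # L_0, L_1
    for _ in range(n):
        a, b = b, a + b
    return a

# Binet-style closed form: phi^n + psi^n for the golden ratio conjugates.
phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2
closed = round(phi ** 10 + psi ** 10)
```

Since |psi| < 1, the second term vanishes quickly, so the bound grows like phi^n, i.e. exponentially, consistent with the "extended" neutral networks claimed above.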
1804.02952
J.H. van Hateren
J. H. van Hateren
A theory of consciousness: computation, algorithm, and neurobiological realization
minor revision, 21 pages, 10 figures, 1 table
Biological Cybernetics 113, 357-372 (2019)
10.1007/s00422-019-00803-y
null
q-bio.NC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The most enigmatic aspect of consciousness is the fact that it is felt, as a subjective sensation. The theory proposed here aims to explain this particular aspect. The theory encompasses both the computation that is presumably involved and the way in which that computation may be realized in the brain's neurobiology. It is assumed that the brain makes an internal estimate of an individual's own evolutionary fitness, which can be shown to produce a special, distinct form of causation. Communicating components of the fitness estimate (either for external or internal use) requires inverting them. Such inversion can be performed by the thalamocortical feedback loop in the mammalian brain, if that loop is operating in a switched, dual-stage mode. A first (nonconscious) stage produces forward estimates, whereas the second (conscious) stage inverts those estimates. It is argued that inversion produces another special, distinct form of causation, which is spatially localized and is plausibly sensed as the feeling of consciousness.
[ { "created": "Mon, 9 Apr 2018 13:02:35 GMT", "version": "v1" }, { "created": "Wed, 1 Aug 2018 09:44:18 GMT", "version": "v2" }, { "created": "Mon, 21 Jan 2019 10:39:29 GMT", "version": "v3" }, { "created": "Thu, 20 Jun 2019 09:00:48 GMT", "version": "v4" } ]
2019-07-29
[ [ "van Hateren", "J. H.", "" ] ]
The most enigmatic aspect of consciousness is the fact that it is felt, as a subjective sensation. The theory proposed here aims to explain this particular aspect. The theory encompasses both the computation that is presumably involved and the way in which that computation may be realized in the brain's neurobiology. It is assumed that the brain makes an internal estimate of an individual's own evolutionary fitness, which can be shown to produce a special, distinct form of causation. Communicating components of the fitness estimate (either for external or internal use) requires inverting them. Such inversion can be performed by the thalamocortical feedback loop in the mammalian brain, if that loop is operating in a switched, dual-stage mode. A first (nonconscious) stage produces forward estimates, whereas the second (conscious) stage inverts those estimates. It is argued that inversion produces another special, distinct form of causation, which is spatially localized and is plausibly sensed as the feeling of consciousness.
1207.0689
Ramon Ferrer i Cancho
Ramon Ferrer-i-Cancho, N\'uria Forns, Antoni Hern\'andez-Fern\'andez, Gemma Bel-Enguix and Jaume Baixeries
The challenges of statistical patterns of language: the case of Menzerath's law in genomes
Title changed, abstract and introduction improved and little corrections on the statistical arguments
null
10.1002/cplx.21429
null
q-bio.GN cs.CE physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The importance of statistical patterns of language has been debated for decades. Although Zipf's law is perhaps the most popular case, recently, Menzerath's law has begun to be involved. Menzerath's law manifests in language, music and genomes as a tendency of the mean size of the parts to decrease as the number of parts increases in many situations. This statistical regularity emerges also in the context of genomes, for instance, as a tendency of species with more chromosomes to have a smaller mean chromosome size. It has been argued that the instantiation of this law in genomes is not indicative of any parallel between language and genomes because (a) the law is inevitable and (b) non-coding DNA dominates genomes. Here mathematical, statistical and conceptual challenges of these criticisms are discussed. Two major conclusions are drawn: the law is not inevitable and languages also have a correlate of non-coding DNA. However, the wide range of manifestations of the law in and outside genomes suggests that the striking similarities between non-coding DNA and certain linguistic units could be anecdotal for understanding the recurrence of that statistical law.
[ { "created": "Tue, 3 Jul 2012 14:10:38 GMT", "version": "v1" }, { "created": "Sat, 29 Sep 2012 07:07:19 GMT", "version": "v2" } ]
2014-12-03
[ [ "Ferrer-i-Cancho", "Ramon", "" ], [ "Forns", "Núria", "" ], [ "Hernández-Fernández", "Antoni", "" ], [ "Bel-Enguix", "Gemma", "" ], [ "Baixeries", "Jaume", "" ] ]
The importance of statistical patterns of language has been debated for decades. Although Zipf's law is perhaps the most popular case, recently, Menzerath's law has begun to be involved. Menzerath's law manifests in language, music and genomes as a tendency of the mean size of the parts to decrease as the number of parts increases in many situations. This statistical regularity emerges also in the context of genomes, for instance, as a tendency of species with more chromosomes to have a smaller mean chromosome size. It has been argued that the instantiation of this law in genomes is not indicative of any parallel between language and genomes because (a) the law is inevitable and (b) non-coding DNA dominates genomes. Here mathematical, statistical and conceptual challenges of these criticisms are discussed. Two major conclusions are drawn: the law is not inevitable and languages also have a correlate of non-coding DNA. However, the wide range of manifestations of the law in and outside genomes suggests that the striking similarities between non-coding DNA and certain linguistic units could be anecdotal for understanding the recurrence of that statistical law.
1910.01940
Marc Robinson-Rechavi
Marc Robinson-Rechavi
Molecular evolution and gene function
To be published in book "Phylogenomics" (Nicolas Galtier, C\'eline Scornavacca, Fr\'ed\'eric Delsuc, Eds.)
null
null
null
q-bio.PE q-bio.GN
http://creativecommons.org/licenses/by/4.0/
One of the basic questions of phylogenomics is how gene function evolves, whether among species or inside gene families. In this chapter, we provide a brief overview of the problems associated with defining gene function in a manner which allows comparisons which are both large scale and evolutionarily relevant. The main source of functional data, despite its limitations, is transcriptomics. Functional data provides information on evolutionary mechanisms primarily by showing which functional classes of genes evolve under stronger or weaker purifying or adaptive selection, and on which classes of mutations (e.g., substitutions or duplications). However, the example of the "ortholog conjecture" shows that we are still not at a point where we can confidently study phylogenomically the evolution of gene function at a precise scale.
[ { "created": "Fri, 4 Oct 2019 13:12:53 GMT", "version": "v1" }, { "created": "Mon, 7 Oct 2019 07:50:21 GMT", "version": "v2" } ]
2019-10-08
[ [ "Robinson-Rechavi", "Marc", "" ] ]
One of the basic questions of phylogenomics is how gene function evolves, whether among species or inside gene families. In this chapter, we provide a brief overview of the problems associated with defining gene function in a manner which allows comparisons which are both large scale and evolutionarily relevant. The main source of functional data, despite its limitations, is transcriptomics. Functional data provides information on evolutionary mechanisms primarily by showing which functional classes of genes evolve under stronger or weaker purifying or adaptive selection, and on which classes of mutations (e.g., substitutions or duplications). However, the example of the "ortholog conjecture" shows that we are still not at a point where we can confidently study phylogenomically the evolution of gene function at a precise scale.
1910.02532
Jesse Geerts
Jesse P. Geerts, Kimberly L. Stachenfeld, Neil Burgess
Probabilistic Successor Representations with Kalman Temporal Differences
Conference on Cognitive Computational Neuroscience
null
10.32470/CCN.2019.1323-0
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The effectiveness of Reinforcement Learning (RL) depends on an animal's ability to assign credit for rewards to the appropriate preceding stimuli. One aspect of understanding the neural underpinnings of this process involves understanding what sorts of stimulus representations support generalisation. The Successor Representation (SR), which enforces generalisation over states that predict similar outcomes, has become an increasingly popular model in this space of inquiries. Another dimension of credit assignment involves understanding how animals handle uncertainty about learned associations, using probabilistic methods such as Kalman Temporal Differences (KTD). Combining these approaches, we propose using KTD to estimate a distribution over the SR. KTD-SR captures uncertainty about the estimated SR as well as covariances between different long-term predictions. We show that because of this, KTD-SR exhibits partial transition revaluation as humans do in this experiment without additional replay, unlike the standard TD-SR algorithm. We conclude by discussing future applications of the KTD-SR as a model of the interaction between predictive and probabilistic animal reasoning.
[ { "created": "Sun, 6 Oct 2019 21:32:46 GMT", "version": "v1" } ]
2019-10-08
[ [ "Geerts", "Jesse P.", "" ], [ "Stachenfeld", "Kimberly L.", "" ], [ "Burgess", "Neil", "" ] ]
The effectiveness of Reinforcement Learning (RL) depends on an animal's ability to assign credit for rewards to the appropriate preceding stimuli. One aspect of understanding the neural underpinnings of this process involves understanding what sorts of stimulus representations support generalisation. The Successor Representation (SR), which enforces generalisation over states that predict similar outcomes, has become an increasingly popular model in this space of inquiries. Another dimension of credit assignment involves understanding how animals handle uncertainty about learned associations, using probabilistic methods such as Kalman Temporal Differences (KTD). Combining these approaches, we propose using KTD to estimate a distribution over the SR. KTD-SR captures uncertainty about the estimated SR as well as covariances between different long-term predictions. We show that because of this, KTD-SR exhibits partial transition revaluation as humans do in this experiment without additional replay, unlike the standard TD-SR algorithm. We conclude by discussing future applications of the KTD-SR as a model of the interaction between predictive and probabilistic animal reasoning.
1705.08670
Giovanna Jona Lasinio professor
Giovanna Jona Lasinio, Alessio Pollice, Eric Marcon, Elisa Anna Fano
Assessing the role of the spatial scale in the analysis of lagoon biodiversity. A case-study on the macrobenthic fauna of the Po River Delta
Keywords: Lagoon biodiversity; Macrobenthic fauna; Tsallis entropy; Biodiversity partitioning; Mixed effects models
Ecological Indicators, Volume 80, September 2017, Pages 303-315, ISSN 1470-160X, (http://www.sciencedirect.com/science/article/pii/S1470160X17302923)
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The analysis of benthic assemblages is a valuable tool to describe the ecological status of transitional water ecosystems, but species are extremely sensitive and respond to both microhabitat and seasonal differences. The identification of changes in the composition of the macrobenthic community in specific microhabitats can then be used as an "early warning" for environmental changes which may affect the economic and ecological importance of lagoons, through their provision of Ecosystem Services. From a conservational point of view, the appropriate definition of the spatial aggregation level of microhabitats or local communities is of crucial importance. The main objective of this work is to assess the role of the spatial scale in the analysis of lagoon biodiversity. First, we analyze the variation in the sample coverage for alternative aggregations of the monitoring stations in three lagoons of the Po River Delta. Then, we analyze the variation of a class of entropy indices by mixed effects models, properly accounting for the fixed effects of biotic and abiotic factors and random effects ruled by nested sources of variability corresponding to alternative definitions of local communities. Finally, we address biodiversity partitioning by a generalized diversity measure, namely the Tsallis entropy, and for alternative definitions of the local communities. The main results obtained by the proposed statistical protocol are presented, discussed and framed in the ecological context.
[ { "created": "Wed, 24 May 2017 09:36:28 GMT", "version": "v1" } ]
2017-05-25
[ [ "Lasinio", "Giovanna Jona", "" ], [ "Pollice", "Alessio", "" ], [ "Marcon", "Eric", "" ], [ "Fano", "Elisa Anna", "" ] ]
The analysis of benthic assemblages is a valuable tool to describe the ecological status of transitional water ecosystems, but species are extremely sensitive and respond to both microhabitat and seasonal differences. The identification of changes in the composition of the macrobenthic community in specific microhabitats can then be used as an "early warning" for environmental changes which may affect the economic and ecological importance of lagoons, through their provision of Ecosystem Services. From a conservational point of view, the appropriate definition of the spatial aggregation level of microhabitats or local communities is of crucial importance. The main objective of this work is to assess the role of the spatial scale in the analysis of lagoon biodiversity. First, we analyze the variation in the sample coverage for alternative aggregations of the monitoring stations in three lagoons of the Po River Delta. Then, we analyze the variation of a class of entropy indices by mixed effects models, properly accounting for the fixed effects of biotic and abiotic factors and random effects ruled by nested sources of variability corresponding to alternative definitions of local communities. Finally, we address biodiversity partitioning by a generalized diversity measure, namely the Tsallis entropy, and for alternative definitions of the local communities. The main results obtained by the proposed statistical protocol are presented, discussed and framed in the ecological context.
1308.2145
Sergei Maslov
Tin Yau Pang and Sergei Maslov
Universal distribution of component frequencies in biological and technological systems
null
Proc Natl Acad Sci USA (PNAS) 110 (2013) 6235-6239
10.1073/pnas.1217795110
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bacterial genomes and large-scale computer software projects both consist of a large number of components (genes or software packages) connected via a network of mutual dependencies. Components can be easily added or removed from individual systems and their usage frequencies vary over many orders of magnitude. We study this frequency distribution in genomes of ~500 bacterial species and on over 2 million Linux computers and find that in both cases it is described by the same scale-free power law distribution with an additional peak near the tail of the distribution corresponding to nearly universal components. We argue that this is a general property of any modular system with a multi-layered dependency network. We demonstrate that the frequency of a component is positively correlated with its dependency degree given by the total number of upstream components whose operation directly or indirectly depends on the selected component. The observed frequency/dependency degree distributions are reproduced in a simple mathematically tractable model introduced and analyzed in this study.
[ { "created": "Fri, 9 Aug 2013 15:01:03 GMT", "version": "v1" } ]
2013-08-12
[ [ "Pang", "Tin Yau", "" ], [ "Maslov", "Sergei", "" ] ]
Bacterial genomes and large-scale computer software projects both consist of a large number of components (genes or software packages) connected via a network of mutual dependencies. Components can be easily added or removed from individual systems and their usage frequencies vary over many orders of magnitude. We study this frequency distribution in genomes of ~500 bacterial species and on over 2 million Linux computers and find that in both cases it is described by the same scale-free power law distribution with an additional peak near the tail of the distribution corresponding to nearly universal components. We argue that this is a general property of any modular system with a multi-layered dependency network. We demonstrate that the frequency of a component is positively correlated with its dependency degree given by the total number of upstream components whose operation directly or indirectly depends on the selected component. The observed frequency/dependency degree distributions are reproduced in a simple mathematically tractable model introduced and analyzed in this study.
2008.01202
Chris Groendyke
Chris Groendyke and Adam Combs
Modifying the Network-Based Stochastic SEIR Model to Account for Quarantine
26 pages, 7 figures
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, we present a modification to the network-based stochastic SEIR epidemic model which allows for modifications to the underlying contact network to account for the effects of quarantine. We also discuss the changes needed to the model to incorporate situations where some proportion of the individuals who are infected remain asymptomatic throughout the course of the disease. Using a generic network model where every potential contact exists with the same common probability, we conduct a simulation study in which we vary four key model parameters (transmission rate, probability of remaining asymptomatic, and the mean lengths of time spent in the Exposed and Infectious disease states) and examine the resulting impacts on various metrics of epidemic severity, including the effective reproduction number. We find that the mean length of time spent in the Infectious state and the transmission rate are the most important model parameters, while the mean length of time spent in the Exposed state and the probability of remaining asymptomatic are less important.
[ { "created": "Mon, 3 Aug 2020 21:19:23 GMT", "version": "v1" } ]
2020-08-05
[ [ "Groendyke", "Chris", "" ], [ "Combs", "Adam", "" ] ]
In this article, we present a modification to the network-based stochastic SEIR epidemic model which allows for modifications to the underlying contact network to account for the effects of quarantine. We also discuss the changes needed to the model to incorporate situations where some proportion of the individuals who are infected remain asymptomatic throughout the course of the disease. Using a generic network model where every potential contact exists with the same common probability, we conduct a simulation study in which we vary four key model parameters (transmission rate, probability of remaining asymptomatic, and the mean lengths of time spent in the Exposed and Infectious disease states) and examine the resulting impacts on various metrics of epidemic severity, including the effective reproduction number. We find that the mean length of time spent in the Infectious state and the transmission rate are the most important model parameters, while the mean length of time spent in the Exposed state and the probability of remaining asymptomatic are less important.
2208.03726
Diederik Aerts
Diederik Aerts and Jonito Aerts Argu\"elles
Human Perception as a Phenomenon of Quantization
28 pages, 8 figures
Entropy 24, 1207 (2022)
10.3390/e24091207
null
q-bio.NC cs.CL quant-ph
http://creativecommons.org/licenses/by/4.0/
For two decades, the formalism of quantum mechanics has been successfully used to describe human decision processes, situations of heuristic reasoning, and the contextuality of concepts and their combinations. The phenomenon of 'categorical perception' has put us on track to find a possible deeper cause of the presence of this quantum structure in human cognition. Thus, we show that in an archetype of human perception consisting of the reconciliation of a bottom up stimulus with a top down cognitive expectation pattern, there arises the typical warping of categorical perception, where groups of stimuli clump together to form quanta, which move away from each other and lead to a discretization of a dimension. The individual concepts, which are these quanta, can be modeled by a quantum prototype theory with the square of the absolute value of a corresponding Schr\"odinger wave function as the fuzzy prototype structure, and the superposition of two such wave functions accounts for the interference pattern that occurs when these concepts are combined. Using a simple quantum measurement model, we analyze this archetype of human perception, provide an overview of the experimental evidence base for categorical perception with the phenomenon of warping leading to quantization, and illustrate our analyses with two examples worked out in detail.
[ { "created": "Sun, 7 Aug 2022 13:59:23 GMT", "version": "v1" }, { "created": "Sat, 8 Oct 2022 21:24:47 GMT", "version": "v2" } ]
2023-02-27
[ [ "Aerts", "Diederik", "" ], [ "Arguëlles", "Jonito Aerts", "" ] ]
For two decades, the formalism of quantum mechanics has been successfully used to describe human decision processes, situations of heuristic reasoning, and the contextuality of concepts and their combinations. The phenomenon of 'categorical perception' has put us on track to find a possible deeper cause of the presence of this quantum structure in human cognition. Thus, we show that in an archetype of human perception consisting of the reconciliation of a bottom up stimulus with a top down cognitive expectation pattern, there arises the typical warping of categorical perception, where groups of stimuli clump together to form quanta, which move away from each other and lead to a discretization of a dimension. The individual concepts, which are these quanta, can be modeled by a quantum prototype theory with the square of the absolute value of a corresponding Schr\"odinger wave function as the fuzzy prototype structure, and the superposition of two such wave functions accounts for the interference pattern that occurs when these concepts are combined. Using a simple quantum measurement model, we analyze this archetype of human perception, provide an overview of the experimental evidence base for categorical perception with the phenomenon of warping leading to quantization, and illustrate our analyses with two examples worked out in detail.
2307.08828
Abdullah Alqarni
Abdullah Alqarni, Wei Wen, Ben C.P. Lam, Nicole Kochan, Henry Brodaty, Perminder S. Sachdev, Jiyang Jiang
Sex Differences in 6-Year Progression of White Matter Hyperintensities in Non-Demented Older Adults: Sydney Memory and Ageing Study
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Objectives: To examine sex differences in the associations between vascular risk factors and 6-year changes in the volume of white matter hyperintensities (WMH), and between changes in WMH volumes and changes in cognitive performance, in a cohort of non-demented older adults. Methods: WMH volumes at 3 time-points (baseline, and 2- and 6-year follow-up) were automatically quantified in participants of the Sydney Memory and Ageing Study (N = 605; age range = 70-92 years; 54.78% female). Linear mixed models were applied to examine the effects of vascular risk factors and cognitive consequences of the progression of WMH, as well as the sex moderation effects in the associations. Results: Total (TWMH), periventricular (PVWMH), and deep (DWMH) WMH volumes increased by 9.47%, 7.70%, and 11.78% per year, respectively. No sex differences were found in WMH progression rates. After Bonferroni correction, increases in PVWMH volumes over time were associated with decline in global cognition, especially in visuospatial and memory domains. Men with more increases in PVWMH volumes over time had greater declines in visuospatial abilities. Moreover, higher average TWMH volumes across time-points were associated with poorer average performance in processing speed and executive function domains across time. Higher average PVWMH volumes across time-points were also associated with worse average performance in the executive function domain over time, among women but not men. Conclusion: The findings highlighted sex differences in the associations between WMH progression and cognitive decline over time, suggesting sex-specific strategies in managing WMH accumulations in ageing. Keywords: Cerebral small vessel disease, white matter hyperintensities, sex differences, brain ageing, cognitive decline.
[ { "created": "Mon, 17 Jul 2023 20:43:55 GMT", "version": "v1" } ]
2023-07-19
[ [ "Alqarni", "Abdullah", "" ], [ "Wen", "Wei", "" ], [ "Lam", "Ben C. P.", "" ], [ "Kochan", "Nicole", "" ], [ "Brodaty", "Henry", "" ], [ "Sachdev", "Perminder S.", "" ], [ "Jiang", "Jiyang", "" ] ]
Objectives: To examine sex differences in the associations between vascular risk factors and 6-year changes in the volume of white matter hyperintensities (WMH), and between changes in WMH volumes and changes in cognitive performance, in a cohort of non-demented older adults. Methods: WMH volumes at 3 time-points (baseline, and 2- and 6-year follow-up) were automatically quantified in participants of the Sydney Memory and Ageing Study (N = 605; age range = 70-92 years; 54.78% female). Linear mixed models were applied to examine the effects of vascular risk factors and cognitive consequences of the progression of WMH, as well as the sex moderation effects in the associations. Results: Total (TWMH), periventricular (PVWMH), and deep (DWMH) WMH volumes increased by 9.47%, 7.70%, and 11.78% per year, respectively. No sex differences were found in WMH progression rates. After Bonferroni correction, increases in PVWMH volumes over time were associated with decline in global cognition, especially in visuospatial and memory domains. Men with more increases in PVWMH volumes over time had greater declines in visuospatial abilities. Moreover, higher average TWMH volumes across time-points were associated with poorer average performance in processing speed and executive function domains across time. Higher average PVWMH volumes across time-points were also associated with worse average performance in the executive function domain over time, among women but not men. Conclusion: The findings highlighted sex differences in the associations between WMH progression and cognitive decline over time, suggesting sex-specific strategies in managing WMH accumulations in ageing. Keywords: Cerebral small vessel disease, white matter hyperintensities, sex differences, brain ageing, cognitive decline.
2302.03752
Markus Adamek
Markus Adamek (1 and 2) and Alexander P Rockhill (4) and Peter Brunner (1 and 2) and Dora Hermes (3) ((1) Department of Neurosurgery, Washington University in Saint Louis MO USA, (2) National Center for Adaptive Neurotechnologies Albany NY USA, (3) Department of Physiology & Biomedical Engineering Mayo Clinic Rochester MN USA, (4) University of Oregon Department of Human Physiology Eugene OR USA)
Dynamic Visualization of Gyral and Sulcal Stereoelectroencephalographic contacts in Humans
submitted as a contributed paper to EMBC 2023
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Stereoelectroencephalography (SEEG) is a neurosurgical method to survey electrophysiological activity within the brain to treat disorders such as epilepsy. In this stereotactic approach, leads are implanted through straight trajectories to survey both cortical and sub-cortical activity. Visualizing the recorded locations covering sulcal and gyral activity while staying true to the cortical architecture is challenging due to the folded, three-dimensional nature of the human cortex. To overcome this challenge, we developed a novel visualization concept, allowing investigators to dynamically morph between the subjects' cortical reconstruction and an inflated cortex representation. This inflated view, in which gyri and sulci are viewed on a smooth surface, allows better visualization of electrodes buried within the sulcus while staying true to the underlying cortical architecture.
[ { "created": "Tue, 7 Feb 2023 20:57:15 GMT", "version": "v1" } ]
2023-02-09
[ [ "Adamek", "Markus", "", "1 and 2" ], [ "Rockhill", "Alexander P", "", "1 and 2" ], [ "Brunner", "Peter", "", "1 and 2" ], [ "Hermes", "Dora", "" ] ]
Stereoelectroencephalography (SEEG) is a neurosurgical method to survey electrophysiological activity within the brain to treat disorders such as epilepsy. In this stereotactic approach, leads are implanted through straight trajectories to survey both cortical and sub-cortical activity. Visualizing the recorded locations covering sulcal and gyral activity while staying true to the cortical architecture is challenging due to the folded, three-dimensional nature of the human cortex. To overcome this challenge, we developed a novel visualization concept, allowing investigators to dynamically morph between the subjects' cortical reconstruction and an inflated cortex representation. This inflated view, in which gyri and sulci are viewed on a smooth surface, allows better visualization of electrodes buried within the sulcus while staying true to the underlying cortical architecture.
1406.6641
Augusto Gonzalez
Augusto Gonzalez
Mutagenesis and Background Neutron Radiation
submitted to Revista Cubana de Fisica
Rev. Cub. Fis. 31 (2014) 71
null
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We suggest a possible correlation between the ionization events caused by the background neutron radiation and the experimental data on mutations with damage in the DNA repair mechanism, coming from the Long-Term Evolution Experiment in E. coli populations.
[ { "created": "Wed, 25 Jun 2014 17:03:42 GMT", "version": "v1" }, { "created": "Fri, 27 Jun 2014 13:30:51 GMT", "version": "v2" } ]
2015-04-28
[ [ "Gonzalez", "Augusto", "" ] ]
We suggest a possible correlation between the ionization events caused by the background neutron radiation and the experimental data on mutations with damage in the DNA repair mechanism, coming from the Long-Term Evolution Experiment in E. coli populations.
1802.02962
Michael Sadovsky
Michael Sadovsky, Maria Senashova, Andrew Malyshev
Eight-cluster structure of chloroplast genomes differs from similar one observed for bacteria
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previously, a seven-cluster pattern claiming to be a universal one in bacterial genomes has been reported. Keeping in mind the most popular theory of chloroplast origin, we checked whether a similar pattern is observed in chloroplast genomes. Surprisingly, an eight-cluster structure has been found for chloroplasts. The pattern observed for chloroplasts differs rather significantly from the bacterial one, and from that observed for cyanobacteria. The structure is provided by clustering of fragments of equal length isolated within a genome, so that each fragment is converted into a triplet frequency dictionary with non-overlapping triplets and no gaps in frame tiling. The points in 63-dimensional space were clustered using the elastic map technique. The eighth cluster found in chloroplasts comprises the fragments of a genome bearing tRNA genes and exhibiting excessively high $\mathsf{GC}$-content, in comparison to the entire genome.
[ { "created": "Thu, 8 Feb 2018 16:49:26 GMT", "version": "v1" } ]
2018-02-09
[ [ "Sadovsky", "Michael", "" ], [ "Senashova", "Maria", "" ], [ "Malyshev", "Andrew", "" ] ]
Previously, a seven-cluster pattern claiming to be a universal one in bacterial genomes has been reported. Keeping in mind the most popular theory of chloroplast origin, we checked whether a similar pattern is observed in chloroplast genomes. Surprisingly, an eight-cluster structure has been found for chloroplasts. The pattern observed for chloroplasts differs rather significantly from the bacterial one, and from that observed for cyanobacteria. The structure is provided by clustering of fragments of equal length isolated within a genome, so that each fragment is converted into a triplet frequency dictionary with non-overlapping triplets and no gaps in frame tiling. The points in 63-dimensional space were clustered using the elastic map technique. The eighth cluster found in chloroplasts comprises the fragments of a genome bearing tRNA genes and exhibiting excessively high $\mathsf{GC}$-content, in comparison to the entire genome.
2401.11192
Jos\'e Manuel Peula-Garc\'ia
Paola Sanchez-Moreno, Juan Luis Ortega-Vinuesa, Jose Manuel Peula-Garcia, Juan Antonio Marchal and Houria Boulaiz
Smart Drug-Delivery Systems for Cancer Nanotherapy
Preprint version, 25 pages, 7 figures, 3 tables. Authors thank to Bentham Science the posibility of deposit the ACCEPTED VERSION of the peer-reviewed article after 12 months of publication on journal web site on arXiv repository. The published manuscript is available at EurekaSelect via https://www.eurekaselect.com/openurl/content.php?genre=article&doi=10.2174/1389450117666160527142544
Current Drug Targets, 2018, 19(4), 339
10.2174/1389450117666160527142544
null
q-bio.TO cond-mat.mes-hall physics.app-ph physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Despite all the advances achieved in the field of tumor-biology research, in most cases conventional therapies including chemotherapy are still the leading choices. The main disadvantage of these treatments, in addition to the low solubility of many antitumor drugs, is their lack of specificity, which explains the frequent occurrence of serious side effects due to nonspecific drug uptake by healthy cells. Progress in nanotechnology and its application in medicine have provided new opportunities and different smart systems. Such systems can improve the intracellular delivery of the drugs due to their multifunctionality and targeting potential. The purpose of this manuscript is to review and analyze the recent progress made in nanotherapy applied to cancer treatment. First, we provide a global overview of cancer and different smart nanoparticles currently used in oncology. Then, we analyze in detail the development of drug-delivery strategies in cancer therapy, focusing mainly on the intravenously administered smart nanoparticles with protein corona to avoid immune-system clearance. Finally, we discuss the challenges, clinical trials, and future directions of the nanoparticle-based therapy in cancer.
[ { "created": "Sat, 20 Jan 2024 10:03:56 GMT", "version": "v1" } ]
2024-01-25
[ [ "Sanchez-Moreno", "Paola", "" ], [ "Ortega-Vinuesa", "Juan Luis", "" ], [ "Peula-Garcia", "Jose Manuel", "" ], [ "Marchal", "Juan Antonio", "" ], [ "Boulaiz", "Houria", "" ] ]
Despite all the advances achieved in the field of tumor-biology research, in most cases conventional therapies including chemotherapy are still the leading choices. The main disadvantage of these treatments, in addition to the low solubility of many antitumor drugs, is their lack of specificity, which explains the frequent occurrence of serious side effects due to nonspecific drug uptake by healthy cells. Progress in nanotechnology and its application in medicine have provided new opportunities and different smart systems. Such systems can improve the intracellular delivery of the drugs due to their multifunctionality and targeting potential. The purpose of this manuscript is to review and analyze the recent progress made in nanotherapy applied to cancer treatment. First, we provide a global overview of cancer and different smart nanoparticles currently used in oncology. Then, we analyze in detail the development of drug-delivery strategies in cancer therapy, focusing mainly on the intravenously administered smart nanoparticles with protein corona to avoid immune-system clearance. Finally, we discuss the challenges, clinical trials, and future directions of the nanoparticle-based therapy in cancer.
1204.5421
Luis Enrique Correa da Rocha Mr
Luis Enrique Correa Rocha, Adeline Decuyper, Vincent D Blondel
Epidemics on a stochastic model of temporal network
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contacts between individuals serve as pathways along which infections may propagate. These contact patterns can be represented by network structures. Static structures have been the common modeling paradigm, but recent results suggest that temporal structures play a different role in regulating the spread of infections or infection-like dynamics. On temporal networks, a vertex is active only at certain moments and inactive otherwise, so that a contact is not continuously available. In several empirical networks, the time between two consecutive vertex-activation events typically follows a heterogeneous distribution (e.g. bursts). In this chapter, we present a simple and intuitive stochastic model of a temporal network and investigate how an epidemic co-evolves with the temporal structures, focusing on the growth dynamics of the epidemic. The model assumes no underlying topological structure and is constrained only by the time between two consecutive vertex-activation events. The main observation is that the speed of infection spread differs between heterogeneous and homogeneous temporal patterns, but the differences depend on the stage of the epidemic. In comparison to the homogeneous scenario, the power-law case results in faster growth in the beginning but turns out to be slower after a certain time, taking several time steps to reach the whole network.
[ { "created": "Tue, 24 Apr 2012 16:11:55 GMT", "version": "v1" } ]
2012-04-25
[ [ "Rocha", "Luis Enrique Correa", "" ], [ "Decuyper", "Adeline", "" ], [ "Blondel", "Vincent D", "" ] ]
Contacts between individuals serve as pathways along which infections may propagate. These contact patterns can be represented by network structures. Static structures have been the common modeling paradigm, but recent results suggest that temporal structures play a different role in regulating the spread of infections or infection-like dynamics. On temporal networks, a vertex is active only at certain moments and inactive otherwise, so that a contact is not continuously available. In several empirical networks, the time between two consecutive vertex-activation events typically follows a heterogeneous distribution (e.g. bursts). In this chapter, we present a simple and intuitive stochastic model of a temporal network and investigate how an epidemic co-evolves with the temporal structures, focusing on the growth dynamics of the epidemic. The model assumes no underlying topological structure and is constrained only by the time between two consecutive vertex-activation events. The main observation is that the speed of infection spread differs between heterogeneous and homogeneous temporal patterns, but the differences depend on the stage of the epidemic. In comparison to the homogeneous scenario, the power-law case results in faster growth in the beginning but turns out to be slower after a certain time, taking several time steps to reach the whole network.
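A minimal version of such a model can be sketched as follows. This is an illustrative SI simulation under assumed inter-event samplers, not the authors' exact model: each vertex carries a waiting clock drawn from an inter-event time distribution, and on firing it contacts a uniformly random partner (no underlying topology). Swapping a short uniform sampler for a heavy-tailed one contrasts homogeneous with bursty activity.

```python
import random

def simulate_si(n, steps, interevent, seed=0):
    """SI spread on a temporal network with no underlying topology.

    When a vertex's waiting clock reaches zero it becomes active,
    contacts a uniformly random other vertex, and infection passes if
    exactly one endpoint is infected; the clock is then redrawn from
    `interevent`.  Returns the infected count after each step.
    """
    rng = random.Random(seed)
    clocks = [interevent(rng) for _ in range(n)]
    infected = [False] * n
    infected[0] = True  # a single initial seed
    trajectory = []
    for _ in range(steps):
        for v in range(n):
            clocks[v] -= 1
            if clocks[v] <= 0:
                u = rng.randrange(n - 1)
                u += u >= v  # uniform random partner != v
                if infected[v] != infected[u]:
                    infected[v] = infected[u] = True
                clocks[v] = interevent(rng)
        trajectory.append(sum(infected))
    return trajectory

homogeneous = lambda rng: 1 + rng.randrange(3)             # short uniform waits
bursty = lambda rng: min(int(rng.paretovariate(1.5)), 50)  # heavy-tailed waits

traj = simulate_si(50, 100, homogeneous)
```

Comparing `simulate_si(50, 100, homogeneous)` against `simulate_si(50, 100, bursty)` over many seeds reproduces the qualitative stage-dependent difference the chapter discusses.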
1012.1974
Thomas House
Thomas House and Matt J Keeling
Epidemic prediction and control in clustered populations
13 pages, 3 figures, to appear in the Journal of Theoretical Biology
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been much recent interest in modelling epidemics on networks, particularly in the presence of substantial clustering. Here, we develop pairwise methods to answer questions that are often addressed using epidemic models, in particular: on the basis of potential observations early in an outbreak, what can be predicted about the epidemic outcomes and the levels of intervention necessary to control the epidemic? We find that while some results are independent of the level of clustering (early growth predicts the level of `leaky' vaccine needed for control and peak time, while the basic reproductive ratio predicts the random vaccination threshold) the relationship between other quantities is very sensitive to clustering.
[ { "created": "Thu, 9 Dec 2010 10:59:24 GMT", "version": "v1" } ]
2010-12-10
[ [ "House", "Thomas", "" ], [ "Keeling", "Matt J", "" ] ]
There has been much recent interest in modelling epidemics on networks, particularly in the presence of substantial clustering. Here, we develop pairwise methods to answer questions that are often addressed using epidemic models, in particular: on the basis of potential observations early in an outbreak, what can be predicted about the epidemic outcomes and the levels of intervention necessary to control the epidemic? We find that while some results are independent of the level of clustering (early growth predicts the level of `leaky' vaccine needed for control and peak time, while the basic reproductive ratio predicts the random vaccination threshold) the relationship between other quantities is very sensitive to clustering.
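The two control quantities this abstract refers to can be written down directly in the well-mixed baseline. These are the standard homogeneous-mixing formulas, used here only as the reference point; the paper's contribution is how clustering perturbs such relationships, which this sketch does not capture.

```python
def random_vaccination_threshold(r0):
    """Critical fraction to immunize under random (all-or-nothing)
    vaccination in a well-mixed population: p_c = 1 - 1/R0
    (0 when R0 <= 1)."""
    return max(0.0, 1.0 - 1.0 / r0)

def leaky_vaccine_coverage(r0, efficacy):
    """Coverage needed with a 'leaky' vaccine that multiplies
    susceptibility by (1 - efficacy); returns None when control is
    unattainable at full coverage."""
    p = random_vaccination_threshold(r0) / efficacy
    return p if p <= 1.0 else None
```

For example, `random_vaccination_threshold(2.0)` gives 0.5, while a leaky vaccine of efficacy 0.8 against the same outbreak requires coverage 0.625.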
1709.09125
Lee Worden
Lee Worden
Regulation of community functional composition across taxonomic variation by resource-consumer dynamics
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-throughput sequencing techniques such as metagenomic and metatranscriptomic technologies allow cataloging of functional characteristics of microbial community members as well as their taxonomic identity. Such studies have found that a community's composition in terms of ecologically relevant functional traits or guilds can be conserved more strictly across varying settings than taxonomic composition is. I use a standard ecological resource-consumer model to examine the dynamics of traits relevant to resource consumption, and analyze determinants of functional composition. This model demonstrates that interaction with essential resources can regulate the community-wide abundance of ecologically relevant traits, keeping them at consistent levels despite large changes in the abundances of the species housing those traits in response to changes in the environment, and across variation between communities in species composition. Functional composition is shown to be able to track differences in environmental conditions faithfully across differences in community composition. Mathematical conditions on consumers' vital rates and functional responses sufficient to produce conservation of functional community structure across taxonomic differences are presented.
[ { "created": "Tue, 26 Sep 2017 16:50:40 GMT", "version": "v1" }, { "created": "Fri, 21 Jun 2019 09:15:47 GMT", "version": "v2" } ]
2019-06-24
[ [ "Worden", "Lee", "" ] ]
High-throughput sequencing techniques such as metagenomic and metatranscriptomic technologies allow cataloging of functional characteristics of microbial community members as well as their taxonomic identity. Such studies have found that a community's composition in terms of ecologically relevant functional traits or guilds can be conserved more strictly across varying settings than taxonomic composition is. I use a standard ecological resource-consumer model to examine the dynamics of traits relevant to resource consumption, and analyze determinants of functional composition. This model demonstrates that interaction with essential resources can regulate the community-wide abundance of ecologically relevant traits, keeping them at consistent levels despite large changes in the abundances of the species housing those traits in response to changes in the environment, and across variation between communities in species composition. Functional composition is shown to be able to track differences in environmental conditions faithfully across differences in community composition. Mathematical conditions on consumers' vital rates and functional responses sufficient to produce conservation of functional community structure across taxonomic differences are presented.
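The regulation mechanism described here can be illustrated with a toy simulation of a standard resource-consumer model. This is a sketch with assumed parameter values, not the paper's analysis: two species carry the same consumption trait, and the community-wide trait abundance converges to the same level regardless of the initial species composition.

```python
def consumer_resource(c, n0, steps=40000, dt=0.01, r=1.0, K=10.0, b=1.0, d=1.0):
    """Euler integration of a standard resource-consumer model:
        dR/dt   = r R (1 - R/K) - R * sum_i c_i N_i
        dN_i/dt = N_i (b c_i R - d)
    with trait values c, initial abundances n0, and illustrative
    (assumed) parameter values."""
    R, N = K / 2, list(n0)
    for _ in range(steps):
        consumption = sum(ci * Ni for ci, Ni in zip(c, N))
        dR = r * R * (1.0 - R / K) - R * consumption
        N = [Ni + dt * Ni * (b * ci * R - d) for ci, Ni in zip(c, N)]
        R += dt * dR
    return R, N

# Two communities with different species composition but the same trait.
R1, N1 = consumer_resource([0.5, 0.5], [0.7, 0.9])
R2, N2 = consumer_resource([0.5, 0.5], [1.2, 0.2])
```

Both runs settle to the same resource level and the same community-wide trait abundance `sum(c_i * N_i)` even though the individual species abundances differ, mirroring the paper's point that functional composition can be conserved across taxonomic variation.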
2406.08521
Aakash Tripathi
Asim Waqas, Aakash Tripathi, Paul Stewart, Mia Naeini, Ghulam Rasool
Embedding-based Multimodal Learning on Pan-Squamous Cell Carcinomas for Improved Survival Outcomes
null
null
null
null
q-bio.CB cs.LG
http://creativecommons.org/licenses/by/4.0/
Cancer clinics capture disease data at various scales, from genetic to organ level. Current bioinformatic methods struggle to handle the heterogeneous nature of this data, especially with missing modalities. We propose PARADIGM, a Graph Neural Network (GNN) framework that learns from multimodal, heterogeneous datasets to improve clinical outcome prediction. PARADIGM generates embeddings from multi-resolution data using foundation models, aggregates them into patient-level representations, fuses them into a unified graph, and enhances performance for tasks like survival analysis. We train GNNs on pan-Squamous Cell Carcinomas and validate our approach on Moffitt Cancer Center lung SCC data. Multimodal GNN outperforms other models in patient survival prediction. Converging individual data modalities across varying scales provides a more insightful disease view. Our solution aims to understand the patient's circumstances comprehensively, offering insights on heterogeneous data integration and the benefits of converging maximum data views.
[ { "created": "Tue, 11 Jun 2024 22:19:14 GMT", "version": "v1" } ]
2024-06-14
[ [ "Waqas", "Asim", "" ], [ "Tripathi", "Aakash", "" ], [ "Stewart", "Paul", "" ], [ "Naeini", "Mia", "" ], [ "Rasool", "Ghulam", "" ] ]
Cancer clinics capture disease data at various scales, from genetic to organ level. Current bioinformatic methods struggle to handle the heterogeneous nature of this data, especially with missing modalities. We propose PARADIGM, a Graph Neural Network (GNN) framework that learns from multimodal, heterogeneous datasets to improve clinical outcome prediction. PARADIGM generates embeddings from multi-resolution data using foundation models, aggregates them into patient-level representations, fuses them into a unified graph, and enhances performance for tasks like survival analysis. We train GNNs on pan-Squamous Cell Carcinomas and validate our approach on Moffitt Cancer Center lung SCC data. Multimodal GNN outperforms other models in patient survival prediction. Converging individual data modalities across varying scales provides a more insightful disease view. Our solution aims to understand the patient's circumstances comprehensively, offering insights on heterogeneous data integration and the benefits of converging maximum data views.
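The aggregation of modality embeddings into patient-level representations can be sketched minimally. This is a toy stand-in, not the PARADIGM implementation: mean pooling over whichever modalities are present handles missing modalities gracefully, before the representations would be fused into a graph for the GNN.

```python
def patient_embedding(modality_embeddings, dim):
    """Fuse per-modality embedding vectors into one patient-level vector
    by mean pooling, skipping missing modalities (None).  A toy stand-in
    for the aggregation step; the actual pipeline uses foundation-model
    embeddings and feeds the result into a graph neural network."""
    present = [e for e in modality_embeddings.values() if e is not None]
    if not present:
        return [0.0] * dim
    return [sum(vals) / len(present) for vals in zip(*present)]

# A patient missing the pathology modality still gets a representation.
vec = patient_embedding(
    {"rna": [1.0, 2.0], "pathology": None, "clinical": [3.0, 4.0]}, dim=2
)
```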
1601.00979
Mehrashk Meidani
Mehrashk Meidani
Fracture toughness of leaves: Overview and observations
null
null
null
null
q-bio.TO cond-mat.mtrl-sci physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One might ask why it is important to know the mechanism of fracture in leaves when Mother Nature is doing her job perfectly. I could list the following reasons to address that question: (a) Leaves are natural composite structures; during millions of years of evolution they have adapted to their surrounding environment and their design is optimized, so the knowledge gained from studying the fracture mechanism of leaves can be applied to the development of new composite materials. (b) Other soft tissues, such as skin and blood vessels, have a similar structure at some scales and may share the same fracture mechanism; the knowledge gained can also be applied to these materials. (c) The global need for food is skyrocketing. A few countries, including the United States, have all the potential (i.e. water, soil, sunlight, and manpower) to play a major role in the future world food-supply market. If we can increase the output of our farms and forests by protecting them against herbivores [Beck 1965], pathogens [Campbell et al. 1980], and other physical damage, our share of the future market will be higher. It will also strengthen our national food security, because we will not be dependent on food imports. We do not yet know how much of our farm and forest output could be saved if we could genetically design tougher materials, but the whole idea is worth studying.
[ { "created": "Mon, 21 Dec 2015 20:40:19 GMT", "version": "v1" }, { "created": "Thu, 7 Jan 2016 04:40:47 GMT", "version": "v2" } ]
2016-01-08
[ [ "Meidani", "Mehrashk", "" ] ]
One might ask why it is important to know the mechanism of fracture in leaves when Mother Nature is doing her job perfectly. I could list the following reasons to address that question: (a) Leaves are natural composite structures; during millions of years of evolution they have adapted to their surrounding environment and their design is optimized, so the knowledge gained from studying the fracture mechanism of leaves can be applied to the development of new composite materials. (b) Other soft tissues, such as skin and blood vessels, have a similar structure at some scales and may share the same fracture mechanism; the knowledge gained can also be applied to these materials. (c) The global need for food is skyrocketing. A few countries, including the United States, have all the potential (i.e. water, soil, sunlight, and manpower) to play a major role in the future world food-supply market. If we can increase the output of our farms and forests by protecting them against herbivores [Beck 1965], pathogens [Campbell et al. 1980], and other physical damage, our share of the future market will be higher. It will also strengthen our national food security, because we will not be dependent on food imports. We do not yet know how much of our farm and forest output could be saved if we could genetically design tougher materials, but the whole idea is worth studying.
2212.06529
Karoline Leiberg
Karoline Leiberg, Jane de Tisi, John S Duncan, Bethany Little, Peter N Taylor, Sjoerd B Vos, Gavin P Winston, Bruno Mota and Yujiang Wang
Effects of anterior temporal lobe resection on cortical morphology
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Anterior temporal lobe resection (ATLR) is a surgical procedure to treat drug-resistant temporal lobe epilepsy (TLE). Resection may involve large amounts of cortical tissue. Here, we examine the effects of this surgery on cortical morphology measured in independent variables both near the resection and remotely. We studied 101 individuals with TLE (55 left, 46 right onset) who underwent ATLR. For each individual we considered one pre-surgical MRI and one follow-up MRI 2 to 13 months after surgery. We used our newly developed surface-based method to locally compute traditional morphological variables (average cortical thickness, exposed surface area, and total surface area), and the independent measures $K$, $I$, and $S$, where $K$ measures white matter tension, $I$ captures isometric scaling, and $S$ contains the remaining information about cortical shape. Data from 924 healthy controls was included to account for healthy ageing effects occurring during scans. A SurfStat random field theory clustering approach assessed changes across the cortex caused by ATLR. Compared to preoperative data, surgery had marked effects on all morphological measures. Ipsilateral effects were located in the orbitofrontal and inferior frontal gyri, the pre- and postcentral gyri and supramarginal gyrus, and the lateral occipital gyrus and lingual cortex. Contralateral effects were in the lateral occipital gyrus, and inferior frontal gyrus and frontal pole. The restructuring following ATLR is reflected in widespread morphological changes, mainly in regions near the resection, but also remotely in regions that are structurally connected to the anterior temporal lobe. The causes could include mechanical effects, Wallerian degeneration, or compensatory plasticity. The study of independent measures revealed additional effects compared to traditional measures.
[ { "created": "Tue, 13 Dec 2022 12:26:12 GMT", "version": "v1" } ]
2022-12-14
[ [ "Leiberg", "Karoline", "" ], [ "de Tisi", "Jane", "" ], [ "Duncan", "John S", "" ], [ "Little", "Bethany", "" ], [ "Taylor", "Peter N", "" ], [ "Vos", "Sjoerd B", "" ], [ "Winston", "Gavin P", "" ], [ "Mota", "Bruno", "" ], [ "Wang", "Yujiang", "" ] ]
Anterior temporal lobe resection (ATLR) is a surgical procedure to treat drug-resistant temporal lobe epilepsy (TLE). Resection may involve large amounts of cortical tissue. Here, we examine the effects of this surgery on cortical morphology measured in independent variables both near the resection and remotely. We studied 101 individuals with TLE (55 left, 46 right onset) who underwent ATLR. For each individual we considered one pre-surgical MRI and one follow-up MRI 2 to 13 months after surgery. We used our newly developed surface-based method to locally compute traditional morphological variables (average cortical thickness, exposed surface area, and total surface area), and the independent measures $K$, $I$, and $S$, where $K$ measures white matter tension, $I$ captures isometric scaling, and $S$ contains the remaining information about cortical shape. Data from 924 healthy controls was included to account for healthy ageing effects occurring during scans. A SurfStat random field theory clustering approach assessed changes across the cortex caused by ATLR. Compared to preoperative data, surgery had marked effects on all morphological measures. Ipsilateral effects were located in the orbitofrontal and inferior frontal gyri, the pre- and postcentral gyri and supramarginal gyrus, and the lateral occipital gyrus and lingual cortex. Contralateral effects were in the lateral occipital gyrus, and inferior frontal gyrus and frontal pole. The restructuring following ATLR is reflected in widespread morphological changes, mainly in regions near the resection, but also remotely in regions that are structurally connected to the anterior temporal lobe. The causes could include mechanical effects, Wallerian degeneration, or compensatory plasticity. The study of independent measures revealed additional effects compared to traditional measures.
2212.14041
Cheng Tan
Cheng Tan, Zhangyang Gao, Hanqun Cao, Xingran Chen, Ge Wang, Lirong Wu, Jun Xia, Jiangbin Zheng, Stan Z. Li
Deciphering RNA Secondary Structure Prediction: A Probabilistic K-Rook Matching Perspective
Accepted by ICML 2024
null
null
null
q-bio.BM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The secondary structure of ribonucleic acid (RNA) is more stable and accessible in the cell than its tertiary structure, making it essential for functional prediction. Although deep learning has shown promising results in this field, current methods suffer from poor generalization and high complexity. In this work, we reformulate the RNA secondary structure prediction as a K-Rook problem, thereby simplifying the prediction process into probabilistic matching within a finite solution space. Building on this innovative perspective, we introduce RFold, a simple yet effective method that learns to predict the most matching K-Rook solution from the given sequence. RFold employs a bi-dimensional optimization strategy that decomposes the probabilistic matching problem into row-wise and column-wise components to reduce the matching complexity, simplifying the solving process while guaranteeing the validity of the output. Extensive experiments demonstrate that RFold achieves competitive performance and about eight times faster inference efficiency than the state-of-the-art approaches. The code and Colab demo are available in (http://github.com/A4Bio/RFold).
[ { "created": "Fri, 2 Dec 2022 16:34:56 GMT", "version": "v1" }, { "created": "Thu, 27 Apr 2023 12:26:17 GMT", "version": "v2" }, { "created": "Fri, 24 May 2024 12:05:40 GMT", "version": "v3" }, { "created": "Fri, 31 May 2024 14:18:31 GMT", "version": "v4" }, { "created": "Wed, 19 Jun 2024 11:08:23 GMT", "version": "v5" } ]
2024-06-21
[ [ "Tan", "Cheng", "" ], [ "Gao", "Zhangyang", "" ], [ "Cao", "Hanqun", "" ], [ "Chen", "Xingran", "" ], [ "Wang", "Ge", "" ], [ "Wu", "Lirong", "" ], [ "Xia", "Jun", "" ], [ "Zheng", "Jiangbin", "" ], [ "Li", "Stan Z.", "" ] ]
The secondary structure of ribonucleic acid (RNA) is more stable and accessible in the cell than its tertiary structure, making it essential for functional prediction. Although deep learning has shown promising results in this field, current methods suffer from poor generalization and high complexity. In this work, we reformulate the RNA secondary structure prediction as a K-Rook problem, thereby simplifying the prediction process into probabilistic matching within a finite solution space. Building on this innovative perspective, we introduce RFold, a simple yet effective method that learns to predict the most matching K-Rook solution from the given sequence. RFold employs a bi-dimensional optimization strategy that decomposes the probabilistic matching problem into row-wise and column-wise components to reduce the matching complexity, simplifying the solving process while guaranteeing the validity of the output. Extensive experiments demonstrate that RFold achieves competitive performance and about eight times faster inference efficiency than the state-of-the-art approaches. The code and Colab demo are available in (http://github.com/A4Bio/RFold).
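The K-Rook matching idea can be illustrated with a deterministic toy rule. This is a sketch of the constraint, not RFold's learned decomposition: keeping only entries that maximize both their row and their column guarantees at most one selected entry per row and per column, i.e. a placement of non-attacking rooks.

```python
def krook_pairing(scores):
    """Keep (i, j) only when scores[i][j] maximizes both row i and
    column j.  This guarantees at most one selected entry per row and
    per column -- a valid placement of non-attacking rooks.  Toy sketch
    of the matching constraint; RFold itself decomposes the problem into
    learned row-wise and column-wise components."""
    n = len(scores)
    row_best = [max(range(n), key=lambda j: scores[i][j]) for i in range(n)]
    col_best = [max(range(n), key=lambda i: scores[i][j]) for j in range(n)]
    return [(i, row_best[i]) for i in range(n) if col_best[row_best[i]] == i]

# A symmetric 4x4 pairing-score matrix (think base-pair scores).
scores = [[0.0, 0.0, 0.9, 0.1],
          [0.0, 0.0, 0.1, 0.8],
          [0.9, 0.1, 0.0, 0.0],
          [0.1, 0.8, 0.0, 0.0]]
pairs = krook_pairing(scores)
```

On this matrix the rule recovers the mutually best pairs (0, 2) and (1, 3) in both orientations, a valid K-Rook solution.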
2111.07776
Alex McAvoy
Alex McAvoy, Yoichiro Mori, Joshua B. Plotkin
Selfish optimization and collective learning in populations
33 pages; final version
null
10.1016/j.physd.2022.133426
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A selfish learner seeks to maximize their own success, disregarding others. When success is measured as payoff in a game played against another learner, mutual selfishness typically fails to produce the optimal outcome for a pair of individuals. However, learners often operate in populations, and each learner may have a limited duration of interaction with any other individual. Here, we compare selfish learning in stable pairs to selfish learning with stochastic encounters in a population. We study gradient-based optimization in repeated games like the prisoner's dilemma, which feature multiple Nash equilibria, many of which are suboptimal. We find that myopic, selfish learning, when distributed in a population via ephemeral encounters, can reverse the dynamics that occur in stable pairs. In particular, when there is flexibility in partner choice, selfish learning in large populations can produce optimal payoffs in repeated social dilemmas. This result holds for the entire population, not just for a small subset of individuals. Furthermore, as the population size grows, the timescale to reach the optimal population payoff remains finite in the number of learning steps per individual. While it is not universally true that interacting with many partners in a population improves outcomes, this form of collective learning achieves optimality for several important classes of social dilemmas. We conclude that na\"{i}ve learning can be surprisingly effective in populations of individuals navigating conflicts of interest.
[ { "created": "Mon, 15 Nov 2021 14:17:07 GMT", "version": "v1" }, { "created": "Wed, 6 Jul 2022 15:53:03 GMT", "version": "v2" } ]
2022-07-07
[ [ "McAvoy", "Alex", "" ], [ "Mori", "Yoichiro", "" ], [ "Plotkin", "Joshua B.", "" ] ]
A selfish learner seeks to maximize their own success, disregarding others. When success is measured as payoff in a game played against another learner, mutual selfishness typically fails to produce the optimal outcome for a pair of individuals. However, learners often operate in populations, and each learner may have a limited duration of interaction with any other individual. Here, we compare selfish learning in stable pairs to selfish learning with stochastic encounters in a population. We study gradient-based optimization in repeated games like the prisoner's dilemma, which feature multiple Nash equilibria, many of which are suboptimal. We find that myopic, selfish learning, when distributed in a population via ephemeral encounters, can reverse the dynamics that occur in stable pairs. In particular, when there is flexibility in partner choice, selfish learning in large populations can produce optimal payoffs in repeated social dilemmas. This result holds for the entire population, not just for a small subset of individuals. Furthermore, as the population size grows, the timescale to reach the optimal population payoff remains finite in the number of learning steps per individual. While it is not universally true that interacting with many partners in a population improves outcomes, this form of collective learning achieves optimality for several important classes of social dilemmas. We conclude that na\"{i}ve learning can be surprisingly effective in populations of individuals navigating conflicts of interest.
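The stable-pair baseline the paper contrasts against can be illustrated with a tiny gradient learner. This is a hedged sketch, not the paper's model: in a continuous donation game (a prisoner's dilemma), each learner following the gradient of its own payoff is driven to full defection, the suboptimal Nash equilibrium.

```python
def donation_payoff(p_self, p_other, b=3.0, c=1.0):
    """Expected payoff in a continuous donation game (a prisoner's
    dilemma): receive b times the partner's cooperation level, pay c
    times your own."""
    return b * p_other - c * p_self

def selfish_pair(p1=0.9, p2=0.9, lr=0.05, steps=200, eps=1e-4):
    """Each learner follows the numerical gradient of its OWN payoff.
    In a stable pair this drives both to full defection (p = 0), the
    suboptimal Nash equilibrium that the population setting with
    ephemeral encounters can escape.  Illustrative sketch only."""
    for _ in range(steps):
        g1 = (donation_payoff(p1 + eps, p2) - donation_payoff(p1 - eps, p2)) / (2 * eps)
        g2 = (donation_payoff(p2 + eps, p1) - donation_payoff(p2 - eps, p1)) / (2 * eps)
        p1 = min(1.0, max(0.0, p1 + lr * g1))
        p2 = min(1.0, max(0.0, p2 + lr * g2))
    return p1, p2
```

Mutual cooperation pays each player `b - c` while mutual defection pays 0, yet the selfish gradients point to defection, which is the tension the paper's population-level analysis resolves.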
1607.00697
Nora Youngs
Elizabeth Gross, Nida Kazi Obatake, Nora Youngs
Neural ideals and stimulus space visualization
null
null
null
null
q-bio.NC cs.GR math.AC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A neural code $\mathcal{C}$ is a collection of binary vectors of a given length $n$ that record the co-firing patterns of a set of neurons. Our focus is on neural codes arising from place cells, neurons that respond to geographic stimuli. In this setting, the stimulus space can be visualized as a subset of $\mathbb{R}^2$ covered by a collection $\mathcal{U}$ of convex sets such that the arrangement $\mathcal{U}$ forms an Euler diagram for $\mathcal{C}$. There are methods to determine whether such a convex realization $\mathcal{U}$ exists; however, these methods do not describe how to draw a realization. In this work, we look at the problem of algorithmically drawing Euler diagrams for neural codes using two polynomial ideals: the neural ideal, a pseudo-monomial ideal, and the neural toric ideal, a binomial ideal. In particular, we study how these objects relate to the theory of piercings in information visualization, and we show how minimal generating sets of the ideals reveal whether or not a code is $0$-, $1$-, or $2$-inductively pierced.
[ { "created": "Sun, 3 Jul 2016 22:54:11 GMT", "version": "v1" } ]
2016-07-05
[ [ "Gross", "Elizabeth", "" ], [ "Obatake", "Nida Kazi", "" ], [ "Youngs", "Nora", "" ] ]
A neural code $\mathcal{C}$ is a collection of binary vectors of a given length $n$ that record the co-firing patterns of a set of neurons. Our focus is on neural codes arising from place cells, neurons that respond to geographic stimuli. In this setting, the stimulus space can be visualized as a subset of $\mathbb{R}^2$ covered by a collection $\mathcal{U}$ of convex sets such that the arrangement $\mathcal{U}$ forms an Euler diagram for $\mathcal{C}$. There are methods to determine whether such a convex realization $\mathcal{U}$ exists; however, these methods do not describe how to draw a realization. In this work, we look at the problem of algorithmically drawing Euler diagrams for neural codes using two polynomial ideals: the neural ideal, a pseudo-monomial ideal, and the neural toric ideal, a binomial ideal. In particular, we study how these objects relate to the theory of piercings in information visualization, and we show how minimal generating sets of the ideals reveal whether or not a code is $0$-, $1$-, or $2$-inductively pierced.
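How a cover of the stimulus space generates a neural code can be sketched concretely. This is a toy 1-D version (intervals instead of convex regions of $\mathbb{R}^2$), only to make the "codewords from regions of an Euler diagram" idea tangible.

```python
def code_from_cover(sets, points):
    """Neural code generated by a cover: for each sample point, record
    which sets contain it as a binary codeword.  Toy 1-D version of the
    place-cell setting (sets are intervals rather than convex regions
    of the plane)."""
    return {tuple(int(a <= x <= b) for (a, b) in sets) for x in points}

# Two overlapping place fields generate four distinct codewords,
# one per region of the corresponding Euler diagram.
code = code_from_cover([(0.0, 2.0), (1.0, 3.0)], [-0.5, 0.5, 1.5, 2.5])
```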
1104.5278
Parin Sripakdeevong
Parin Sripakdeevong, Wipapat Kladwang, Rhiju Das
Can biopolymer structures be sampled enumeratively? Atomic-accuracy RNA loop modeling by a stepwise ansatz
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Atomic-accuracy structure prediction of macromolecules is a long-sought goal of computational biophysics. Accurate modeling should be achievable by optimizing a physically realistic energy function but is presently precluded by incomplete sampling of a biopolymer's many degrees of freedom. We present herein a working hypothesis, called the "stepwise ansatz", for recursively constructing well-packed atomic-detail models in small steps, enumerating several million conformations for each monomer and covering all build-up paths. By implementing the strategy in Rosetta and making use of high-performance computing, we provide first tests of this hypothesis on a benchmark of fifteen RNA loop modeling problems drawn from riboswitches, ribozymes, and the ribosome, including ten cases that were not solvable by prior knowledge based modeling approaches. For each loop problem, this deterministic stepwise assembly (SWA) method either reaches atomic accuracy or exposes flaws in Rosetta's all-atom energy function, indicating the resolution of the conformational sampling bottleneck. To our knowledge, SWA is the first enumerative, ab initio build-up method to systematically outperform existing Monte Carlo and knowledge-based methods for 3D structure prediction. As a rigorous experimental test, we have applied SWA to a small RNA motif of previously unknown structure, the C7.2 tetraloop/tetraloop-receptor, and stringently tested this blind prediction with nucleotide-resolution structure mapping data.
[ { "created": "Thu, 28 Apr 2011 02:15:07 GMT", "version": "v1" } ]
2011-04-29
[ [ "Sripakdeevong", "Parin", "" ], [ "Kladwang", "Wipapat", "" ], [ "Das", "Rhiju", "" ] ]
Atomic-accuracy structure prediction of macromolecules is a long-sought goal of computational biophysics. Accurate modeling should be achievable by optimizing a physically realistic energy function but is presently precluded by incomplete sampling of a biopolymer's many degrees of freedom. We present herein a working hypothesis, called the "stepwise ansatz", for recursively constructing well-packed atomic-detail models in small steps, enumerating several million conformations for each monomer and covering all build-up paths. By implementing the strategy in Rosetta and making use of high-performance computing, we provide the first tests of this hypothesis on a benchmark of fifteen RNA loop modeling problems drawn from riboswitches, ribozymes, and the ribosome, including ten cases that were not solvable by prior knowledge-based modeling approaches. For each loop problem, this deterministic stepwise assembly (SWA) method either reaches atomic accuracy or exposes flaws in Rosetta's all-atom energy function, indicating the resolution of the conformational sampling bottleneck. To our knowledge, SWA is the first enumerative, ab initio build-up method to systematically outperform existing Monte Carlo and knowledge-based methods for 3D structure prediction. As a rigorous experimental test, we have applied SWA to a small RNA motif of previously unknown structure, the C7.2 tetraloop/tetraloop-receptor, and stringently tested this blind prediction with nucleotide-resolution structure mapping data.
1604.04553
Pieter Trapman
Pieter Trapman, Frank Ball, Jean-St\'ephane Dhersin, Viet Chi Tran, Jacco Wallinga and Tom Britton
Inferring $R_0$ in emerging epidemics - the effect of common population structure is small
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When controlling an emerging outbreak of an infectious disease it is essential to know the key epidemiological parameters, such as the basic reproduction number $R_0$ and the control effort required to prevent a large outbreak. These parameters are estimated from the observed incidence of new cases and information about the infectious contact structures of the population in which the disease spreads. However, the relevant infectious contact structures for new, emerging infections are often unknown or hard to obtain. Here we show that for many common true underlying heterogeneous contact structures, the simplification to neglect such structures and instead assume that all contacts are made homogeneously in the whole population, results in conservative estimates for $R_0$ and the required control effort. This means that robust control policies can be planned during the early stages of an outbreak, using such conservative estimates of the required control effort.
[ { "created": "Fri, 15 Apr 2016 16:06:53 GMT", "version": "v1" } ]
2016-04-18
[ [ "Trapman", "Pieter", "" ], [ "Ball", "Frank", "" ], [ "Dhersin", "Jean-Stéphane", "" ], [ "Tran", "Viet Chi", "" ], [ "Wallinga", "Jacco", "" ], [ "Britton", "Tom", "" ] ]
When controlling an emerging outbreak of an infectious disease it is essential to know the key epidemiological parameters, such as the basic reproduction number $R_0$ and the control effort required to prevent a large outbreak. These parameters are estimated from the observed incidence of new cases and information about the infectious contact structures of the population in which the disease spreads. However, the relevant infectious contact structures for new, emerging infections are often unknown or hard to obtain. Here we show that for many common true underlying heterogeneous contact structures, the simplification to neglect such structures and instead assume that all contacts are made homogeneously in the whole population, results in conservative estimates for $R_0$ and the required control effort. This means that robust control policies can be planned during the early stages of an outbreak, using such conservative estimates of the required control effort.
2301.09494
Thomas Klotz
Thomas Klotz, Lena Lehmann, Francesco Negro and Oliver R\"ohrle
High-density magnetomyography is superior to high-density surface electromyography for motor unit decomposition: a simulation study
null
null
null
null
q-bio.TO q-bio.CB q-bio.NC q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Objective: Studying motor units (MUs) is essential for understanding motor control, the detection of neuromuscular disorders and the control of human-machine interfaces. Individual motor unit firings are currently identified in vivo by decomposing electromyographic (EMG) signals. Due to our body's properties and anatomy, individual motor units can only be separated to a limited extent with surface EMG. Unlike electrical signals, magnetic fields do not interact with human tissues. This physical property and the emerging technology of quantum sensors make magnetomyography (MMG) a highly promising methodology. However, the full potential of MMG to study neuromuscular physiology has not yet been explored. Approach: In this work, we perform in silico trials that combine a biophysical model of EMG and MMG with state-of-the-art algorithms for the decomposition of motor units. This allows the prediction of an upper bound for the motor unit decomposition accuracy. Main results: It is shown that non-invasive high-density MMG data is superior to comparable high-density surface EMG data for the robust identification of the discharge patterns of individual motor units. Decomposing MMG instead of EMG increased the number of identifiable motor units by 76%. Notably, MMG exhibits a less pronounced bias to detect superficial motor units. Significance: The presented simulations provide insights into methods to study the neuromuscular system non-invasively and in vivo that would not be easily feasible by other means. Hence, this study provides guidance for the development of novel biomedical technologies.
[ { "created": "Mon, 23 Jan 2023 15:45:00 GMT", "version": "v1" }, { "created": "Fri, 30 Jun 2023 09:49:40 GMT", "version": "v2" } ]
2023-07-03
[ [ "Klotz", "Thomas", "" ], [ "Lehmann", "Lena", "" ], [ "Negro", "Francesco", "" ], [ "Röhrle", "Oliver", "" ] ]
Objective: Studying motor units (MUs) is essential for understanding motor control, the detection of neuromuscular disorders and the control of human-machine interfaces. Individual motor unit firings are currently identified in vivo by decomposing electromyographic (EMG) signals. Due to our body's properties and anatomy, individual motor units can only be separated to a limited extent with surface EMG. Unlike electrical signals, magnetic fields do not interact with human tissues. This physical property and the emerging technology of quantum sensors make magnetomyography (MMG) a highly promising methodology. However, the full potential of MMG to study neuromuscular physiology has not yet been explored. Approach: In this work, we perform in silico trials that combine a biophysical model of EMG and MMG with state-of-the-art algorithms for the decomposition of motor units. This allows the prediction of an upper bound for the motor unit decomposition accuracy. Main results: It is shown that non-invasive high-density MMG data is superior to comparable high-density surface EMG data for the robust identification of the discharge patterns of individual motor units. Decomposing MMG instead of EMG increased the number of identifiable motor units by 76%. Notably, MMG exhibits a less pronounced bias to detect superficial motor units. Significance: The presented simulations provide insights into methods to study the neuromuscular system non-invasively and in vivo that would not be easily feasible by other means. Hence, this study provides guidance for the development of novel biomedical technologies.
1809.07734
Tal Einav
Tal Einav, Shahrzad Yazdi, Aaron Coey, Pamela J. Bjorkman, Rob Phillips
Harnessing Avidity: Quantifying Entropic and Energetic Effects of Linker Length and Rigidity Required for Multivalent Binding of Antibodies to HIV-1 Spikes
null
null
10.1016/j.cels.2019.09.007
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Due to the low density of envelope (Env) spikes on the surface of HIV-1, neutralizing IgG antibodies rarely bind bivalently using both antigen-binding arms (Fabs) to crosslink between spikes (inter-spike crosslinking), instead resorting to weaker monovalent binding that is more sensitive to Env mutations. Synthetic antibodies designed to bivalently bind a single Env trimer (intra-spike crosslinking) were previously shown to exhibit increased neutralization potencies. In initial work, diFabs joined by varying lengths of rigid double-stranded DNA (dsDNA) were considered. Anticipating future experiments to improve synthetic antibodies, we investigate whether linkers with different rigidities could enhance diFab potency by modeling DNA-Fabs containing different combinations of rigid dsDNA and flexible single-stranded DNA (ssDNA) and characterizing their neutralization potential. Model predictions suggest that while a long flexible polymer may be capable of bivalent binding, it exhibits weak neutralization due to the large loss in entropic degrees of freedom when both Fabs are bound. In contrast, the strongest neutralization potencies are predicted to require a rigid linker that optimally spans the distance between two Fab binding sites on an Env trimer, and avidity can be further boosted by incorporating more Fabs into these constructs. These results inform the design of multivalent anti-HIV-1 therapeutics that utilize avidity effects to remain potent against HIV-1 in the face of the rapid mutation of Env spikes.
[ { "created": "Sun, 2 Sep 2018 01:19:03 GMT", "version": "v1" }, { "created": "Wed, 22 May 2019 06:27:44 GMT", "version": "v2" } ]
2022-07-27
[ [ "Einav", "Tal", "" ], [ "Yazdi", "Shahrzad", "" ], [ "Coey", "Aaron", "" ], [ "Bjorkman", "Pamela J.", "" ], [ "Phillips", "Rob", "" ] ]
Due to the low density of envelope (Env) spikes on the surface of HIV-1, neutralizing IgG antibodies rarely bind bivalently using both antigen-binding arms (Fabs) to crosslink between spikes (inter-spike crosslinking), instead resorting to weaker monovalent binding that is more sensitive to Env mutations. Synthetic antibodies designed to bivalently bind a single Env trimer (intra-spike crosslinking) were previously shown to exhibit increased neutralization potencies. In initial work, diFabs joined by varying lengths of rigid double-stranded DNA (dsDNA) were considered. Anticipating future experiments to improve synthetic antibodies, we investigate whether linkers with different rigidities could enhance diFab potency by modeling DNA-Fabs containing different combinations of rigid dsDNA and flexible single-stranded DNA (ssDNA) and characterizing their neutralization potential. Model predictions suggest that while a long flexible polymer may be capable of bivalent binding, it exhibits weak neutralization due to the large loss in entropic degrees of freedom when both Fabs are bound. In contrast, the strongest neutralization potencies are predicted to require a rigid linker that optimally spans the distance between two Fab binding sites on an Env trimer, and avidity can be further boosted by incorporating more Fabs into these constructs. These results inform the design of multivalent anti-HIV-1 therapeutics that utilize avidity effects to remain potent against HIV-1 in the face of the rapid mutation of Env spikes.
2309.04498
Marton Aron Goda Dr.
Joachim A. Behar, Jeremy Levy, Eran Zvuloni, Sheina Gendelman, Aviv Rosenberg, Shany Biton, Raphael Derman, Jonathan A. Sobel, Alexandra Alexandrovich, Peter Charlton and M\'arton \'A Goda
PhysioZoo: The Open Digital Physiological Biomarkers Resource
4 pages, 2 figure, 50th Computing in Cardiology conference in Atlanta, Georgia, USA on 1st - 4th October 2023
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
PhysioZoo is a collaborative platform designed for the analysis of continuous physiological time series. The platform currently comprises four modules, each consisting of a library, a user interface, and a set of tutorials: (1) PhysioZoo HRV, dedicated to studying heart rate variability (HRV) in humans and other mammals; (2) PhysioZoo SPO2, which focuses on the analysis of digital oximetry biomarkers (OBM) using continuous oximetry (SpO2) measurements from humans; (3) PhysioZoo ECG, dedicated to the analysis of electrocardiogram (ECG) time series; (4) PhysioZoo PPG, designed to study photoplethysmography (PPG) time series. In this proceeding, we introduce the PhysioZoo platform as an open resource for digital physiological biomarker engineering, facilitating streamlined analysis and data visualization of physiological time series while ensuring the reproducibility of published experiments. We welcome researchers to contribute new libraries for the analysis of various physiological time series, such as electroencephalography, blood pressure, and phonocardiography. You can access the resource at physiozoo.com. We encourage researchers to explore and utilize this platform to advance their studies in the field of continuous physiological time-series analysis.
[ { "created": "Thu, 7 Sep 2023 21:38:28 GMT", "version": "v1" } ]
2023-09-12
[ [ "Behar", "Joachim A.", "" ], [ "Levy", "Jeremy", "" ], [ "Zvuloni", "Eran", "" ], [ "Gendelman", "Sheina", "" ], [ "Rosenberg", "Aviv", "" ], [ "Biton", "Shany", "" ], [ "Derman", "Raphael", "" ], [ "Sobel", "Jonathan A.", "" ], [ "Alexandrovich", "Alexandra", "" ], [ "Charlton", "Peter", "" ], [ "Goda", "Márton Á", "" ] ]
PhysioZoo is a collaborative platform designed for the analysis of continuous physiological time series. The platform currently comprises four modules, each consisting of a library, a user interface, and a set of tutorials: (1) PhysioZoo HRV, dedicated to studying heart rate variability (HRV) in humans and other mammals; (2) PhysioZoo SPO2, which focuses on the analysis of digital oximetry biomarkers (OBM) using continuous oximetry (SpO2) measurements from humans; (3) PhysioZoo ECG, dedicated to the analysis of electrocardiogram (ECG) time series; (4) PhysioZoo PPG, designed to study photoplethysmography (PPG) time series. In this proceeding, we introduce the PhysioZoo platform as an open resource for digital physiological biomarker engineering, facilitating streamlined analysis and data visualization of physiological time series while ensuring the reproducibility of published experiments. We welcome researchers to contribute new libraries for the analysis of various physiological time series, such as electroencephalography, blood pressure, and phonocardiography. You can access the resource at physiozoo.com. We encourage researchers to explore and utilize this platform to advance their studies in the field of continuous physiological time-series analysis.
2101.06788
Tajudeen Yahaya Dr.
Tajudeen O. Yahaya and Shemishere B. Ufuoma
Genetics and Pathophysiology of Maturity-onset Diabetes of the Young (MODY): A Review of Current Trends
10 pages
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Single gene mutations have been implicated in the pathogenesis of a form of diabetes mellitus (DM) known as the maturity-onset diabetes of the young (MODY). However, there are diverse opinions on the suspect genes and pathophysiology, necessitating a review to communicate the genes and raise public awareness. We used the Google search engine to retrieve relevant information from reputable sources such as PubMed and Google Scholar. We identified 14 classified MODY genes as well as three new and unclassified genes linked with MODY. These genes are fundamentally embedded in the beta cells, the most common of which are HNF1A, HNF4A, HNF1B, and GCK genes. Mutations in these genes cause beta-cell dysfunction, resulting in decreased insulin production and hyperglycemia. MODY genes have distinct mechanisms of action and phenotypic presentations compared with type 1 and type 2 DM and other forms of DM. Healthcare professionals are therefore advised to formulate drugs and treatment based on the causal genes rather than the current generalized treatment for all types of DM. This will increase the effectiveness of diabetes drugs and treatment and reduce the burden of the disease.
[ { "created": "Sun, 17 Jan 2021 21:21:59 GMT", "version": "v1" } ]
2021-01-19
[ [ "Yahaya", "Tajudeen O.", "" ], [ "Ufuoma", "Shemishere B.", "" ] ]
Single gene mutations have been implicated in the pathogenesis of a form of diabetes mellitus (DM) known as the maturity-onset diabetes of the young (MODY). However, there are diverse opinions on the suspect genes and pathophysiology, necessitating a review to communicate the genes and raise public awareness. We used the Google search engine to retrieve relevant information from reputable sources such as PubMed and Google Scholar. We identified 14 classified MODY genes as well as three new and unclassified genes linked with MODY. These genes are fundamentally embedded in the beta cells, the most common of which are HNF1A, HNF4A, HNF1B, and GCK genes. Mutations in these genes cause beta-cell dysfunction, resulting in decreased insulin production and hyperglycemia. MODY genes have distinct mechanisms of action and phenotypic presentations compared with type 1 and type 2 DM and other forms of DM. Healthcare professionals are therefore advised to formulate drugs and treatment based on the causal genes rather than the current generalized treatment for all types of DM. This will increase the effectiveness of diabetes drugs and treatment and reduce the burden of the disease.
2007.14965
Changjiang Liu
Changjiang Liu, Paolo Elvati, Angela Violi
Antiviral Drug-Membrane Permeability: the Viral Envelope and Cellular Organelles
null
null
null
null
q-bio.BM q-bio.QM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To shorten the time required to find effective new drugs, like antivirals, a key parameter to consider is membrane permeability, as a compound intended for an intracellular target with poor permeability will have low efficacy. Here, we present a computational model that considers both drug characteristics and membrane properties for the rapid assessment of drug permeability through the coronavirus envelope and various cellular membranes. We analyze 79 drugs that are considered as potential candidates for the treatment of SARS-CoV-2 and determine their time of permeation in different organelle membranes grouped by viral baits and mammalian processes. The computational results are correlated with experimental data, present in the literature, on bioavailability of the drugs, showing a negative correlation between fast permeation and most promising drugs. This model represents an important tool capable of evaluating how permeability affects the ability of compounds to reach both intended and unintended intracellular targets in an accurate and rapid way. The method is general and flexible and can be employed for a variety of molecules, from small drugs to nanoparticles, as well as for a variety of biological membranes.
[ { "created": "Wed, 29 Jul 2020 17:16:45 GMT", "version": "v1" } ]
2020-07-30
[ [ "Liu", "Changjiang", "" ], [ "Elvati", "Paolo", "" ], [ "Violi", "Angela", "" ] ]
To shorten the time required to find effective new drugs, like antivirals, a key parameter to consider is membrane permeability, as a compound intended for an intracellular target with poor permeability will have low efficacy. Here, we present a computational model that considers both drug characteristics and membrane properties for the rapid assessment of drug permeability through the coronavirus envelope and various cellular membranes. We analyze 79 drugs that are considered as potential candidates for the treatment of SARS-CoV-2 and determine their time of permeation in different organelle membranes grouped by viral baits and mammalian processes. The computational results are correlated with experimental data, present in the literature, on bioavailability of the drugs, showing a negative correlation between fast permeation and most promising drugs. This model represents an important tool capable of evaluating how permeability affects the ability of compounds to reach both intended and unintended intracellular targets in an accurate and rapid way. The method is general and flexible and can be employed for a variety of molecules, from small drugs to nanoparticles, as well as for a variety of biological membranes.
2106.13082
Robert Rosenbaum
Robert Rosenbaum
On the relationship between predictive coding and backpropagation
null
null
10.1371/journal.pone.0266102
null
q-bio.NC cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
Artificial neural networks are often interpreted as abstract models of biological neuronal networks, but they are typically trained using the biologically unrealistic backpropagation algorithm and its variants. Predictive coding has been proposed as a potentially more biologically realistic alternative to backpropagation for training neural networks. This manuscript reviews and extends recent work on the mathematical relationship between predictive coding and backpropagation for training feedforward artificial neural networks on supervised learning tasks. Implications of these results for the interpretation of predictive coding and deep neural networks as models of biological learning are discussed along with a repository of functions, Torch2PC, for performing predictive coding with PyTorch neural network models.
[ { "created": "Sun, 20 Jun 2021 18:22:50 GMT", "version": "v1" }, { "created": "Fri, 25 Jun 2021 11:13:15 GMT", "version": "v2" }, { "created": "Thu, 21 Oct 2021 20:19:55 GMT", "version": "v3" }, { "created": "Fri, 18 Feb 2022 12:28:20 GMT", "version": "v4" }, { "created": "Tue, 16 Apr 2024 19:10:00 GMT", "version": "v5" }, { "created": "Tue, 23 Apr 2024 19:17:10 GMT", "version": "v6" } ]
2024-04-25
[ [ "Rosenbaum", "Robert", "" ] ]
Artificial neural networks are often interpreted as abstract models of biological neuronal networks, but they are typically trained using the biologically unrealistic backpropagation algorithm and its variants. Predictive coding has been proposed as a potentially more biologically realistic alternative to backpropagation for training neural networks. This manuscript reviews and extends recent work on the mathematical relationship between predictive coding and backpropagation for training feedforward artificial neural networks on supervised learning tasks. Implications of these results for the interpretation of predictive coding and deep neural networks as models of biological learning are discussed along with a repository of functions, Torch2PC, for performing predictive coding with PyTorch neural network models.
1212.4125
Katarzyna Bryc Katarzyna Bryc
Katarzyna Bryc, Nick Patterson, and David Reich
Estimating heterozygosity from a low-coverage genome sequence, leveraging data from other individuals sequenced at the same sites
18 pages, 4 figures, 1 table
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-throughput shotgun sequence data makes it possible in principle to accurately estimate population genetic parameters without confounding by SNP ascertainment bias. One such statistic of interest is the proportion of heterozygous sites within an individual's genome, which is informative about inbreeding and effective population size. However, in many cases, the available sequence data of an individual is limited to low coverage, preventing the confident calling of genotypes necessary to directly count the proportion of heterozygous sites. Here, we present a method for estimating an individual's genome-wide rate of heterozygosity from low-coverage sequence data, without an intermediate step calling genotypes. Our method jointly learns the shared allele distribution between the individual and a panel of other individuals, together with the sequencing error distributions and the reference bias. We show our method works well, first by its performance on simulated sequence data, and secondly on real sequence data where we obtain estimates using low coverage data consistent with those from higher coverage. We apply our method to obtain estimates of the rate of heterozygosity for 11 humans from diverse world-wide populations, and through this analysis reveal the complex dependency of local sequencing coverage on the true underlying heterozygosity, which complicates the estimation of heterozygosity from sequence data. We show filters can correct for the confounding by sequencing depth. We find in practice that ratios of heterozygosity are more interpretable than absolute estimates, and show that we obtain excellent conformity of ratios of heterozygosity with previous estimates from higher coverage data.
[ { "created": "Mon, 17 Dec 2012 20:22:05 GMT", "version": "v1" } ]
2012-12-18
[ [ "Bryc", "Katarzyna", "" ], [ "Patterson", "Nick", "" ], [ "Reich", "David", "" ] ]
High-throughput shotgun sequence data makes it possible in principle to accurately estimate population genetic parameters without confounding by SNP ascertainment bias. One such statistic of interest is the proportion of heterozygous sites within an individual's genome, which is informative about inbreeding and effective population size. However, in many cases, the available sequence data of an individual is limited to low coverage, preventing the confident calling of genotypes necessary to directly count the proportion of heterozygous sites. Here, we present a method for estimating an individual's genome-wide rate of heterozygosity from low-coverage sequence data, without an intermediate step calling genotypes. Our method jointly learns the shared allele distribution between the individual and a panel of other individuals, together with the sequencing error distributions and the reference bias. We show our method works well, first by its performance on simulated sequence data, and secondly on real sequence data where we obtain estimates using low coverage data consistent with those from higher coverage. We apply our method to obtain estimates of the rate of heterozygosity for 11 humans from diverse world-wide populations, and through this analysis reveal the complex dependency of local sequencing coverage on the true underlying heterozygosity, which complicates the estimation of heterozygosity from sequence data. We show filters can correct for the confounding by sequencing depth. We find in practice that ratios of heterozygosity are more interpretable than absolute estimates, and show that we obtain excellent conformity of ratios of heterozygosity with previous estimates from higher coverage data.
0704.3715
Pablo Echenique
Pablo Echenique, J. L. Alonso
Efficient model chemistries for peptides. I. Split-valence Gaussian basis sets and the heterolevel approximation in RHF and MP2
54 pages, 16 figures, LaTeX, AMSTeX, Submitted to J. Comp. Chem
J. Comp. Chem. (2008) 1408-1422
10.1002/jcc.20900
null
q-bio.QM cond-mat.soft q-bio.BM
null
We present an exhaustive study of more than 250 ab initio potential energy surfaces (PESs) of the model dipeptide HCO-L-Ala-NH2. The model chemistries (MCs) used are constructed as homo- and heterolevels involving possibly different RHF and MP2 calculations for the geometry and the energy. The basis sets used belong to a sample of 39 selected representatives from Pople's split-valence families, ranging from the small 3-21G to the large 6-311++G(2df,2pd). The reference PES to which the rest are compared is the MP2/6-311++G(2df,2pd) homolevel, which, as far as we are aware, is the most accurate PES of a dipeptide in the literature. The aim of the study presented is twofold: On the one hand, the evaluation of the influence of polarization and diffuse functions in the basis set, distinguishing between those placed at 1st-row atoms and those placed at hydrogens, as well as the effect of different contraction and valence splitting schemes. On the other hand, the investigation of the heterolevel assumption, which is defined here to be that which states that heterolevel MCs are more efficient than homolevel MCs. The heterolevel approximation is very commonly used in the literature, but it is seldom checked. As far as we know, the only tests for peptides or related systems have been performed using a small number of conformers, and this is the first time that this potentially very economical approximation is tested in full PESs. In order to achieve these goals, all data sets have been compared and analyzed in a way which captures the nearness concept in the space of MCs.
[ { "created": "Fri, 27 Apr 2007 17:26:29 GMT", "version": "v1" }, { "created": "Tue, 26 Jun 2007 08:36:49 GMT", "version": "v2" } ]
2013-06-21
[ [ "Echenique", "Pablo", "" ], [ "Alonso", "J. L.", "" ] ]
We present an exhaustive study of more than 250 ab initio potential energy surfaces (PESs) of the model dipeptide HCO-L-Ala-NH2. The model chemistries (MCs) used are constructed as homo- and heterolevels involving possibly different RHF and MP2 calculations for the geometry and the energy. The basis sets used belong to a sample of 39 selected representatives from Pople's split-valence families, ranging from the small 3-21G to the large 6-311++G(2df,2pd). The reference PES to which the rest are compared is the MP2/6-311++G(2df,2pd) homolevel, which, as far as we are aware, is the most accurate PES of a dipeptide in the literature. The aim of the study presented is twofold: On the one hand, the evaluation of the influence of polarization and diffuse functions in the basis set, distinguishing between those placed at 1st-row atoms and those placed at hydrogens, as well as the effect of different contraction and valence splitting schemes. On the other hand, the investigation of the heterolevel assumption, which is defined here to be that which states that heterolevel MCs are more efficient than homolevel MCs. The heterolevel approximation is very commonly used in the literature, but it is seldom checked. As far as we know, the only tests for peptides or related systems have been performed using a small number of conformers, and this is the first time that this potentially very economical approximation is tested in full PESs. In order to achieve these goals, all data sets have been compared and analyzed in a way which captures the nearness concept in the space of MCs.
1811.00941
Yuri A. Dabaghian
Andrey Babichev, Dmitriy Morozov and Yuri Dabaghian
Replays of spatial memories suppress topological fluctuations in cognitive map
21 pages, 5 figures, 3 supplementary figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The spiking activity of the hippocampal place cells plays a key role in producing and sustaining an internalized representation of the ambient space---a cognitive map. These cells not only exhibit location-specific spiking during navigation, but also may rapidly replay the navigated routes through endogenous dynamics of the hippocampal network. Physiologically, such reactivations are viewed as manifestations of "memory replays" that help to learn new information and to consolidate previously acquired memories by reinforcing synapses in the parahippocampal networks. Below we propose a computational model of these processes that allows assessing the effect of replays on acquiring a robust topological map of the environment and demonstrate that replays may play a key role in stabilizing the hippocampal representation of space.
[ { "created": "Fri, 2 Nov 2018 15:43:40 GMT", "version": "v1" } ]
2018-11-05
[ [ "Babichev", "Andrey", "" ], [ "Morozov", "Dmitriy", "" ], [ "Dabaghian", "Yuri", "" ] ]
The spiking activity of the hippocampal place cells plays a key role in producing and sustaining an internalized representation of the ambient space---a cognitive map. These cells not only exhibit location-specific spiking during navigation, but may also rapidly replay the navigated routes through endogenous dynamics of the hippocampal network. Physiologically, such reactivations are viewed as manifestations of "memory replays" that help to learn new information and to consolidate previously acquired memories by reinforcing synapses in the parahippocampal networks. Below we propose a computational model of these processes that allows assessing the effect of replays on acquiring a robust topological map of the environment and demonstrate that replays may play a key role in stabilizing the hippocampal representation of space.
1212.3205
Olivier Rivoire
Olivier Rivoire
Elements of Coevolution in Biological Sequences
null
Phys. Rev. Lett. 110, 178102 (2013)
10.1103/PhysRevLett.110.178102
null
q-bio.BM cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Studies of coevolution of amino acids within and between proteins have revealed two types of coevolving units: coevolving contacts, which are pairs of amino acids distant along the sequence but in contact in the three-dimensional structure, and sectors, which are larger groups of structurally connected amino acids that underlie the biochemical properties of proteins. By reconciling two approaches for analyzing correlations in multiple sequence alignments, we link these two findings together and with coevolving units of intermediate size, called `sectons', which are shown to provide additional information. By extending the analysis to the co-occurrence of orthologous genes in bacterial genomes, we also show that the methods and results are general and relevant beyond protein structures.
[ { "created": "Thu, 13 Dec 2012 16:05:33 GMT", "version": "v1" }, { "created": "Fri, 19 Apr 2013 07:16:05 GMT", "version": "v2" } ]
2015-06-12
[ [ "Rivoire", "Olivier", "" ] ]
Studies of coevolution of amino acids within and between proteins have revealed two types of coevolving units: coevolving contacts, which are pairs of amino acids distant along the sequence but in contact in the three-dimensional structure, and sectors, which are larger groups of structurally connected amino acids that underlie the biochemical properties of proteins. By reconciling two approaches for analyzing correlations in multiple sequence alignments, we link these two findings together and with coevolving units of intermediate size, called `sectons', which are shown to provide additional information. By extending the analysis to the co-occurrence of orthologous genes in bacterial genomes, we also show that the methods and results are general and relevant beyond protein structures.
1205.6194
Patrick Coquillard
Patrick Coquillard (IBSV), Alexandre Muzy (LISA), Francine Diener
Optimal phenotypic plasticity in a stochastic environment minimizes the cost/benefit ratio
null
Ecological Modelling 242 (2012) 28-36
10.1016/j.ecolmodel.2012.05.019
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the question of optimal phenotypic plasticity as a response to environmental fluctuations while optimizing the cost/benefit ratio, where the cost is the energetic expense of plasticity, and the benefit is fitness. The dispersion matrix \Sigma of the genes' response (H = ln|\Sigma|) is used: (i) in a numerical model as a metric of the phenotypic variance reduction in the course of fitness optimization, then (ii) in an analytical model, in order to optimize parameters under the constraint of limited energy availability. Results lead us to speculate that such optimized organisms should maximize their exergy and thus the direct/indirect work they exert on the habitat. It is shown that the optimal cost/benefit ratio belongs to an interval in which differences between individuals should not substantially modify their fitness. Consequently, even in the case of an ideal population, close to the optimal plasticity, a certain level of genetic diversity should be long conserved, and a part, still to be determined, of intra-population genetic diversity probably stems from environmental fluctuations. Species confronted with monotonous factors should be less plastic than vicariant species experiencing heterogeneous environments. Analogies with the MaxEnt algorithm of E.T. Jaynes (1957) are discussed, leading to the conjecture that this method may be applied even in the case of multivariate but non-multinormal distributions of the responses.
[ { "created": "Mon, 28 May 2012 19:36:11 GMT", "version": "v1" } ]
2012-11-12
[ [ "Coquillard", "Patrick", "", "IBSV" ], [ "Muzy", "Alexandre", "", "LISA" ], [ "Diener", "Francine", "" ] ]
This paper addresses the question of optimal phenotypic plasticity as a response to environmental fluctuations while optimizing the cost/benefit ratio, where the cost is the energetic expense of plasticity, and the benefit is fitness. The dispersion matrix \Sigma of the genes' response (H = ln|\Sigma|) is used: (i) in a numerical model as a metric of the phenotypic variance reduction in the course of fitness optimization, then (ii) in an analytical model, in order to optimize parameters under the constraint of limited energy availability. Results lead us to speculate that such optimized organisms should maximize their exergy and thus the direct/indirect work they exert on the habitat. It is shown that the optimal cost/benefit ratio belongs to an interval in which differences between individuals should not substantially modify their fitness. Consequently, even in the case of an ideal population, close to the optimal plasticity, a certain level of genetic diversity should be long conserved, and a part, still to be determined, of intra-population genetic diversity probably stems from environmental fluctuations. Species confronted with monotonous factors should be less plastic than vicariant species experiencing heterogeneous environments. Analogies with the MaxEnt algorithm of E.T. Jaynes (1957) are discussed, leading to the conjecture that this method may be applied even in the case of multivariate but non-multinormal distributions of the responses.
q-bio/0501023
Alan McKane
A. J. McKane and T. J. Newman
Predator-prey cycles from resonant amplification of demographic stochasticity
4 pages, 2 figures
null
10.1103/PhysRevLett.94.218102
null
q-bio.PE
null
In this paper we present the simplest individual level model of predator-prey dynamics and show, via direct calculation, that it exhibits cycling behavior. The deterministic analogue of our model, recovered when the number of individuals is infinitely large, is the Volterra system (with density-dependent prey reproduction) which is well-known to fail to predict cycles. This difference in behavior can be traced to a resonant amplification of demographic fluctuations which disappears only when the number of individuals is strictly infinite. Our results indicate that additional biological mechanisms, such as predator satiation, may not be necessary to explain observed predator-prey cycles in real (finite) populations.
[ { "created": "Sun, 16 Jan 2005 14:53:23 GMT", "version": "v1" } ]
2009-11-11
[ [ "McKane", "A. J.", "" ], [ "Newman", "T. J.", "" ] ]
In this paper we present the simplest individual level model of predator-prey dynamics and show, via direct calculation, that it exhibits cycling behavior. The deterministic analogue of our model, recovered when the number of individuals is infinitely large, is the Volterra system (with density-dependent prey reproduction) which is well-known to fail to predict cycles. This difference in behavior can be traced to a resonant amplification of demographic fluctuations which disappears only when the number of individuals is strictly infinite. Our results indicate that additional biological mechanisms, such as predator satiation, may not be necessary to explain observed predator-prey cycles in real (finite) populations.
1809.04872
Tanja Slotte
Tiina M. Mattila, Benjamin Laenen, Tanja Slotte
Population genomics of transitions to selfing in Brassicaceae model systems
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Many plants harbor complex mechanisms that promote outcrossing and efficient pollen transfer. These include floral adaptations as well as genetic mechanisms, such as molecular self-incompatibility (SI) systems. The maintenance of such systems over long evolutionary timescales suggests that outcrossing is favorable over a broad range of conditions. Conversely, SI has repeatedly been lost, often in association with transitions to self-fertilization (selfing). This transition is favored when the short-term advantages of selfing outweigh the costs, primarily inbreeding depression. The transition to selfing is expected to have major effects on population genetic variation and adaptive potential, as well as on genome evolution. In the Brassicaceae, many studies on the population genetic, gene regulatory and genomic effects of selfing have centered on the model plant Arabidopsis thaliana and the crucifer genus Capsella. The accumulation of population genomics datasets has allowed detailed investigation of where, when and how the transition to selfing occurred. Future studies will take advantage of the development of population genetics theory on the impact of selfing, especially regarding positive selection. Furthermore, investigation of systems including recent transitions to selfing, mixed mating populations and/or multiple independent replicates of the same transition will facilitate dissecting the effects of mating system variation from processes driven by demography.
[ { "created": "Thu, 13 Sep 2018 10:19:05 GMT", "version": "v1" }, { "created": "Mon, 7 Jan 2019 14:18:07 GMT", "version": "v2" } ]
2019-01-08
[ [ "Mattila", "Tiina M.", "" ], [ "Laenen", "Benjamin", "" ], [ "Slotte", "Tanja", "" ] ]
Many plants harbor complex mechanisms that promote outcrossing and efficient pollen transfer. These include floral adaptations as well as genetic mechanisms, such as molecular self-incompatibility (SI) systems. The maintenance of such systems over long evolutionary timescales suggests that outcrossing is favorable over a broad range of conditions. Conversely, SI has repeatedly been lost, often in association with transitions to self-fertilization (selfing). This transition is favored when the short-term advantages of selfing outweigh the costs, primarily inbreeding depression. The transition to selfing is expected to have major effects on population genetic variation and adaptive potential, as well as on genome evolution. In the Brassicaceae, many studies on the population genetic, gene regulatory and genomic effects of selfing have centered on the model plant Arabidopsis thaliana and the crucifer genus Capsella. The accumulation of population genomics datasets has allowed detailed investigation of where, when and how the transition to selfing occurred. Future studies will take advantage of the development of population genetics theory on the impact of selfing, especially regarding positive selection. Furthermore, investigation of systems including recent transitions to selfing, mixed mating populations and/or multiple independent replicates of the same transition will facilitate dissecting the effects of mating system variation from processes driven by demography.
2009.08101
Robert Prentner
Robert Prentner
Attracting Sets in Perceptual Networks
null
null
null
null
q-bio.NC cs.NE
http://creativecommons.org/licenses/by/4.0/
This document gives a specification for the model used in [1]. It presents a simple way of optimizing mutual information between some input and the attractors of a (noisy) network, using a genetic algorithm. The nodes of this network are modeled as simplified versions of the structures described in the "interface theory of perception" [2]. Accordingly, the system is referred to as a "perceptual network". The present paper is an edited version of technical parts of [1] and serves as accompanying text for the Python implementation PerceptualNetworks, freely available under [3]. 1. Prentner, R., and Fields, C.. Using AI methods to Evaluate a Minimal Model for Perception. OpenPhilosophy 2019, 2, 503-524. 2. Hoffman, D. D., Prakash, C., and Singh, M.. The Interface Theory of Perception. Psychonomic Bulletin and Review 2015, 22, 1480-1506. 3. Prentner, R.. PerceptualNetworks. https://github.com/RobertPrentner/PerceptualNetworks. (accessed September 17 2020)
[ { "created": "Thu, 17 Sep 2020 06:46:44 GMT", "version": "v1" } ]
2020-09-18
[ [ "Prentner", "Robert", "" ] ]
This document gives a specification for the model used in [1]. It presents a simple way of optimizing mutual information between some input and the attractors of a (noisy) network, using a genetic algorithm. The nodes of this network are modeled as simplified versions of the structures described in the "interface theory of perception" [2]. Accordingly, the system is referred to as a "perceptual network". The present paper is an edited version of technical parts of [1] and serves as accompanying text for the Python implementation PerceptualNetworks, freely available under [3]. 1. Prentner, R., and Fields, C.. Using AI methods to Evaluate a Minimal Model for Perception. OpenPhilosophy 2019, 2, 503-524. 2. Hoffman, D. D., Prakash, C., and Singh, M.. The Interface Theory of Perception. Psychonomic Bulletin and Review 2015, 22, 1480-1506. 3. Prentner, R.. PerceptualNetworks. https://github.com/RobertPrentner/PerceptualNetworks. (accessed September 17 2020)
2202.13004
Michael Kochen
Michael A. Kochen, H. Steven Wiley, Song Feng, Herbert M. Sauro
SBbadger: Biochemical Reaction Networks with Definable Degree Distributions
null
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by-nc-sa/4.0/
Motivation: An essential step in developing computational tools for the inference, optimization, and simulation of biochemical reaction networks is gauging tool performance against earlier efforts using an appropriate set of benchmarks. General strategies for the assembly of benchmark models include collection from the literature, creation via subnetwork extraction, and de novo generation. However, with respect to biochemical reaction networks, these approaches and their associated tools are either poorly suited to generate models that reflect the wide range of properties found in natural biochemical networks or to do so in numbers that enable rigorous statistical analysis. Results: In this work we present SBbadger, a python-based software tool for the generation of synthetic biochemical reaction or metabolic networks with user-defined degree distributions, multiple available kinetic formalisms, and a host of other definable properties. SBbadger thus enables the creation of benchmark model sets that reflect properties of biological systems and generate the kinetics and model structures typically targeted by computational analysis and inference software. Here we detail the computational and algorithmic workflow of SBbadger, demonstrate its performance under various settings, provide sample outputs, and compare it to currently available biochemical reaction network generation software.
[ { "created": "Fri, 25 Feb 2022 22:32:17 GMT", "version": "v1" }, { "created": "Sun, 24 Jul 2022 20:23:21 GMT", "version": "v2" }, { "created": "Sun, 28 Aug 2022 23:57:05 GMT", "version": "v3" }, { "created": "Mon, 12 Sep 2022 19:03:14 GMT", "version": "v4" } ]
2022-09-14
[ [ "Kochen", "Michael A.", "" ], [ "Wiley", "H. Steven", "" ], [ "Feng", "Song", "" ], [ "Sauro", "Herbert M.", "" ] ]
Motivation: An essential step in developing computational tools for the inference, optimization, and simulation of biochemical reaction networks is gauging tool performance against earlier efforts using an appropriate set of benchmarks. General strategies for the assembly of benchmark models include collection from the literature, creation via subnetwork extraction, and de novo generation. However, with respect to biochemical reaction networks, these approaches and their associated tools are either poorly suited to generate models that reflect the wide range of properties found in natural biochemical networks or to do so in numbers that enable rigorous statistical analysis. Results: In this work we present SBbadger, a python-based software tool for the generation of synthetic biochemical reaction or metabolic networks with user-defined degree distributions, multiple available kinetic formalisms, and a host of other definable properties. SBbadger thus enables the creation of benchmark model sets that reflect properties of biological systems and generate the kinetics and model structures typically targeted by computational analysis and inference software. Here we detail the computational and algorithmic workflow of SBbadger, demonstrate its performance under various settings, provide sample outputs, and compare it to currently available biochemical reaction network generation software.
1602.05832
Karolis Uziela
Karolis Uziela, Bj\"orn Wallner, Arne Elofsson
ProQ3: Improved model quality assessments using Rosetta energy terms
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: To assess the quality of a protein model, i.e. to estimate how close it is to its native structure, using no other information than the structure of the model has been shown to be useful for structure prediction. The state-of-the-art method, ProQ2, is based on a machine learning approach that uses a number of features calculated from a protein model. Here, we examine if these features can be exchanged with energy terms calculated from Rosetta and if a combination of these terms can improve the quality assessment. Results: When using the full atom energy function from Rosetta in ProQRosFA, the QA is on par with our previous state-of-the-art method, ProQ2. The method based on the low-resolution centroid scoring function, ProQRosCen, performs almost as well, and the combination of all three methods, ProQ2, ProQRosFA and ProQCenFA, into ProQ3 shows superior performance over ProQ2. Availability: ProQ3 is freely available on BitBucket at https://bitbucket.org/ElofssonLab/proq3
[ { "created": "Mon, 25 Jan 2016 14:32:48 GMT", "version": "v1" } ]
2016-02-19
[ [ "Uziela", "Karolis", "" ], [ "Wallner", "Björn", "" ], [ "Elofsson", "Arne", "" ] ]
Motivation: To assess the quality of a protein model, i.e. to estimate how close it is to its native structure, using no other information than the structure of the model has been shown to be useful for structure prediction. The state-of-the-art method, ProQ2, is based on a machine learning approach that uses a number of features calculated from a protein model. Here, we examine if these features can be exchanged with energy terms calculated from Rosetta and if a combination of these terms can improve the quality assessment. Results: When using the full atom energy function from Rosetta in ProQRosFA, the QA is on par with our previous state-of-the-art method, ProQ2. The method based on the low-resolution centroid scoring function, ProQRosCen, performs almost as well, and the combination of all three methods, ProQ2, ProQRosFA and ProQCenFA, into ProQ3 shows superior performance over ProQ2. Availability: ProQ3 is freely available on BitBucket at https://bitbucket.org/ElofssonLab/proq3
1908.03264
Ruben Sanchez-Romero
Ruben Sanchez-Romero, Joseph D. Ramsey, Kun Zhang, Clark Glymour
Identification of Effective Connectivity Subregions
null
null
null
null
q-bio.NC cs.LG eess.IV q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Standard fMRI connectivity analyses depend on aggregating the time series of individual voxels within regions of interest (ROIs). In certain cases, this spatial aggregation implies a loss of valuable functional and anatomical information about smaller subsets of voxels that drive the ROI level connectivity. We use two recently published graphical search methods to identify subsets of voxels that are highly responsible for the connectivity between larger ROIs. To illustrate the procedure, we apply both methods to longitudinal high-resolution resting state fMRI data from regions in the medial temporal lobe from a single individual. Both methods recovered similar subsets of voxels within larger ROIs of entorhinal cortex and hippocampus subfields that also show spatial consistency across different scanning sessions and across hemispheres. In contrast to standard functional connectivity methods, both algorithms applied here are robust against false positive connections produced by common causes and indirect paths (in contrast to Pearson's correlation) and common effect conditioning (in contrast to partial correlation based approaches). These algorithms allow for identification of subregions of voxels driving the connectivity between regions of interest, recovering valuable anatomical and functional information that is lost when ROIs are aggregated. Both methods are specially suited for voxelwise connectivity research, given their running times and scalability to big data problems.
[ { "created": "Thu, 8 Aug 2019 20:43:22 GMT", "version": "v1" } ]
2019-08-12
[ [ "Sanchez-Romero", "Ruben", "" ], [ "Ramsey", "Joseph D.", "" ], [ "Zhang", "Kun", "" ], [ "Glymour", "Clark", "" ] ]
Standard fMRI connectivity analyses depend on aggregating the time series of individual voxels within regions of interest (ROIs). In certain cases, this spatial aggregation implies a loss of valuable functional and anatomical information about smaller subsets of voxels that drive the ROI level connectivity. We use two recently published graphical search methods to identify subsets of voxels that are highly responsible for the connectivity between larger ROIs. To illustrate the procedure, we apply both methods to longitudinal high-resolution resting state fMRI data from regions in the medial temporal lobe from a single individual. Both methods recovered similar subsets of voxels within larger ROIs of entorhinal cortex and hippocampus subfields that also show spatial consistency across different scanning sessions and across hemispheres. In contrast to standard functional connectivity methods, both algorithms applied here are robust against false positive connections produced by common causes and indirect paths (in contrast to Pearson's correlation) and common effect conditioning (in contrast to partial correlation based approaches). These algorithms allow for identification of subregions of voxels driving the connectivity between regions of interest, recovering valuable anatomical and functional information that is lost when ROIs are aggregated. Both methods are specially suited for voxelwise connectivity research, given their running times and scalability to big data problems.
2301.09570
Ahmed Ayman
Ahmed Ayman - Mohamed Sabry
A Novel Power-optimized CMOS sEMG Device with Ultra Low-noise integrated with ConvNet (VGG16) for Biomedical Applications
null
null
10.13140/RG.2.2.28054.63044
null
q-bio.NC
http://creativecommons.org/licenses/by-sa/4.0/
Needle bio-potential sensors for measuring muscle and brain activity require invasive surgical targeted muscle reinnervation (TMR) and a demanding maintenance process, while surface bio-potential sensors lack clear bio-signal reading (signal interference). In this research, a novel power-optimized complementary metal-oxide-semiconductor (CMOS) surface electromyography (sEMG) device is developed to improve the efficiency and quality of the captured bio-signal for biomedical applications: the early diagnosis of neurological disorders (Dystonia) and a novel mind-controlled prosthetic leg compatible with human daily activities. A novel sEMG design composed of a CMOS op-amp and a PIC16F877A 8-bit CMOS Flash-based microcontroller is utilized to minimize power consumption and data processing time. The sEMG circuit is implemented with a developed analog filter along with an infinite impulse response (IIR) digital filter via the Fast Fourier Transform (FFT), Z-transform, and difference equations. The analysis shows a significant 169.2% noise reduction in the recorded EMG signal using the developed digital filter compared to the analog one, according to the numerical root mean square error (RMSE). Moreover, the digital IIR filter was tested in two stages: algorithmic and real-world. The IIR filter's algorithmic (MATLAB) and real-world RMSEs were 0.03616 and 0.05224, respectively, with a notable 20.8% improvement in data processing duration for EMG signal analysis. Optimized VGG, AlexNet, and ResNet ConvNets were trained and tested on 15 public EEG (62-electrode) subjects and 18 subjects' observed EMG data. The results indicate that VGG16-1D achieved the highest accuracy at 98.43%. During real testing, the accuracy was 95.8 +/- 4.6% for 16 subjects (6 Amputees, 10 Dystonia). This study demonstrates the potential for sEMG, paving the way for biomedical applications.
[ { "created": "Wed, 4 Jan 2023 01:46:55 GMT", "version": "v1" }, { "created": "Wed, 10 May 2023 14:45:52 GMT", "version": "v2" } ]
2023-05-11
[ [ "Sabry", "Ahmed Ayman - Mohamed", "" ] ]
Needle bio-potential sensors for measuring muscle and brain activity require invasive surgical targeted muscle reinnervation (TMR) and a demanding maintenance process, while surface bio-potential sensors lack clear bio-signal reading (signal interference). In this research, a novel power-optimized complementary metal-oxide-semiconductor (CMOS) surface electromyography (sEMG) device is developed to improve the efficiency and quality of the captured bio-signal for biomedical applications: the early diagnosis of neurological disorders (Dystonia) and a novel mind-controlled prosthetic leg compatible with human daily activities. A novel sEMG design composed of a CMOS op-amp and a PIC16F877A 8-bit CMOS Flash-based microcontroller is utilized to minimize power consumption and data processing time. The sEMG circuit is implemented with a developed analog filter along with an infinite impulse response (IIR) digital filter via the Fast Fourier Transform (FFT), Z-transform, and difference equations. The analysis shows a significant 169.2% noise reduction in the recorded EMG signal using the developed digital filter compared to the analog one, according to the numerical root mean square error (RMSE). Moreover, the digital IIR filter was tested in two stages: algorithmic and real-world. The IIR filter's algorithmic (MATLAB) and real-world RMSEs were 0.03616 and 0.05224, respectively, with a notable 20.8% improvement in data processing duration for EMG signal analysis. Optimized VGG, AlexNet, and ResNet ConvNets were trained and tested on 15 public EEG (62-electrode) subjects and 18 subjects' observed EMG data. The results indicate that VGG16-1D achieved the highest accuracy at 98.43%. During real testing, the accuracy was 95.8 +/- 4.6% for 16 subjects (6 Amputees, 10 Dystonia). This study demonstrates the potential for sEMG, paving the way for biomedical applications.
1308.0668
David Lusseau
David Lusseau
Quantum-like perception entanglement leads to advantageous collective decisions
8 pages, 3 figures, paper presented at the XXXIII International Ethological Congress, 8 August 2013, Newcastle-upon-Tyne, UK
null
null
null
q-bio.PE q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social animals have to make collective decisions on a daily basis. In most instances, these decisions are taken by consensus, when the group does what the majority of individuals want. Individuals have to base these decisions on the information they perceive from their socioecological landscape. The perception mechanisms they use can influence the cost of collective decisions. Here I show that when group-living individuals perceive their environment concurrently for the same decisions, a quantum-like perception entanglement process can confer less costly collective decisions than when individuals collect their information independently. This highlights a mechanism that can help explain what may seem to be irrational group-living behavior and opens avenues to develop empirical tests for quantum decision theory.
[ { "created": "Sat, 3 Aug 2013 07:54:08 GMT", "version": "v1" } ]
2013-08-06
[ [ "Lusseau", "David", "" ] ]
Social animals have to make collective decisions on a daily basis. In most instances, these decisions are taken by consensus, when the group does what the majority of individuals want. Individuals have to base these decisions on the information they perceive from their socioecological landscape. The perception mechanisms they use can influence the cost of collective decisions. Here I show that when group-living individuals perceive their environment concurrently for the same decisions, a quantum-like perception entanglement process can confer less costly collective decisions than when individuals collect their information independently. This highlights a mechanism that can help explain what may seem to be irrational group-living behavior and opens avenues to develop empirical tests for quantum decision theory.
1902.05621
Saeed Ranjbar
Ali Esmaeili and Saeed Ranjbar
A Revision of the Bernoulli Equation as a Controller of the Fick's Diffusion Equation in Drug Delivery Modeling
5 pages, 1 figure
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical equations can be used as effectual tools in drug delivery systems modeling and are also highly helpful for gaining a theoretical understanding of controlled drug release and diffusion mechanisms. In this study we aim to present a mathematical combination of the Bernoulli equation and Fick's equation as a diffusion controller in drug delivery systems. For this purpose we have revised the Bernoulli equation as an additional controller and complementary method to Fick's diffusion equation to detect the optimal delivery direction and to control the diffusion divergence of the drug carrier in vascular systems during the transportation process in biological tissues. Therefore, by utilizing the Bernoulli equation we could determine the real direction by the route function f.
[ { "created": "Wed, 30 Jan 2019 13:49:21 GMT", "version": "v1" } ]
2019-02-18
[ [ "Esmaeili", "Ali", "" ], [ "Ranjbar", "Saeed", "" ] ]
Mathematical equations can be used as effectual tools in drug delivery systems modeling and are also highly helpful for gaining a theoretical understanding of controlled drug release and diffusion mechanisms. In this study we aim to present a mathematical combination of the Bernoulli equation and Fick's equation as a diffusion controller in drug delivery systems. For this purpose we have revised the Bernoulli equation as an additional controller and complementary method to Fick's diffusion equation to detect the optimal delivery direction and to control the diffusion divergence of the drug carrier in vascular systems during the transportation process in biological tissues. Therefore, by utilizing the Bernoulli equation we could determine the real direction by the route function f.
q-bio/0311012
Pau Fern\'andez
Pau Fernandez and Ricard V. Sole
The Role of Computation in Complex Regulatory Networks
to appear in "Scale-free Networks and Genome Biology", E. Koonin et al. (eds.), Landes Bioscience (2003)
null
null
null
q-bio.MN q-bio.GN q-bio.PE
null
Biological phenomena differ significantly from physical phenomena. At the heart of this distinction is the fact that biological entities have computational abilities and thus they are inherently difficult to predict. This is the reason why simplified models that provide the minimal requirements for computation turn out to be very useful to study networks of many components. In this chapter, we briefly review the dynamical aspects of models of regulatory networks, discussing their most salient features, and we also show how these models can give clues about the way in which networks may organize their capacity to evolve, by providing simple examples of the implementation of robustness and modularity.
[ { "created": "Mon, 10 Nov 2003 13:22:35 GMT", "version": "v1" } ]
2009-09-29
[ [ "Fernandez", "Pau", "" ], [ "Sole", "Ricard V.", "" ] ]
Biological phenomena differ significantly from physical phenomena. At the heart of this distinction is the fact that biological entities have computational abilities and thus they are inherently difficult to predict. This is the reason why simplified models that provide the minimal requirements for computation turn out to be very useful to study networks of many components. In this chapter, we briefly review the dynamical aspects of models of regulatory networks, discussing their most salient features, and we also show how these models can give clues about the way in which networks may organize their capacity to evolve, by providing simple examples of the implementation of robustness and modularity.
1011.2071
Steven Duplij
Diana Duplij (Institute of Molecular Biology and Genetics, Kiev, Ukraine)
Comparative analysis of the nucleotide composition biases in exons and introns of human genes
8 pages, 2 figures, 1 table
null
null
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The nucleotide composition of human genes is analyzed, with a special emphasis on transcription-related strand asymmetries. Such asymmetries may be associated with different mutational rates arising from two principal factors. The first is transcription-coupled repair and the second is selective pressure related to optimization of translation efficiency. The former factor affects both coding and noncoding regions of a gene, while the latter applies only to the coding regions. Compositional asymmetries calculated at the third position of a codon in coding (exon) and noncoding (intron, UTR, upstream, and downstream) regions of human genes are compared. It is shown that the keto-skew (an excess of the frequencies of G and T nucleotides over the frequencies of A and C nucleotides in the same strand) is most pronounced in intronic regions, less pronounced in coding regions, and has near-zero values in untranscribed regions. The keto-skew correlates with the level of gene expression in germ-line cells in both introns and exons. We propose to use the results of our analysis to estimate the contribution of different evolutionary factors to the transcription-related compositional biases.
[ { "created": "Tue, 9 Nov 2010 13:11:31 GMT", "version": "v1" } ]
2010-11-10
[ [ "Duplij", "Diana", "", "Institute of Molecular Biology and Genetics, Kiev,\n Ukraine" ] ]
The nucleotide composition of human genes is analyzed, with a special emphasis on transcription-related strand asymmetries. Such asymmetries may be associated with different mutational rates arising from two principal factors. The first is transcription-coupled repair and the second is selective pressure related to optimization of translation efficiency. The former factor affects both coding and noncoding regions of a gene, while the latter applies only to the coding regions. Compositional asymmetries calculated at the third position of a codon in coding (exon) and noncoding (intron, UTR, upstream, and downstream) regions of human genes are compared. It is shown that the keto-skew (an excess of the frequencies of G and T nucleotides over the frequencies of A and C nucleotides in the same strand) is most pronounced in intronic regions, less pronounced in coding regions, and has near-zero values in untranscribed regions. The keto-skew correlates with the level of gene expression in germ-line cells in both introns and exons. We propose to use the results of our analysis to estimate the contribution of different evolutionary factors to the transcription-related compositional biases.
1609.08611
Jimmy Garnier
Jimmy Garnier (CNRS, USMB, Universit\'e de Savoie, Universit\'e de Chamb\'ery), Mark Lewis
Expansion under climate change: the genetic consequences
null
Bulletin of Mathematical Biology, Springer Verlag, 2016
null
null
q-bio.PE math.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Range expansion and range shifts are crucial population responses to climate change. Genetic consequences are not well understood but are clearly coupled to ecological dynamics that, in turn, are driven by shifting climate conditions. We model a population with a deterministic reaction-diffusion model coupled to a heterogeneous environment that develops in time due to climate change. We decompose the resulting travelling wave solution into neutral genetic components to analyse the spatio-temporal dynamics of its genetic structure. Our analysis shows that range expansions and range shifts under slow climate change preserve genetic diversity. This is because slow climate change creates range boundaries that promote spatial mixing of genetic components. Mathematically, the mixing leads to so-called pushed travelling wave solutions. This mixing phenomenon is not seen in spatially homogeneous environments, where range expansion reduces genetic diversity through gene surfing arising from pulled travelling wave solutions. However, the preservation of diversity is diminished when climate change occurs too quickly. Using diversity indices, we show that fast expansions and range shifts erode genetic diversity more than slow range expansions and range shifts. Our study provides analytical insight into the dynamics of travelling wave solutions in heterogeneous environments.
[ { "created": "Tue, 27 Sep 2016 14:39:42 GMT", "version": "v1" } ]
2016-09-29
[ [ "Garnier", "Jimmy", "", "CNRS, USMB, Université de Savoie, Université de\n Chambéry" ], [ "Lewis", "Mark", "" ] ]
Range expansion and range shifts are crucial population responses to climate change. Genetic consequences are not well understood but are clearly coupled to ecological dynamics that, in turn, are driven by shifting climate conditions. We model a population with a deterministic reaction-diffusion model coupled to a heterogeneous environment that develops in time due to climate change. We decompose the resulting travelling wave solution into neutral genetic components to analyse the spatio-temporal dynamics of its genetic structure. Our analysis shows that range expansions and range shifts under slow climate change preserve genetic diversity. This is because slow climate change creates range boundaries that promote spatial mixing of genetic components. Mathematically, the mixing leads to so-called pushed travelling wave solutions. This mixing phenomenon is not seen in spatially homogeneous environments, where range expansion reduces genetic diversity through gene surfing arising from pulled travelling wave solutions. However, the preservation of diversity is diminished when climate change occurs too quickly. Using diversity indices, we show that fast expansions and range shifts erode genetic diversity more than slow range expansions and range shifts. Our study provides analytical insight into the dynamics of travelling wave solutions in heterogeneous environments.
1712.00351
Wilfred Ndifon
A. M. Degoot, Faraimunashe Chirove, and Wilfred Ndifon
Trans-allelic model for prediction of peptide:MHC-II interactions
null
null
null
null
q-bio.QM q-bio.BM q-bio.GN stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Major histocompatibility complex class II (MHC-II) molecules are trans-membrane proteins and key components of the cellular immune system. Upon recognition of foreign peptides expressed on the MHC-II binding groove, helper T cells mount an immune response against invading pathogens. Therefore, mechanistic identification and knowledge of the physico-chemical features that govern interactions between peptides and MHC-II molecules is useful for the design of effective epitope-based vaccines, as well as for understanding immune responses. In this paper, we present a comprehensive trans-allelic prediction model, a generalized version of our previous biophysical model, that can predict peptide interactions for all three human MHC-II loci (HLA-DR, HLA-DP and HLA-DQ), using both peptide sequence data and structural information of MHC-II molecules. The advantage of this approach over other machine learning models is that it offers a simple and plausible physical explanation for peptide-MHC-II interactions. We train the model using a benchmark experimental dataset, and measure its predictive performance using novel data. Despite its relative simplicity, we find that the model has comparable performance to the state-of-the-art method. Focusing on the physical bases of peptide-MHC binding, we find support for previous theoretical predictions about the contributions of certain binding pockets to the binding energy. Additionally, we find that binding pockets P4 and P5 of HLA-DP, which were not previously considered as primary anchors, do make strong contributions to the binding energy. Together, the results indicate that our model can serve as a useful complement to alternative approaches to predicting peptide-MHC interactions.
[ { "created": "Fri, 1 Dec 2017 15:03:40 GMT", "version": "v1" } ]
2017-12-05
[ [ "Degoot", "A. M.", "" ], [ "Chirove", "Faraimunashe", "" ], [ "Ndifon", "Wilfred", "" ] ]
Major histocompatibility complex class II (MHC-II) molecules are trans-membrane proteins and key components of the cellular immune system. Upon recognition of foreign peptides expressed on the MHC-II binding groove, helper T cells mount an immune response against invading pathogens. Therefore, mechanistic identification and knowledge of the physico-chemical features that govern interactions between peptides and MHC-II molecules is useful for the design of effective epitope-based vaccines, as well as for understanding immune responses. In this paper, we present a comprehensive trans-allelic prediction model, a generalized version of our previous biophysical model, that can predict peptide interactions for all three human MHC-II loci (HLA-DR, HLA-DP and HLA-DQ), using both peptide sequence data and structural information of MHC-II molecules. The advantage of this approach over other machine learning models is that it offers a simple and plausible physical explanation for peptide-MHC-II interactions. We train the model using a benchmark experimental dataset, and measure its predictive performance using novel data. Despite its relative simplicity, we find that the model has comparable performance to the state-of-the-art method. Focusing on the physical bases of peptide-MHC binding, we find support for previous theoretical predictions about the contributions of certain binding pockets to the binding energy. Additionally, we find that binding pockets P4 and P5 of HLA-DP, which were not previously considered as primary anchors, do make strong contributions to the binding energy. Together, the results indicate that our model can serve as a useful complement to alternative approaches to predicting peptide-MHC interactions.
2306.10168
Mitchell Ostrow
Mitchell Ostrow, Adam Eisen, Leo Kozachkov, Ila Fiete
Beyond Geometry: Comparing the Temporal Structure of Computation in Neural Circuits with Dynamical Similarity Analysis
22 pages, 9 figures
null
null
null
q-bio.NC cs.LG cs.NE q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
How can we tell whether two neural networks utilize the same internal processes for a particular computation? This question is pertinent for multiple subfields of neuroscience and machine learning, including neuroAI, mechanistic interpretability, and brain-machine interfaces. Standard approaches for comparing neural networks focus on the spatial geometry of latent states. Yet in recurrent networks, computations are implemented at the level of dynamics, and two networks performing the same computation with equivalent dynamics need not exhibit the same geometry. To bridge this gap, we introduce a novel similarity metric that compares two systems at the level of their dynamics, called Dynamical Similarity Analysis (DSA). Our method incorporates two components: Using recent advances in data-driven dynamical systems theory, we learn a high-dimensional linear system that accurately captures core features of the original nonlinear dynamics. Next, we compare different systems passed through this embedding using a novel extension of Procrustes Analysis that accounts for how vector fields change under orthogonal transformation. In four case studies, we demonstrate that our method disentangles conjugate and non-conjugate recurrent neural networks (RNNs), while geometric methods fall short. We additionally show that our method can distinguish learning rules in an unsupervised manner. Our method opens the door to comparative analyses of the essential temporal structure of computation in neural circuits.
[ { "created": "Fri, 16 Jun 2023 20:11:38 GMT", "version": "v1" }, { "created": "Tue, 22 Aug 2023 15:57:35 GMT", "version": "v2" }, { "created": "Sun, 29 Oct 2023 18:13:46 GMT", "version": "v3" } ]
2023-10-31
[ [ "Ostrow", "Mitchell", "" ], [ "Eisen", "Adam", "" ], [ "Kozachkov", "Leo", "" ], [ "Fiete", "Ila", "" ] ]
How can we tell whether two neural networks utilize the same internal processes for a particular computation? This question is pertinent for multiple subfields of neuroscience and machine learning, including neuroAI, mechanistic interpretability, and brain-machine interfaces. Standard approaches for comparing neural networks focus on the spatial geometry of latent states. Yet in recurrent networks, computations are implemented at the level of dynamics, and two networks performing the same computation with equivalent dynamics need not exhibit the same geometry. To bridge this gap, we introduce a novel similarity metric that compares two systems at the level of their dynamics, called Dynamical Similarity Analysis (DSA). Our method incorporates two components: Using recent advances in data-driven dynamical systems theory, we learn a high-dimensional linear system that accurately captures core features of the original nonlinear dynamics. Next, we compare different systems passed through this embedding using a novel extension of Procrustes Analysis that accounts for how vector fields change under orthogonal transformation. In four case studies, we demonstrate that our method disentangles conjugate and non-conjugate recurrent neural networks (RNNs), while geometric methods fall short. We additionally show that our method can distinguish learning rules in an unsupervised manner. Our method opens the door to comparative analyses of the essential temporal structure of computation in neural circuits.
1606.07497
Keith Burghardt
Keith Burghardt, Christopher Verzijl, Junming Huang, Matthew Ingram, Binyang Song, Marie-Pierre Hasne
Testing Modeling Assumptions in the West Africa Ebola Outbreak
16 pages, 14 figures
Sci. Rep., 6: 34598 (2016)
10.1038/srep34598
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Ebola virus in West Africa has infected almost 30,000 and killed over 11,000 people. Recent models of Ebola Virus Disease (EVD) have often made assumptions about how the disease spreads, such as uniform transmissibility and homogeneous mixing within a population. In this paper, we test whether these assumptions are necessarily correct, and offer simple solutions that may improve disease model accuracy. First, we use data and models of West African migration to show that EVD does not homogeneously mix, but spreads in a predictable manner. Next, we estimate the initial growth rate of EVD within country administrative divisions and find that it significantly decreases with population density. Finally, we test whether EVD strains have uniform transmissibility through a novel statistical test, and find that certain strains appear more often than expected by chance.
[ { "created": "Thu, 23 Jun 2016 22:09:37 GMT", "version": "v1" }, { "created": "Wed, 12 Oct 2016 18:12:43 GMT", "version": "v2" } ]
2016-10-13
[ [ "Burghardt", "Keith", "" ], [ "Verzijl", "Christopher", "" ], [ "Huang", "Junming", "" ], [ "Ingram", "Matthew", "" ], [ "Song", "Binyang", "" ], [ "Hasne", "Marie-Pierre", "" ] ]
The Ebola virus in West Africa has infected almost 30,000 and killed over 11,000 people. Recent models of Ebola Virus Disease (EVD) have often made assumptions about how the disease spreads, such as uniform transmissibility and homogeneous mixing within a population. In this paper, we test whether these assumptions are necessarily correct, and offer simple solutions that may improve disease model accuracy. First, we use data and models of West African migration to show that EVD does not homogeneously mix, but spreads in a predictable manner. Next, we estimate the initial growth rate of EVD within country administrative divisions and find that it significantly decreases with population density. Finally, we test whether EVD strains have uniform transmissibility through a novel statistical test, and find that certain strains appear more often than expected by chance.
1712.10280
Hongbo Jia
Hongbo Jia
First Draft on the xInf Model for Universal Physical Computation and Reverse Engineering of Natural Intelligence
32 pages, 4 figures
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Turing Machines are universal computing machines in theory. It has long been debated whether Turing Machines can simulate conscious mind behaviors in the materialistic universe. Three different hypotheses come out of this debate, in short: (A) they can; (B) they cannot; (C) Super-Turing machines can. Because Turing Machines and other theoretical computing models are abstract objects while behaviors are real observables, this debate involves at least three distinct fields of science and technology: physics, computer engineering, and experimental neuroscience. However, the languages used in these different fields are highly heterogeneous and not easily mutually interpretable, making it very difficult to reach even partial agreement in this debate. Therefore, the main goal of this manuscript is to establish a proper language that can translate among these different fields. First, I propose a theoretical model for analyzing how theoretical computing machines would physically run in physical time. This model, termed the xInf, is in the first place Turing-complete in theory and, depending on the properties of physical time, can be either Turing-equivalent or Super-Turing in the physical universe. The xInf Model is demonstrated to be a suitable universal language for translating among physics, computer engineering, and neuroscience. Finally, I propose a conjecture that there exists a Minimal Complete Set of rules in the xInf Model that enables the construction of a physical machine, built from inorganic materials, that can pass the Turing Test in physical time. I cannot demonstrate whether such a conjecture can be verified or falsified on paper using finite-order logic; my only solution is physical time itself, i.e., an evolutionary competition will eventually tell the conclusion.
[ { "created": "Tue, 26 Dec 2017 23:36:43 GMT", "version": "v1" } ]
2018-01-01
[ [ "Jia", "Hongbo", "" ] ]
Turing Machines are universal computing machines in theory. It has long been debated whether Turing Machines can simulate conscious mind behaviors in the materialistic universe. Three different hypotheses come out of this debate, in short: (A) they can; (B) they cannot; (C) Super-Turing machines can. Because Turing Machines and other theoretical computing models are abstract objects while behaviors are real observables, this debate involves at least three distinct fields of science and technology: physics, computer engineering, and experimental neuroscience. However, the languages used in these different fields are highly heterogeneous and not easily mutually interpretable, making it very difficult to reach even partial agreement in this debate. Therefore, the main goal of this manuscript is to establish a proper language that can translate among these different fields. First, I propose a theoretical model for analyzing how theoretical computing machines would physically run in physical time. This model, termed the xInf, is in the first place Turing-complete in theory and, depending on the properties of physical time, can be either Turing-equivalent or Super-Turing in the physical universe. The xInf Model is demonstrated to be a suitable universal language for translating among physics, computer engineering, and neuroscience. Finally, I propose a conjecture that there exists a Minimal Complete Set of rules in the xInf Model that enables the construction of a physical machine, built from inorganic materials, that can pass the Turing Test in physical time. I cannot demonstrate whether such a conjecture can be verified or falsified on paper using finite-order logic; my only solution is physical time itself, i.e., an evolutionary competition will eventually tell the conclusion.
2011.12832
Ailar Mahdizadeh
Emad Arasteh Emamzadeh-Hashemi, Ailar Mahdizadeh
External Electromagnetic Wave Excitation of a PreSynaptic Neuron Based on LIF model
5 pages, 4 figures, etech2020
null
null
null
q-bio.NC cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interaction of electromagnetic (EM) waves with human tissue has been a longstanding research topic for electrical and biomedical engineers. However, few publications discuss the impacts of external EM waves on neural stimulation and communication through the nervous system. In fact, complex biological neural channels are a main barrier to intact and comprehensive analyses in this area. One of the ever-present challenges in neural communication responses is the dependency of vesicle release probability on the input spiking pattern. In this regard, this study sheds light on the consequences of changing the frequency of external EM-wave excitation on the post-synaptic neuron's spiking rate. It is assumed that the penetration depth of the wave in the brain does not cover the post-synaptic neuron. Consequently, we model the neurotransmission of a bipartite chemical synapse. In addition, the way that external stimulation affects neurotransmission is examined. Unlike EM waves with multiple frequency components, a monochromatic incident wave does not undergo frequency shift and distortion in dispersive media. In this manner, a single-frequency signal is added as an external current in the modified leaky integrate-and-fire (LIF) model. The results demonstrate the existence of a node equilibrium point in the first-order dynamical system of the LIF model. A fold bifurcation (for the presupposed LIF model values) occurs when the external excitation frequency is near 200 Hz. The outcomes provided in this paper enable us to select a proper excitation frequency for neural signaling. Correspondingly, the dependence of the cut-off frequency on the element values in the LIF circuit is found.
[ { "created": "Wed, 25 Nov 2020 15:40:38 GMT", "version": "v1" } ]
2020-11-26
[ [ "Emamzadeh-Hashemi", "Emad Arasteh", "" ], [ "Mahdizadeh", "Ailar", "" ] ]
Interaction of electromagnetic (EM) waves with human tissue has been a longstanding research topic for electrical and biomedical engineers. However, few publications discuss the impacts of external EM waves on neural stimulation and communication through the nervous system. In fact, complex biological neural channels are a main barrier to intact and comprehensive analyses in this area. One of the ever-present challenges in neural communication responses is the dependency of vesicle release probability on the input spiking pattern. In this regard, this study sheds light on the consequences of changing the frequency of external EM-wave excitation on the post-synaptic neuron's spiking rate. It is assumed that the penetration depth of the wave in the brain does not cover the post-synaptic neuron. Consequently, we model the neurotransmission of a bipartite chemical synapse. In addition, the way that external stimulation affects neurotransmission is examined. Unlike EM waves with multiple frequency components, a monochromatic incident wave does not undergo frequency shift and distortion in dispersive media. In this manner, a single-frequency signal is added as an external current in the modified leaky integrate-and-fire (LIF) model. The results demonstrate the existence of a node equilibrium point in the first-order dynamical system of the LIF model. A fold bifurcation (for the presupposed LIF model values) occurs when the external excitation frequency is near 200 Hz. The outcomes provided in this paper enable us to select a proper excitation frequency for neural signaling. Correspondingly, the dependence of the cut-off frequency on the element values in the LIF circuit is found.
2004.14467
Nida Obatake
Nida Obatake, Anne Shiu, Dilruba Sofia
Mixed volume of small reaction networks
null
Involve 13 (2020) 845-860
10.2140/involve.2020.13.845
null
q-bio.MN math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An important invariant of a chemical reaction network is its maximum number of positive steady states. This number, however, is in general difficult to compute. Nonetheless, there is an upper bound on this number -- namely, a network's mixed volume -- that is easy to compute. Moreover, recent work has shown that, for certain biological signaling networks, the mixed volume does not greatly exceed the maximum number of positive steady states. Continuing this line of research, we further investigate this overcount and also compute the mixed volumes of small networks, those with only a few species or reactions.
[ { "created": "Wed, 29 Apr 2020 20:45:48 GMT", "version": "v1" } ]
2020-12-09
[ [ "Obatake", "Nida", "" ], [ "Shiu", "Anne", "" ], [ "Sofia", "Dilruba", "" ] ]
An important invariant of a chemical reaction network is its maximum number of positive steady states. This number, however, is in general difficult to compute. Nonetheless, there is an upper bound on this number -- namely, a network's mixed volume -- that is easy to compute. Moreover, recent work has shown that, for certain biological signaling networks, the mixed volume does not greatly exceed the maximum number of positive steady states. Continuing this line of research, we further investigate this overcount and also compute the mixed volumes of small networks, those with only a few species or reactions.
2005.02859
Carles Rovira
Xavier Bardina, Marco Ferrante, Carles Rovira
A stochastic epidemic model of COVID-19 disease
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To model the evolution of diseases with extended latency periods and the presence of asymptomatic patients like COVID-19, we define a simple discrete time stochastic SIR-type epidemic model. We include both latent periods as well as the presence of quarantine areas, to capture the evolutionary dynamics of such diseases.
[ { "created": "Mon, 4 May 2020 14:43:49 GMT", "version": "v1" }, { "created": "Wed, 13 May 2020 09:34:38 GMT", "version": "v2" } ]
2020-05-14
[ [ "Bardina", "Xavier", "" ], [ "Ferrante", "Marco", "" ], [ "Rovira", "Carles", "" ] ]
To model the evolution of diseases with extended latency periods and the presence of asymptomatic patients like COVID-19, we define a simple discrete time stochastic SIR-type epidemic model. We include both latent periods as well as the presence of quarantine areas, to capture the evolutionary dynamics of such diseases.
1703.10713
Jiancheng Zhuang
Jiancheng Zhuang
Detecting Resting-state Neural Connectivity Using Dynamic Network Analysis on Multiband fMRI Data
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes an approach of using dynamic Structural Equation Modeling (SEM) analysis to estimate connectivity networks from resting-state fMRI data measured with a multiband EPI sequence. Two structural equation models were estimated at each voxel with respect to the sensory-motor network and the default-mode network. The resulting connectivity maps indicate that the supplementary motor area has significant connections to the left/right primary motor areas, and that the medial prefrontal cortex links significantly with the posterior cingulate cortex and inferior parietal lobules. The results imply that high temporal resolution images obtained with multiband fMRI can provide dynamic and directional information on neural connectivity.
[ { "created": "Thu, 30 Mar 2017 23:38:20 GMT", "version": "v1" } ]
2017-04-03
[ [ "Zhuang", "Jiancheng", "" ] ]
This paper describes an approach of using dynamic Structural Equation Modeling (SEM) analysis to estimate connectivity networks from resting-state fMRI data measured with a multiband EPI sequence. Two structural equation models were estimated at each voxel with respect to the sensory-motor network and the default-mode network. The resulting connectivity maps indicate that the supplementary motor area has significant connections to the left/right primary motor areas, and that the medial prefrontal cortex links significantly with the posterior cingulate cortex and inferior parietal lobules. The results imply that high temporal resolution images obtained with multiband fMRI can provide dynamic and directional information on neural connectivity.
2309.16513
Yannis Drossinos
Yannis Drossinos and Nikolaos I. Stilianakis
On modeling airborne infection risk
15 pages, 3 figures
R. Soc. Open Sci. 11:231976 (2024)
10.1098/rsos.231976
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Airborne infection risk analysis is usually performed for enclosed spaces where susceptible individuals are exposed to infectious airborne respiratory droplets by inhalation. It is usually based on exponential, dose-response models of which a widely used variant is the Wells-Riley (WR) model. We revisit this infection-risk estimate and extend it to the population level. We use an epidemiological model where the mode of pathogen transmission, either airborne or contact, is explicitly considered. We illustrate the link between epidemiological models and the WR model. We argue that airborne infection quanta are, up to an overall density, airborne infectious respiratory droplets modified by a parameter that depends on biological properties of the pathogen, physical properties of the droplet, and behavioural parameters of the individual. We calculate the time-dependent risk to be infected during the epidemic for two scenarios. We show how the epidemic infection risk depends on the viral latent period and the event time, the time infection occurs. The infection risk follows the dynamics of the infected population. As the latency period decreases, infection risk increases. The longer a susceptible is present in the epidemic, the higher is its risk of infection by equal exposure time to the mode of transmission.
[ { "created": "Thu, 28 Sep 2023 15:19:39 GMT", "version": "v1" }, { "created": "Fri, 22 Dec 2023 17:03:48 GMT", "version": "v2" } ]
2024-08-01
[ [ "Drossinos", "Yannis", "" ], [ "Stilianakis", "Nikolaos I.", "" ] ]
Airborne infection risk analysis is usually performed for enclosed spaces where susceptible individuals are exposed to infectious airborne respiratory droplets by inhalation. It is usually based on exponential, dose-response models of which a widely used variant is the Wells-Riley (WR) model. We revisit this infection-risk estimate and extend it to the population level. We use an epidemiological model where the mode of pathogen transmission, either airborne or contact, is explicitly considered. We illustrate the link between epidemiological models and the WR model. We argue that airborne infection quanta are, up to an overall density, airborne infectious respiratory droplets modified by a parameter that depends on biological properties of the pathogen, physical properties of the droplet, and behavioural parameters of the individual. We calculate the time-dependent risk to be infected during the epidemic for two scenarios. We show how the epidemic infection risk depends on the viral latent period and the event time, the time infection occurs. The infection risk follows the dynamics of the infected population. As the latency period decreases, infection risk increases. The longer a susceptible is present in the epidemic, the higher is its risk of infection by equal exposure time to the mode of transmission.
1301.3318
Tomas Tokar
Csilla Ulicna and Jozef Ulicny
Integration of TNF induced apoptosis model and quantitative proteomics data of HeLa cells - our first experience
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The TNF-initiated processes are simulated using our integrative computational model with new quantitative proteomics data obtained on the HeLa cell line. In spite of the fact that the model development is based on limited experimental information, with missing data estimated by simulations and parametrization using indirect experiments on different cell lines, the actual model with HeLa experimental data provides good agreement between the computationally predicted behavior and experiments. Since no significant departures from the experimentally determined functionality of HeLa cells have been observed, this can be taken as an encouraging sign that substantial features of the cell line's behavior are reproduced correctly, and it indicates new possibilities for cell-line-specific computational experiments.
[ { "created": "Tue, 15 Jan 2013 12:21:32 GMT", "version": "v1" } ]
2013-01-16
[ [ "Ulicna", "Csilla", "" ], [ "Ulicny", "Jozef", "" ] ]
The TNF-initiated processes are simulated using our integrative computational model with new quantitative proteomics data obtained on the HeLa cell line. In spite of the fact that the model development is based on limited experimental information, with missing data estimated by simulations and parametrization using indirect experiments on different cell lines, the actual model with HeLa experimental data provides good agreement between the computationally predicted behavior and experiments. Since no significant departures from the experimentally determined functionality of HeLa cells have been observed, this can be taken as an encouraging sign that substantial features of the cell line's behavior are reproduced correctly, and it indicates new possibilities for cell-line-specific computational experiments.
2203.13132
Yan Yang
Yan Yang and Zakir Hossain and Khandaker Asif and Liyuan Pan and Shafin Rahman and Eric Stone
DPST: De Novo Peptide Sequencing with Amino-Acid-Aware Transformers
null
null
null
null
q-bio.QM cs.LG q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
De novo peptide sequencing aims to recover amino acid sequences of a peptide from tandem mass spectrometry (MS) data. Existing approaches for de novo analysis enumerate MS evidence for all amino acid classes during inference. It leads to over-trimming on receptive fields of MS data and restricts MS evidence associated with following undecoded amino acids. Our approach, DPST, circumvents these limitations with two key components: (1) A confidence value aggregation encoder to sketch spectrum representations according to amino-acid-based connectivity among MS; (2) A global-local fusion decoder to progressively assimilate contextualized spectrum representations with a predefined preconception of localized MS evidence and amino acid priors. Our components originate from a closed-form solution and selectively attend to informative amino-acid-aware MS representations. Through extensive empirical studies, we demonstrate the superiority of DPST, showing that it outperforms state-of-the-art approaches by a margin of 12% - 19% peptide accuracy.
[ { "created": "Wed, 23 Mar 2022 08:01:06 GMT", "version": "v1" } ]
2022-03-25
[ [ "Yang", "Yan", "" ], [ "Hossain", "Zakir", "" ], [ "Asif", "Khandaker", "" ], [ "Pan", "Liyuan", "" ], [ "Rahman", "Shafin", "" ], [ "Stone", "Eric", "" ] ]
De novo peptide sequencing aims to recover amino acid sequences of a peptide from tandem mass spectrometry (MS) data. Existing approaches for de novo analysis enumerate MS evidence for all amino acid classes during inference. It leads to over-trimming on receptive fields of MS data and restricts MS evidence associated with following undecoded amino acids. Our approach, DPST, circumvents these limitations with two key components: (1) A confidence value aggregation encoder to sketch spectrum representations according to amino-acid-based connectivity among MS; (2) A global-local fusion decoder to progressively assimilate contextualized spectrum representations with a predefined preconception of localized MS evidence and amino acid priors. Our components originate from a closed-form solution and selectively attend to informative amino-acid-aware MS representations. Through extensive empirical studies, we demonstrate the superiority of DPST, showing that it outperforms state-of-the-art approaches by a margin of 12% - 19% peptide accuracy.
0912.0750
Aran Nayebi
Aran Nayebi
Fast matrix multiplication techniques based on the Adleman-Lipton model
To appear in the International Journal of Computer Engineering Research. Minor changes made to make the preprint as similar as possible to the published version
International Journal of Computer Engineering Research, 3(1):10-19, January 2012
10.5897/IJCER10.016
null
q-bio.QM cs.DS cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
On distributed memory electronic computers, the implementation and association of fast parallel matrix multiplication algorithms has yielded astounding results and insights. In this discourse, we use the tools of molecular biology to demonstrate the theoretical encoding of Strassen's fast matrix multiplication algorithm with DNA based on an $n$-moduli set in the residue number system, thereby demonstrating the viability of computational mathematics with DNA. As a result, a general scalable implementation of this model in the DNA computing paradigm is presented and can be generalized to the application of \emph{all} fast matrix multiplication algorithms on a DNA computer. We also discuss the practical capabilities and issues of this scalable implementation. Fast methods of matrix computations with DNA are important because they also allow for the efficient implementation of other algorithms (i.e. inversion, computing determinants, and graph theory) with DNA.
[ { "created": "Thu, 3 Dec 2009 23:04:18 GMT", "version": "v1" }, { "created": "Sun, 5 Sep 2010 16:43:23 GMT", "version": "v2" }, { "created": "Wed, 16 Feb 2011 15:44:55 GMT", "version": "v3" }, { "created": "Sat, 28 May 2011 16:25:32 GMT", "version": "v4" }, { "created": "Mon, 19 Dec 2011 04:17:30 GMT", "version": "v5" } ]
2012-02-10
[ [ "Nayebi", "Aran", "" ] ]
On distributed memory electronic computers, the implementation and association of fast parallel matrix multiplication algorithms has yielded astounding results and insights. In this discourse, we use the tools of molecular biology to demonstrate the theoretical encoding of Strassen's fast matrix multiplication algorithm with DNA based on an $n$-moduli set in the residue number system, thereby demonstrating the viability of computational mathematics with DNA. As a result, a general scalable implementation of this model in the DNA computing paradigm is presented and can be generalized to the application of \emph{all} fast matrix multiplication algorithms on a DNA computer. We also discuss the practical capabilities and issues of this scalable implementation. Fast methods of matrix computations with DNA are important because they also allow for the efficient implementation of other algorithms (i.e. inversion, computing determinants, and graph theory) with DNA.
q-bio/0401011
Eivind T{\o}stesen
E. Tostesen, F. Liu, T.-K. Jenssen, E. Hovig
Speed-Up of DNA Melting Algorithm with Complete Nearest Neighbor Properties
20 pages, 4 figures
Biopolymers, 70, 364-376 (2003)
10.1002/bip.10495
null
q-bio.BM
null
We describe a faster and more accurate algorithm for computing the statistical mechanics of DNA denaturation according to the Poland-Scheraga type. Nearest neighbor thermodynamics is included in a complete and general way. The algorithm represents an optimization with respect to algorithmic complexity of the partition function algorithm of Yeramian et al.: We reduce the computation time for a base-pairing probability profile from O(N2) to O(N). This speed-up comes in addition to the speed-up due to a multiexponential approximation of the loop entropy factor as introduced by Fixman and Freire. The speed-up, however, is independent of the multiexponential approximation and reduces time from O(N3) to O(N2) in the exact case. In addition to calculating the standard base-pairing probability profiles, we propose to use the algorithm to calculate various other probabilities (loops, helices, tails) for a more direct view of the melting regions and their positions and sizes.
[ { "created": "Thu, 8 Jan 2004 12:05:22 GMT", "version": "v1" } ]
2007-05-23
[ [ "Tostesen", "E.", "" ], [ "Liu", "F.", "" ], [ "Jenssen", "T. -K.", "" ], [ "Hovig", "E.", "" ] ]
We describe a faster and more accurate algorithm for computing the statistical mechanics of DNA denaturation according to the Poland-Scheraga type. Nearest neighbor thermodynamics is included in a complete and general way. The algorithm represents an optimization with respect to algorithmic complexity of the partition function algorithm of Yeramian et al.: We reduce the computation time for a base-pairing probability profile from O(N2) to O(N). This speed-up comes in addition to the speed-up due to a multiexponential approximation of the loop entropy factor as introduced by Fixman and Freire. The speed-up, however, is independent of the multiexponential approximation and reduces time from O(N3) to O(N2) in the exact case. In addition to calculating the standard base-pairing probability profiles, we propose to use the algorithm to calculate various other probabilities (loops, helices, tails) for a more direct view of the melting regions and their positions and sizes.
q-bio/0609027
Antonio Leon
Antonio Leon
Coevolution. Extending Prigogine Theorem
16 pages, 9 figures
null
null
null
q-bio.PE math.DS
null
The formal consideration of the concept of interaction in thermodynamic analysis makes it possible to deduce, in the broadest terms, new results related to the coevolution of interacting systems, irrespective of their distance from thermodynamic equilibrium. In this paper I prove the existence of privileged coevolution trajectories characterized by the minimum joint production of internal entropy, a conclusion that extends Prigogine's theorem to systems evolving far from thermodynamic equilibrium. Along these trajectories of minimum internal entropy production, one of the systems always goes ahead of the other with respect to equilibrium.
[ { "created": "Tue, 19 Sep 2006 10:31:13 GMT", "version": "v1" }, { "created": "Wed, 30 Apr 2008 18:33:32 GMT", "version": "v2" } ]
2008-05-04
[ [ "Leon", "Antonio", "" ] ]
The formal consideration of the concept of interaction in thermodynamic analysis makes it possible to deduce, in the broadest terms, new results related to the coevolution of interacting systems, irrespective of their distance from thermodynamic equilibrium. In this paper I prove the existence of privileged coevolution trajectories characterized by the minimum joint production of internal entropy, a conclusion that extends Prigogine's theorem to systems evolving far from thermodynamic equilibrium. Along these trajectories of minimum internal entropy production, one of the systems always goes ahead of the other with respect to equilibrium.
2004.01602
David F. Nettleton
David F. Nettleton, Dimitrios Katsantonis, Argyris Kalaitzidis, Natasa Sarafijanovic-Djukic, Pau Puigdollers and Roberto Confalonieri
Predicting rice blast disease: machine learning versus process based models
null
BMC Bioinformatics volume 20, Article number: 514 (2019)
10.1186/s12859-019-3065-1
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rice is the second most important cereal crop worldwide, and the first in terms of number of people who depend on it as a major staple food. Rice blast disease is the most important biotic constraint of rice cultivation causing each year millions of dollars of losses. Despite the efforts for breeding new resistant varieties, agricultural practices and chemical control are still the most important methods for disease management. Thus, rice blast forecasting is a primary tool to support rice growers in controlling the disease. In this study, we compared four models for predicting rice blast disease, two operational process-based models (Yoshino and WARM) and two approaches based on machine learning algorithms (M5Rules and RNN), the former inducing a rule-based model and the latter building a neural network. In situ telemetry is important to obtain quality in-field data for predictive models and this was a key aspect of the RICE-GUARD project on which this study is based. According to the authors, this is the first time process-based and machine learning modelling approaches for supporting plant disease management are compared.
[ { "created": "Fri, 3 Apr 2020 14:48:14 GMT", "version": "v1" } ]
2021-01-07
[ [ "Nettleton", "David F.", "" ], [ "Katsantonis", "Dimitrios", "" ], [ "Kalaitzidis", "Argyris", "" ], [ "Sarafijanovic-Djukic", "Natasa", "" ], [ "Puigdollers", "Pau", "" ], [ "Confalonieri", "Roberto", "" ] ]
Rice is the second most important cereal crop worldwide, and the first in terms of number of people who depend on it as a major staple food. Rice blast disease is the most important biotic constraint of rice cultivation causing each year millions of dollars of losses. Despite the efforts for breeding new resistant varieties, agricultural practices and chemical control are still the most important methods for disease management. Thus, rice blast forecasting is a primary tool to support rice growers in controlling the disease. In this study, we compared four models for predicting rice blast disease, two operational process-based models (Yoshino and WARM) and two approaches based on machine learning algorithms (M5Rules and RNN), the former inducing a rule-based model and the latter building a neural network. In situ telemetry is important to obtain quality in-field data for predictive models and this was a key aspect of the RICE-GUARD project on which this study is based. According to the authors, this is the first time process-based and machine learning modelling approaches for supporting plant disease management are compared.
1606.08363
Matthew Spencer
Katherine A. Allen, John F. Bruno, Fiona Chong, Damian Clancy, Tim R. McClanahan, Matthew Spencer, Kamila Zychaluk
Among-site variability in the stochastic dynamics of East African coral reefs
97 pages, 49 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Coral reefs are dynamic systems whose composition is highly influenced by unpredictable biotic and abiotic factors. Understanding the spatial scale at which long-term predictions of reef composition can be made will be crucial for guiding conservation efforts. Using a 22-year time series of benthic composition data from 20 reefs on the Kenyan and Tanzanian coast, we studied the long-term behaviour of Bayesian vector autoregressive state-space models for reef dynamics, incorporating among-site variability. We estimate that if there were no among-site variability, the total long-term variability would be approximately one third of its current value. Thus among-site variability contributes more to long-term variability in reef composition than does temporal variability. Individual sites are more predictable than previously thought, and predictions based on current snapshots are informative about long-term properties. Our approach allowed us to identify a subset of possible climate refugia sites with high conservation value, where the long-term probability of coral cover <= 0.1 was very low. Analytical results show that this probability is most strongly influenced by among-site variability and by interactions among benthic components within sites. These findings suggest that conservation initiatives might be successful at the site scale as well as the regional scale.
[ { "created": "Mon, 27 Jun 2016 16:44:52 GMT", "version": "v1" } ]
2016-06-28
[ [ "Allen", "Katherine A.", "" ], [ "Bruno", "John F.", "" ], [ "Chong", "Fiona", "" ], [ "Clancy", "Damian", "" ], [ "McClanahan", "Tim R.", "" ], [ "Spencer", "Matthew", "" ], [ "Zychaluk", "Kamila", "" ] ]
Coral reefs are dynamic systems whose composition is highly influenced by unpredictable biotic and abiotic factors. Understanding the spatial scale at which long-term predictions of reef composition can be made will be crucial for guiding conservation efforts. Using a 22-year time series of benthic composition data from 20 reefs on the Kenyan and Tanzanian coast, we studied the long-term behaviour of Bayesian vector autoregressive state-space models for reef dynamics, incorporating among-site variability. We estimate that if there were no among-site variability, the total long-term variability would be approximately one third of its current value. Thus among-site variability contributes more to long-term variability in reef composition than does temporal variability. Individual sites are more predictable than previously thought, and predictions based on current snapshots are informative about long-term properties. Our approach allowed us to identify a subset of possible climate refugia sites with high conservation value, where the long-term probability of coral cover <= 0.1 was very low. Analytical results show that this probability is most strongly influenced by among-site variability and by interactions among benthic components within sites. These findings suggest that conservation initiatives might be successful at the site scale as well as the regional scale.
1403.4086
Hande Topa
Hande Topa, \'Agnes J\'on\'as, Robert Kofler, Carolin Kosiol, Antti Honkela
Gaussian process test for high-throughput sequencing time series: application to experimental evolution
41 pages, 29 figures
null
null
null
q-bio.PE q-bio.GN q-bio.QM stat.AP
http://creativecommons.org/licenses/by/3.0/
Motivation: Recent advances in high-throughput sequencing (HTS) have made it possible to monitor genomes in great detail. New experiments not only use HTS to measure genomic features at one time point but to monitor them changing over time with the aim of identifying significant changes in their abundance. In population genetics, for example, allele frequencies are monitored over time to detect significant frequency changes that indicate selection pressures. Previous attempts at analysing data from HTS experiments have been limited as they could not simultaneously include data at intermediate time points, replicate experiments and sources of uncertainty specific to HTS such as sequencing depth. Results: We present the beta-binomial Gaussian process (BBGP) model for ranking features with significant non-random variation in abundance over time. The features are assumed to represent proportions, such as the proportion of an alternative allele in a population. We use the beta-binomial model to capture the uncertainty arising from finite sequencing depth and combine it with a Gaussian process model over the time series. In simulations that mimic the features of experimental evolution data, the proposed method clearly outperforms classical testing in average precision of finding selected alleles. We also present simulations exploring different experimental design choices and results on real data from a Drosophila experimental evolution experiment in temperature adaptation. Availability: R software implementing the test is available at https://github.com/handetopa/BBGP
[ { "created": "Mon, 17 Mar 2014 13:08:06 GMT", "version": "v1" }, { "created": "Tue, 18 Mar 2014 14:14:25 GMT", "version": "v2" }, { "created": "Thu, 18 Sep 2014 07:59:09 GMT", "version": "v3" } ]
2014-09-19
[ [ "Topa", "Hande", "" ], [ "Jónás", "Ágnes", "" ], [ "Kofler", "Robert", "" ], [ "Kosiol", "Carolin", "" ], [ "Honkela", "Antti", "" ] ]
Motivation: Recent advances in high-throughput sequencing (HTS) have made it possible to monitor genomes in great detail. New experiments not only use HTS to measure genomic features at one time point but to monitor them changing over time with the aim of identifying significant changes in their abundance. In population genetics, for example, allele frequencies are monitored over time to detect significant frequency changes that indicate selection pressures. Previous attempts at analysing data from HTS experiments have been limited as they could not simultaneously include data at intermediate time points, replicate experiments and sources of uncertainty specific to HTS such as sequencing depth. Results: We present the beta-binomial Gaussian process (BBGP) model for ranking features with significant non-random variation in abundance over time. The features are assumed to represent proportions, such as the proportion of an alternative allele in a population. We use the beta-binomial model to capture the uncertainty arising from finite sequencing depth and combine it with a Gaussian process model over the time series. In simulations that mimic the features of experimental evolution data, the proposed method clearly outperforms classical testing in average precision of finding selected alleles. We also present simulations exploring different experimental design choices and results on real data from a Drosophila experimental evolution experiment in temperature adaptation. Availability: R software implementing the test is available at https://github.com/handetopa/BBGP
2303.01514
Chris Fields
Chris Fields, Filippo Fabrocini, Karl Friston, James F. Glazebrook, Hananel Hazan, Michael Levin, and Antonino Marciano
Control flow in active inference systems
44 pgs
null
null
null
q-bio.NC physics.bio-ph quant-ph
http://creativecommons.org/licenses/by/4.0/
Living systems face both environmental complexity and limited access to free-energy resources. Survival under these conditions requires a control system that can activate, or deploy, available perception and action resources in a context-specific way. We show here that when systems are described as executing active inference driven by the free-energy principle (and hence can be considered Bayesian prediction-error minimizers), their control flow systems can always be represented as tensor networks (TNs). We show how TNs as control systems can be implemented within the general framework of quantum topological neural networks, and discuss the implications of these results for modeling biological systems at multiple scales.
[ { "created": "Sat, 25 Feb 2023 02:31:47 GMT", "version": "v1" } ]
2023-03-06
[ [ "Fields", "Chris", "" ], [ "Fabrocini", "Filippo", "" ], [ "Friston", "Karl", "" ], [ "Glazebrook", "James F.", "" ], [ "Hazan", "Hananel", "" ], [ "Levin", "Michael", "" ], [ "Marciano", "Antonino", "" ] ]
Living systems face both environmental complexity and limited access to free-energy resources. Survival under these conditions requires a control system that can activate, or deploy, available perception and action resources in a context-specific way. We show here that when systems are described as executing active inference driven by the free-energy principle (and hence can be considered Bayesian prediction-error minimizers), their control flow systems can always be represented as tensor networks (TNs). We show how TNs as control systems can be implemented within the general framework of quantum topological neural networks, and discuss the implications of these results for modeling biological systems at multiple scales.
1709.02268
Giuseppe Jurman
Diego Fioravanti, Ylenia Giarratano, Valerio Maggio, Claudio Agostinelli, Marco Chierici, Giuseppe Jurman and Cesare Furlanello
Phylogenetic Convolutional Neural Networks in Metagenomics
Presented at BMTL 2017, Naples
null
null
null
q-bio.QM cs.LG cs.NE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Convolutional Neural Networks can be effectively used only when data are endowed with an intrinsic concept of neighbourhood in the input space, as is the case of pixels in images. We introduce here Ph-CNN, a novel deep learning architecture for the classification of metagenomics data based on the Convolutional Neural Networks, with the patristic distance defined on the phylogenetic tree being used as the proximity measure. The patristic distance between variables is used together with a sparsified version of MultiDimensional Scaling to embed the phylogenetic tree in a Euclidean space. Results: Ph-CNN is tested with a domain adaptation approach on synthetic data and on a metagenomics collection of gut microbiota of 38 healthy subjects and 222 Inflammatory Bowel Disease patients, divided in 6 subclasses. Classification performance is promising when compared to classical algorithms like Support Vector Machines and Random Forest and a baseline fully connected neural network, e.g. the Multi-Layer Perceptron. Conclusion: Ph-CNN represents a novel deep learning approach for the classification of metagenomics data. Operatively, the algorithm has been implemented as a custom Keras layer taking care of passing to the following convolutional layer not only the data but also the ranked list of neighbourhood of each sample, thus mimicking the case of image data, transparently to the user. Keywords: Metagenomics; Deep learning; Convolutional Neural Networks; Phylogenetic trees
[ { "created": "Wed, 6 Sep 2017 12:59:14 GMT", "version": "v1" } ]
2017-09-08
[ [ "Fioravanti", "Diego", "" ], [ "Giarratano", "Ylenia", "" ], [ "Maggio", "Valerio", "" ], [ "Agostinelli", "Claudio", "" ], [ "Chierici", "Marco", "" ], [ "Jurman", "Giuseppe", "" ], [ "Furlanello", "Cesare", "" ] ]
Background: Convolutional Neural Networks can be effectively used only when data are endowed with an intrinsic concept of neighbourhood in the input space, as is the case of pixels in images. We introduce here Ph-CNN, a novel deep learning architecture for the classification of metagenomics data based on the Convolutional Neural Networks, with the patristic distance defined on the phylogenetic tree being used as the proximity measure. The patristic distance between variables is used together with a sparsified version of MultiDimensional Scaling to embed the phylogenetic tree in a Euclidean space. Results: Ph-CNN is tested with a domain adaptation approach on synthetic data and on a metagenomics collection of gut microbiota of 38 healthy subjects and 222 Inflammatory Bowel Disease patients, divided in 6 subclasses. Classification performance is promising when compared to classical algorithms like Support Vector Machines and Random Forest and a baseline fully connected neural network, e.g. the Multi-Layer Perceptron. Conclusion: Ph-CNN represents a novel deep learning approach for the classification of metagenomics data. Operatively, the algorithm has been implemented as a custom Keras layer taking care of passing to the following convolutional layer not only the data but also the ranked list of neighbourhood of each sample, thus mimicking the case of image data, transparently to the user. Keywords: Metagenomics; Deep learning; Convolutional Neural Networks; Phylogenetic trees
2303.08200
Jacek Mi\c{e}kisz
Jacek Mi\c{e}kisz, Javad Mohamadichamgavi, Raffi Vardanyan
Small time delay approximation in replicator dynamics
8 pages, 2 figures
null
null
null
q-bio.PE math.DS physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
We present a microscopic model of replicator dynamics with strategy-dependent time delays. In such a model, new players are born from parents who interacted and received payoffs in the past. In the case of small delays, we use Taylor expansion to get ordinary differential equations for frequencies of strategies with time delays as parameters. We apply our technique to get analytic expressions for interior stationary states in two games: Snowdrift and Stag-hunt. We show that interior stationary states depend continuously upon time delays. Our analytic formulas for stationary states approximate well exact numerical results for small time delays.
[ { "created": "Tue, 14 Mar 2023 19:39:37 GMT", "version": "v1" } ]
2023-04-07
[ [ "Miȩkisz", "Jacek", "" ], [ "Mohamadichamgavi", "Javad", "" ], [ "Vardanyan", "Raffi", "" ] ]
We present a microscopic model of replicator dynamics with strategy-dependent time delays. In such a model, new players are born from parents who interacted and received payoffs in the past. In the case of small delays, we use Taylor expansion to get ordinary differential equations for frequencies of strategies with time delays as parameters. We apply our technique to get analytic expressions for interior stationary states in two games: Snowdrift and Stag-hunt. We show that interior stationary states depend continuously upon time delays. Our analytic formulas for stationary states approximate well exact numerical results for small time delays.
1101.1311
Mike Steel Prof.
Sha Zhu, James H. Degnan, Mike Steel
Clades, clans and reciprocal monophyly under neutral evolutionary models
18 pages, 4 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Yule model and the coalescent model are two neutral stochastic models for generating trees in phylogenetics and population genetics, respectively. Although these models are quite different, they lead to identical distributions concerning the probability that pre-specified groups of taxa form monophyletic groups (clades) in the tree. We extend earlier work to derive exact formulae for the probability of finding one or more groups of taxa as clades in a rooted tree, or as `clans' in an unrooted tree. Our findings are relevant for calculating the statistical significance of observed monophyly and reciprocal monophyly in phylogenetics.
[ { "created": "Thu, 6 Jan 2011 21:17:38 GMT", "version": "v1" } ]
2015-03-17
[ [ "Zhu", "Sha", "" ], [ "Degnan", "James H.", "" ], [ "Steel", "Mike", "" ] ]
The Yule model and the coalescent model are two neutral stochastic models for generating trees in phylogenetics and population genetics, respectively. Although these models are quite different, they lead to identical distributions concerning the probability that pre-specified groups of taxa form monophyletic groups (clades) in the tree. We extend earlier work to derive exact formulae for the probability of finding one or more groups of taxa as clades in a rooted tree, or as `clans' in an unrooted tree. Our findings are relevant for calculating the statistical significance of observed monophyly and reciprocal monophyly in phylogenetics.
1210.0024
Kimberly Glass
Kimberly Glass and Michelle Girvan
Finding New Order in Biological Functions from the Network Structure of Gene Annotations
null
null
null
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Gene Ontology (GO) provides biologists with a controlled terminology that describes how genes are associated with functions and how functional terms are related to each other. These term-term relationships encode how scientists conceive the organization of biological functions, and they take the form of a directed acyclic graph (DAG). Here, we propose that the network structure of gene-term annotations made using GO can be employed to establish an alternate natural way to group the functional terms which is different from the hierarchical structure established in the GO DAG. Instead of relying on an externally defined organization for biological functions, our method connects biological functions together if they are performed by the same genes, as indicated in a compendium of gene annotation data from numerous different experiments. We show that grouping terms by this alternate scheme is distinct from term relationships defined in the ontological structure and provides a new framework with which to describe and predict the functions of experimentally identified sets of genes.
[ { "created": "Fri, 28 Sep 2012 20:18:59 GMT", "version": "v1" }, { "created": "Fri, 3 May 2013 21:47:39 GMT", "version": "v2" } ]
2013-05-07
[ [ "Glass", "Kimberly", "" ], [ "Girvan", "Michelle", "" ] ]
The Gene Ontology (GO) provides biologists with a controlled terminology that describes how genes are associated with functions and how functional terms are related to each other. These term-term relationships encode how scientists conceive the organization of biological functions, and they take the form of a directed acyclic graph (DAG). Here, we propose that the network structure of gene-term annotations made using GO can be employed to establish an alternate natural way to group the functional terms which is different from the hierarchical structure established in the GO DAG. Instead of relying on an externally defined organization for biological functions, our method connects biological functions together if they are performed by the same genes, as indicated in a compendium of gene annotation data from numerous different experiments. We show that grouping terms by this alternate scheme is distinct from term relationships defined in the ontological structure and provides a new framework with which to describe and predict the functions of experimentally identified sets of genes.
0812.1274
Suan Li Mai
Mai Suan Li, D. K. Klimov, J. E. Straub, and D. Thirumalai
Probing the Mechanisms of Fibril Formation Using Lattice Models
27 pages, 6 figures
J. Chem. Phys. 129, 175101 (2008)
10.1063/1.2989981
null
q-bio.BM q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using exhaustive Monte Carlo simulations we study the kinetics and mechanism of fibril formation using lattice models as a function of temperature and the number of chains. While these models are, at best, caricatures of peptides, we show that a number of generic features thought to govern fibril assembly are present in the toy model. The monomer, which contains eight beads made from three letters (hydrophobic, polar, and charged), adopts a compact conformation in the native state. The kinetics of fibril assembly occurs in three distinct stages. In each stage there is a cascade of events that transforms the monomers and oligomers to ordered structures. In the first "burst" stage highly mobile oligomers of varying sizes form. The conversion to the aggregation-prone conformation occurs within the oligomers during the second stage. As time progresses, a dominant cluster emerges that contains a majority of the chains. In the final stage, particles in the aggregation-prone conformation serve as a template onto which smaller oligomers or monomers can dock and undergo conversion to fibril structures. The overall time for growth in the latter stages is well described by the Lifshitz-Slyozov growth kinetics for crystallization from supersaturated solutions.
[ { "created": "Sat, 6 Dec 2008 10:36:28 GMT", "version": "v1" } ]
2009-11-13
[ [ "Li", "Mai Suan", "" ], [ "Klimov", "D. K.", "" ], [ "Straub", "J. E.", "" ], [ "Thirumalai", "D.", "" ] ]
Using exhaustive Monte Carlo simulations we study the kinetics and mechanism of fibril formation using lattice models as a function of temperature and the number of chains. While these models are, at best, caricatures of peptides, we show that a number of generic features thought to govern fibril assembly are present in the toy model. The monomer, which contains eight beads made from three letters (hydrophobic, polar, and charged), adopts a compact conformation in the native state. The kinetics of fibril assembly occurs in three distinct stages. In each stage there is a cascade of events that transforms the monomers and oligomers to ordered structures. In the first "burst" stage highly mobile oligomers of varying sizes form. The conversion to the aggregation-prone conformation occurs within the oligomers during the second stage. As time progresses, a dominant cluster emerges that contains a majority of the chains. In the final stage, particles in the aggregation-prone conformation serve as a template onto which smaller oligomers or monomers can dock and undergo conversion to fibril structures. The overall time for growth in the latter stages is well described by the Lifshitz-Slyozov growth kinetics for crystallization from supersaturated solutions.
2207.12124
Mohammad Alali
Mohammad Alali and Mahdi Imani
Inference of Regulatory Networks Through Temporally Sparse Data
9 Pages, 6 Figures
Front Control Eng. 2022;3
10.3389/fcteg.2022.1017256
null
q-bio.MN cs.LG stat.ME stat.ML
http://creativecommons.org/licenses/by/4.0/
A major goal in genomics is to properly capture the complex dynamical behaviors of gene regulatory networks (GRNs). This includes inferring the complex interactions between genes, which can be used for a wide range of genomics analyses, including diagnosis or prognosis of diseases and finding effective treatments for chronic diseases such as cancer. Boolean networks have emerged as a successful class of models for capturing the behavior of GRNs. In most practical settings, inference of GRNs should be achieved through limited and temporally sparse genomics data. A large number of genes in GRNs leads to a large possible topology candidate space, which often cannot be exhaustively searched due to the limitation in computational resources. This paper develops a scalable and efficient topology inference for GRNs using Bayesian optimization and kernel-based methods. Rather than an exhaustive search over possible topologies, the proposed method constructs a Gaussian Process (GP) with a topology-inspired kernel function to account for correlation in the likelihood function. Then, using the posterior distribution of the GP model, the Bayesian optimization efficiently searches for the topology with the highest likelihood value by optimally balancing between exploration and exploitation. The performance of the proposed method is demonstrated through comprehensive numerical experiments using a well-known mammalian cell-cycle network.
[ { "created": "Thu, 21 Jul 2022 22:48:12 GMT", "version": "v1" } ]
2023-01-18
[ [ "Alali", "Mohammad", "" ], [ "Imani", "Mahdi", "" ] ]
A major goal in genomics is to properly capture the complex dynamical behaviors of gene regulatory networks (GRNs). This includes inferring the complex interactions between genes, which can be used for a wide range of genomics analyses, including diagnosis or prognosis of diseases and finding effective treatments for chronic diseases such as cancer. Boolean networks have emerged as a successful class of models for capturing the behavior of GRNs. In most practical settings, inference of GRNs should be achieved through limited and temporally sparse genomics data. A large number of genes in GRNs leads to a large possible topology candidate space, which often cannot be exhaustively searched due to the limitation in computational resources. This paper develops a scalable and efficient topology inference for GRNs using Bayesian optimization and kernel-based methods. Rather than an exhaustive search over possible topologies, the proposed method constructs a Gaussian Process (GP) with a topology-inspired kernel function to account for correlation in the likelihood function. Then, using the posterior distribution of the GP model, the Bayesian optimization efficiently searches for the topology with the highest likelihood value by optimally balancing between exploration and exploitation. The performance of the proposed method is demonstrated through comprehensive numerical experiments using a well-known mammalian cell-cycle network.
2307.14376
Armita Nourmohammad
Giulio Isacchini, Valentin Quiniou, H\'el\`ene Vantomme, Paul Stys, Encarnita Mariotti-Ferandiz, David Klatzmann, Aleksandra M. Walczak, Thierry Mora, and Armita Nourmohammad
Variability in the local and global composition of human T-cell receptor repertoires during thymic development across cell types and individuals
null
null
null
null
q-bio.QM q-bio.MN
http://creativecommons.org/licenses/by-nc-nd/4.0/
The adaptive immune response relies on T cells that combine phenotypic specialization with diversity of T cell receptors (TCRs) to recognize a wide range of pathogens. TCRs are acquired and selected during T cell maturation in the thymus. Characterizing TCR repertoires across individuals and T cell maturation stages is important for better understanding adaptive immune responses and for developing new diagnostics and therapies. Analyzing a dataset of human TCR repertoires from thymocyte subsets, we find that the variability between individuals generated during the TCR V(D)J recombination is maintained through all stages of T cell maturation and differentiation. The inter-individual variability of repertoires of the same cell type is of comparable magnitude to the variability across cell types within the same individual. To zoom in on smaller scales than whole repertoires, we defined a distance measuring the relative overlap of locally similar sequences in repertoires. We find that the whole repertoire models correctly predict local similarity networks, suggesting a lack of forbidden T cell receptor sequences. The local measure correlates well with distances calculated using whole repertoire traits and carries information about cell types.
[ { "created": "Tue, 25 Jul 2023 17:17:47 GMT", "version": "v1" } ]
2023-07-28
[ [ "Isacchini", "Giulio", "" ], [ "Quiniou", "Valentin", "" ], [ "Vantomme", "Hélène", "" ], [ "Stys", "Paul", "" ], [ "Mariotti-Ferandiz", "Encarnita", "" ], [ "Klatzmann", "David", "" ], [ "Walczak", "Aleksandra M.", "" ], [ "Mora", "Thierry", "" ], [ "Nourmohammad", "Armita", "" ] ]
The adaptive immune response relies on T cells that combine phenotypic specialization with diversity of T cell receptors (TCRs) to recognize a wide range of pathogens. TCRs are acquired and selected during T cell maturation in the thymus. Characterizing TCR repertoires across individuals and T cell maturation stages is important for better understanding adaptive immune responses and for developing new diagnostics and therapies. Analyzing a dataset of human TCR repertoires from thymocyte subsets, we find that the variability between individuals generated during the TCR V(D)J recombination is maintained through all stages of T cell maturation and differentiation. The inter-individual variability of repertoires of the same cell type is of comparable magnitude to the variability across cell types within the same individual. To zoom in on smaller scales than whole repertoires, we defined a distance measuring the relative overlap of locally similar sequences in repertoires. We find that the whole repertoire models correctly predict local similarity networks, suggesting a lack of forbidden T cell receptor sequences. The local measure correlates well with distances calculated using whole repertoire traits and carries information about cell types.
q-bio/0404006
Dorjsuren Battogtokh
Dorjsuren Battogtokh and John J. Tyson
Bifurcation analysis of a model of the budding yeast cell cycle
31 pages,13 figures
null
10.1063/1.1780011
null
q-bio.MN
null
We study the bifurcations of a set of nine nonlinear ordinary differential equations that describe the regulation of the cyclin-dependent kinase that triggers DNA synthesis and mitosis in the budding yeast, Saccharomyces cerevisiae. We show that Clb2-dependent kinase exhibits bistability (stable steady states of high or low kinase activity). The transition from low to high Clb2-dependent kinase activity is driven by transient activation of Cln2-dependent kinase, and the reverse transition is driven by transient activation of the Clb2 degradation machinery. We show that a four-variable model retains the main features of the nine-variable model. In a three-variable model exhibiting birhythmicity (two stable oscillatory states), we explore possible effects of extrinsic fluctuations on cell cycle progression.
[ { "created": "Mon, 5 Apr 2004 17:59:03 GMT", "version": "v1" } ]
2009-11-10
[ [ "Battogtokh", "Dorjsuren", "" ], [ "Tyson", "John J.", "" ] ]
We study the bifurcations of a set of nine nonlinear ordinary differential equations that describe the regulation of the cyclin-dependent kinase that triggers DNA synthesis and mitosis in the budding yeast, Saccharomyces cerevisiae. We show that Clb2-dependent kinase exhibits bistability (stable steady states of high or low kinase activity). The transition from low to high Clb2-dependent kinase activity is driven by transient activation of Cln2-dependent kinase, and the reverse transition is driven by transient activation of the Clb2 degradation machinery. We show that a four-variable model retains the main features of the nine-variable model. In a three-variable model exhibiting birhythmicity (two stable oscillatory states), we explore possible effects of extrinsic fluctuations on cell cycle progression.
1108.2011
Joachim Mathiesen
Joachim Mathiesen, Namiko Mitarai, Kim Sneppen and Ala Trusina
Ecosystems with mutually exclusive interactions self-organize to a state of high diversity
4 pages, 4 figures
Phys.Rev.Lett. 107, 188101 (2011)
10.1103/PhysRevLett.107.188101
null
q-bio.PE cond-mat.stat-mech nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecological systems comprise an astonishing diversity of species that cooperate or compete with each other forming complex mutual dependencies. The minimum requirements to maintain a large species diversity on long time scales are in general unknown. Using lichen communities as an example, we propose a model for the evolution of mutually excluding organisms that compete for space. We suggest that chain-like or cyclic invasions involving three or more species open for creation of spatially separated sub-populations that subsequently can lead to increased diversity. In contrast to its non-spatial counterpart, our model predicts robust co-existence of a large number of species, in accordance with observations on lichen growth. It is demonstrated that large species diversity can be obtained on evolutionary timescales, provided that interactions between species have spatial constraints. In particular, a phase transition to a sustainable state of high diversity is identified.
[ { "created": "Mon, 8 Aug 2011 14:56:06 GMT", "version": "v1" } ]
2012-09-10
[ [ "Mathiesen", "Joachim", "" ], [ "Mitarai", "Namiko", "" ], [ "Sneppen", "Kim", "" ], [ "Trusina", "Ala", "" ] ]
Ecological systems comprise an astonishing diversity of species that cooperate or compete with each other forming complex mutual dependencies. The minimum requirements to maintain a large species diversity on long time scales are in general unknown. Using lichen communities as an example, we propose a model for the evolution of mutually excluding organisms that compete for space. We suggest that chain-like or cyclic invasions involving three or more species open for creation of spatially separated sub-populations that subsequently can lead to increased diversity. In contrast to its non-spatial counterpart, our model predicts robust co-existence of a large number of species, in accordance with observations on lichen growth. It is demonstrated that large species diversity can be obtained on evolutionary timescales, provided that interactions between species have spatial constraints. In particular, a phase transition to a sustainable state of high diversity is identified.
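The spatially constrained, mutually exclusive interactions described in this abstract can be illustrated with a generic cyclic-invasion (rock-paper-scissors-type) lattice toy, in the spirit of, but not identical to, the authors' evolving lichen model; species counts and the dominance rule below are assumptions for illustration:

```python
import random

def step(grid, L, rng):
    # Pick a random site and a random nearest neighbor (periodic boundaries);
    # cyclic dominance: species s invades species (s + 1) % 3.
    x, y = rng.randrange(L), rng.randrange(L)
    dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    nx, ny = (x + dx) % L, (y + dy) % L
    if (grid[x][y] + 1) % 3 == grid[nx][ny]:
        grid[nx][ny] = grid[x][y]

def simulate(L=30, sweeps=200, seed=0):
    """Run a small cyclic-invasion lattice and return per-species counts."""
    rng = random.Random(seed)
    grid = [[rng.randrange(3) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        step(grid, L, rng)
    counts = [0, 0, 0]
    for row in grid:
        for s in row:
            counts[s] += 1
    return counts
```

In such 2D cyclic models, spatial structure typically sustains coexistence of several species, whereas the well-mixed (non-spatial) counterpart tends to fixate, which is the qualitative contrast the abstract highlights.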
2003.01385
Marc de Kamps
Hugh Osborne and Yi Ming Lai and Marc de Kamps
Models Currently Implemented in MIIND
null
null
null
null
q-bio.NC physics.comp-ph
http://creativecommons.org/licenses/by/4.0/
This is a living document that will be updated when appropriate. MIIND [1, 2] is a population-level neural simulator. It is based on population density techniques, just like DIPDE [3]. In contrast to DIPDE, MIIND is agnostic to the underlying neuron model used in its populations, so any 1-, 2- or 3-dimensional model can be set up with minimal effort. The resulting populations can then be grouped into large networks, e.g. the Potjans-Diesmann model [4]. The MIIND website http://miind.sf.net contains training materials and helps to set up MIIND, either by using virtual machines, a Docker image, or directly from source code.
[ { "created": "Tue, 3 Mar 2020 08:21:58 GMT", "version": "v1" }, { "created": "Tue, 24 Mar 2020 11:26:24 GMT", "version": "v2" } ]
2020-03-25
[ [ "Osborne", "Hugh", "" ], [ "Lai", "Yi Ming", "" ], [ "de Kamps", "Marc", "" ] ]
This is a living document that will be updated when appropriate. MIIND [1, 2] is a population-level neural simulator. It is based on population density techniques, just like DIPDE [3]. In contrast to DIPDE, MIIND is agnostic to the underlying neuron model used in its populations, so any 1-, 2- or 3-dimensional model can be set up with minimal effort. The resulting populations can then be grouped into large networks, e.g. the Potjans-Diesmann model [4]. The MIIND website http://miind.sf.net contains training materials and helps to set up MIIND, either by using virtual machines, a Docker image, or directly from source code.
2005.04365
Erkki Somersalo Dr.
Daniela Calvetti, Alexander Hoover, Johnie Rose, Erkki Somersalo
Bayesian dynamical estimation of the parameters of an SE(A)IR COVID-19 spread model
21 pages, 8 figures
null
null
null
q-bio.PE cs.NA math.NA q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, we consider a dynamic epidemiology model for the spread of the COVID-19 infection. Starting from the classical SEIR model, the model is modified so as to better describe characteristic features of the underlying pathogen and its infectious modes. In line with the large number of secondary infections not related to contact with documented infectious individuals, the model includes a cohort of asymptomatic or oligosymptomatic infectious individuals, not accounted for in the data of new daily counts of infections. A Bayesian particle filtering algorithm is used to update dynamically the relevant cohort and simultaneously estimate the transmission rate as new data on the numbers of new infections and disease-related deaths become available. The underlying assumption of the model is that the infectivity rate is dynamically changing during the epidemics, either because of a mutation of the pathogen or in response to mitigation and containment measures. The sequential Bayesian framework naturally provides a quantification of the uncertainty in the estimate of the model parameters, including the reproduction number, and of the size of the different cohorts. Moreover, we introduce a dimensionless quantity, which is the equilibrium ratio between asymptomatic and symptomatic cohort sizes, and propose a simple formula to estimate the quantity. This ratio leads naturally to another dimensionless quantity that plays the role of the basic reproduction number $R_0$ of the model. When we apply the model and particle filter algorithm to COVID-19 infection data from several counties in Northeastern Ohio and Southeastern Michigan, we find that the proposed reproduction number $R_0$ has consistent dynamic behavior within both states, proving to be a reliable summary of the success of the mitigation measures.
[ { "created": "Sat, 9 May 2020 05:07:56 GMT", "version": "v1" }, { "created": "Thu, 21 May 2020 17:21:06 GMT", "version": "v2" } ]
2020-05-22
[ [ "Calvetti", "Daniela", "" ], [ "Hoover", "Alexander", "" ], [ "Rose", "Johnie", "" ], [ "Somersalo", "Erkki", "" ] ]
In this article, we consider a dynamic epidemiology model for the spread of the COVID-19 infection. Starting from the classical SEIR model, the model is modified so as to better describe characteristic features of the underlying pathogen and its infectious modes. In line with the large number of secondary infections not related to contact with documented infectious individuals, the model includes a cohort of asymptomatic or oligosymptomatic infectious individuals, not accounted for in the data of new daily counts of infections. A Bayesian particle filtering algorithm is used to update dynamically the relevant cohort and simultaneously estimate the transmission rate as new data on the numbers of new infections and disease-related deaths become available. The underlying assumption of the model is that the infectivity rate is dynamically changing during the epidemics, either because of a mutation of the pathogen or in response to mitigation and containment measures. The sequential Bayesian framework naturally provides a quantification of the uncertainty in the estimate of the model parameters, including the reproduction number, and of the size of the different cohorts. Moreover, we introduce a dimensionless quantity, which is the equilibrium ratio between asymptomatic and symptomatic cohort sizes, and propose a simple formula to estimate the quantity. This ratio leads naturally to another dimensionless quantity that plays the role of the basic reproduction number $R_0$ of the model. When we apply the model and particle filter algorithm to COVID-19 infection data from several counties in Northeastern Ohio and Southeastern Michigan, we find that the proposed reproduction number $R_0$ has consistent dynamic behavior within both states, proving to be a reliable summary of the success of the mitigation measures.
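The classical SEIR backbone that the paper modifies can be sketched with a forward-Euler integration; the parameter values below are illustrative, not the estimates produced by the authors' SE(A)IR model or particle filter:

```python
def seir(beta=0.3, sigma=0.2, gamma=0.1, days=160, dt=0.1, n=1.0, i0=1e-4):
    """Forward-Euler integration of the classical SEIR compartments:
    S -> E at rate beta*S*I/N, E -> I at rate sigma*E, I -> R at rate gamma*I.
    All parameter values are illustrative."""
    s, e, i, r = n - i0, 0.0, i0, 0.0
    for _ in range(int(days / dt)):
        new_exposed    = beta * s * i / n * dt
        new_infectious = sigma * e * dt
        new_recovered  = gamma * i * dt
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
    return s, e, i, r

# With beta/gamma = 3 the epidemic burns through most of the population.
print(seir())
```

The paper's extension adds an asymptomatic/oligosymptomatic infectious compartment and lets the transmission rate vary in time; the sketch above only shows the fixed-parameter baseline those modifications start from.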
1303.6793
Nicolae Radu Zabet
Nicolae Radu Zabet and Boris Adryan
The effects of transcription factor competition on gene regulation
This is an updated version of the manuscript taking into account comments from reviewers
Frontiers in Genetics 4:197 (2013)
10.3389/fgene.2013.00197
null
q-bio.SC q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transcription factor (TF) molecules translocate by facilitated diffusion (a combination of 3D diffusion around and 1D random walk on the DNA). Despite the attention this mechanism received in the last 40 years, only a few studies investigated the influence of the cellular environment on the facilitated diffusion mechanism and, in particular, the influence of `other' DNA binding proteins competing with the TF molecules for DNA space. Molecular crowding on the DNA is likely to influence the association rate of TFs to their target site and the steady state occupancy of those sites, but it is still not clear how it influences the search in a genome-wide context, when the model includes biologically relevant parameters (such as: TF abundance, TF affinity for DNA and TF dynamics on the DNA). We performed stochastic simulations of TFs performing the facilitated diffusion mechanism, and considered various abundances of cognate and non-cognate TFs. We show that, for both obstacles that move on the DNA and obstacles that are fixed on the DNA, changes in search time are not statistically significant in case of biologically relevant crowding levels on the DNA. In the case of non-cognate proteins that slide on the DNA, molecular crowding on the DNA always leads to statistically significant lower levels of occupancy, which may confer a general mechanism to control gene activity levels globally. When the `other' molecules are immobile on the DNA, we found a completely different behaviour, namely: the occupancy of the target site is always increased by higher molecular crowding on the DNA. Finally, we show that crowding on the DNA may increase transcriptional noise through increased variability of the occupancy time of the target sites.
[ { "created": "Wed, 27 Mar 2013 11:25:28 GMT", "version": "v1" }, { "created": "Tue, 6 Aug 2013 12:48:23 GMT", "version": "v2" } ]
2013-09-20
[ [ "Zabet", "Nicolae Radu", "" ], [ "Adryan", "Boris", "" ] ]
Transcription factor (TF) molecules translocate by facilitated diffusion (a combination of 3D diffusion around and 1D random walk on the DNA). Despite the attention this mechanism received in the last 40 years, only a few studies investigated the influence of the cellular environment on the facilitated diffusion mechanism and, in particular, the influence of `other' DNA binding proteins competing with the TF molecules for DNA space. Molecular crowding on the DNA is likely to influence the association rate of TFs to their target site and the steady state occupancy of those sites, but it is still not clear how it influences the search in a genome-wide context, when the model includes biologically relevant parameters (such as: TF abundance, TF affinity for DNA and TF dynamics on the DNA). We performed stochastic simulations of TFs performing the facilitated diffusion mechanism, and considered various abundances of cognate and non-cognate TFs. We show that, for both obstacles that move on the DNA and obstacles that are fixed on the DNA, changes in search time are not statistically significant in case of biologically relevant crowding levels on the DNA. In the case of non-cognate proteins that slide on the DNA, molecular crowding on the DNA always leads to statistically significant lower levels of occupancy, which may confer a general mechanism to control gene activity levels globally. When the `other' molecules are immobile on the DNA, we found a completely different behaviour, namely: the occupancy of the target site is always increased by higher molecular crowding on the DNA. Finally, we show that crowding on the DNA may increase transcriptional noise through increased variability of the occupancy time of the target sites.
2204.05857
Amanda Lea
Irene Gallego Romero and Amanda J. Lea
Leveraging massively parallel reporter assays for evolutionary questions
null
null
null
null
q-bio.PE q-bio.GN
http://creativecommons.org/licenses/by-nc-sa/4.0/
A long-standing goal of evolutionary biology is to decode how gene regulatory processes contribute to organismal diversity, both within and between species. This question has remained challenging to answer, due both to the difficulties of predicting function from non-coding sequence, and to the technological constraints of laboratory research with non-model taxa. However, a recent methodological development in functional genomics, the massively parallel reporter assay (MPRA), makes it possible to test thousands to millions of sequences for regulatory activity in a single in vitro experiment. It does so by combining traditional, single-locus episomal reporter assays (e.g., luciferase reporter assays) with the scalability of high-throughput sequencing. In this perspective, we discuss the execution, advantages, and limitations of MPRAs for research in evolutionary biology. We review recent studies that have made use of this approach to address explicitly evolutionary questions, highlighting study designs that we believe are particularly well-positioned to gain from MPRA approaches. Additionally, we propose solutions for extending these powerful assays to rare taxa and those with limited genomic resources. In doing so, we underscore the broad potential of MPRAs to drive genome-scale functional evolutionary genetics studies in non-traditional model organisms.
[ { "created": "Tue, 12 Apr 2022 14:57:40 GMT", "version": "v1" } ]
2022-04-13
[ [ "Romero", "Irene Gallego", "" ], [ "Lea", "Amanda J.", "" ] ]
A long-standing goal of evolutionary biology is to decode how gene regulatory processes contribute to organismal diversity, both within and between species. This question has remained challenging to answer, due both to the difficulties of predicting function from non-coding sequence, and to the technological constraints of laboratory research with non-model taxa. However, a recent methodological development in functional genomics, the massively parallel reporter assay (MPRA), makes it possible to test thousands to millions of sequences for regulatory activity in a single in vitro experiment. It does so by combining traditional, single-locus episomal reporter assays (e.g., luciferase reporter assays) with the scalability of high-throughput sequencing. In this perspective, we discuss the execution, advantages, and limitations of MPRAs for research in evolutionary biology. We review recent studies that have made use of this approach to address explicitly evolutionary questions, highlighting study designs that we believe are particularly well-positioned to gain from MPRA approaches. Additionally, we propose solutions for extending these powerful assays to rare taxa and those with limited genomic resources. In doing so, we underscore the broad potential of MPRAs to drive genome-scale functional evolutionary genetics studies in non-traditional model organisms.
2405.06648
Georgii Koniukov
Georgii Koniukov
Protein atom traps as seen by neutron scattering
null
null
null
null
q-bio.BM cond-mat.mes-hall cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An expression for the high-temperature dependence of the mean squared displacement in proteins is obtained. The quantum multi-well model explains the dynamic transitions of proteins and reduces the number of parameters to a single one, leading to a few-state harmonic system at low temperatures and thus justifying the possibility of using proteins as atom traps to control qubits at somewhat higher temperatures.
[ { "created": "Mon, 25 Mar 2024 17:52:59 GMT", "version": "v1" } ]
2024-05-14
[ [ "Koniukov", "Georgii", "" ] ]
An expression for the high-temperature dependence of the mean squared displacement in proteins is obtained. The quantum multi-well model explains the dynamic transitions of proteins and reduces the number of parameters to a single one, leading to a few-state harmonic system at low temperatures and thus justifying the possibility of using proteins as atom traps to control qubits at somewhat higher temperatures.
2212.01171
Jae Kyoung Kim
Hyukpyo Hong, Bryan S. Hernandez, Jinsu Kim, and Jae Kyoung Kim
Computational translation framework identifies biochemical reaction networks with special topologies and their long-term dynamics
24 pages, 7 figures
null
null
null
q-bio.MN math.PR
http://creativecommons.org/licenses/by/4.0/
Long-term behaviors of biochemical systems are described by steady states in deterministic models and stationary distributions in stochastic models. Obtaining their analytic solutions can be done for limited cases, such as linear or finite-state systems, as it generally requires solving many coupled equations. Interestingly, analytic solutions can be easily obtained when underlying networks have special topologies, called weak reversibility (WR) and zero deficiency (ZD), and the kinetic law follows a generalized form of mass-action kinetics. However, such desired topological conditions do not hold for the majority of cases. Thus, translating networks to have WR and ZD while preserving the original dynamics was proposed. Yet, this approach is limited because manually obtaining the desired network translation among the large number of candidates is challenging. Here, we prove necessary conditions for having WR and ZD after translation, and based on these conditions, we develop a user-friendly computational package, TOWARDZ, that automatically and efficiently identifies translated networks with WR and ZD. This allows us to quantitatively examine how likely it is to obtain WR and ZD after translation depending on the number of species and reactions. Importantly, we also describe how our package can be used to analytically derive steady states of deterministic models and stationary distributions of stochastic models. TOWARDZ provides an effective tool to analyze biochemical systems.
[ { "created": "Fri, 2 Dec 2022 13:51:35 GMT", "version": "v1" } ]
2022-12-05
[ [ "Hong", "Hyukpyo", "" ], [ "Hernandez", "Bryan S.", "" ], [ "Kim", "Jinsu", "" ], [ "Kim", "Jae Kyoung", "" ] ]
Long-term behaviors of biochemical systems are described by steady states in deterministic models and stationary distributions in stochastic models. Obtaining their analytic solutions can be done for limited cases, such as linear or finite-state systems, as it generally requires solving many coupled equations. Interestingly, analytic solutions can be easily obtained when underlying networks have special topologies, called weak reversibility (WR) and zero deficiency (ZD), and the kinetic law follows a generalized form of mass-action kinetics. However, such desired topological conditions do not hold for the majority of cases. Thus, translating networks to have WR and ZD while preserving the original dynamics was proposed. Yet, this approach is limited because manually obtaining the desired network translation among the large number of candidates is challenging. Here, we prove necessary conditions for having WR and ZD after translation, and based on these conditions, we develop a user-friendly computational package, TOWARDZ, that automatically and efficiently identifies translated networks with WR and ZD. This allows us to quantitatively examine how likely it is to obtain WR and ZD after translation depending on the number of species and reactions. Importantly, we also describe how our package can be used to analytically derive steady states of deterministic models and stationary distributions of stochastic models. TOWARDZ provides an effective tool to analyze biochemical systems.
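A concrete instance of the WR + ZD setting discussed in this abstract (using the Anderson-Craciun-Kurtz product-form result, a known theorem rather than anything specific to this paper): the network 0 <-> A is weakly reversible with deficiency zero, so its stochastic stationary distribution is Poisson with mean k_birth/k_death. A Gillespie-style check of the stationary mean, with illustrative rate constants:

```python
import random

def gillespie_birth_death(k_birth=10.0, k_death=1.0, t_end=500.0, seed=0):
    """SSA for the weakly reversible, deficiency-zero network 0 <-> A
    (birth at rate k_birth, death at rate k_death * x). Its stationary
    distribution is Poisson(k_birth / k_death); we estimate the stationary
    mean by time-averaging the copy number after a burn-in period."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    burn_in = t_end / 2
    weighted_sum, weight = 0.0, 0.0
    while t < t_end:
        rate = k_birth + k_death * x
        dt = rng.expovariate(rate)
        if t + dt > burn_in:
            lo = max(t, burn_in)
            hi = min(t + dt, t_end)
            weighted_sum += x * (hi - lo)
            weight += hi - lo
        t += dt
        # birth with probability k_birth/rate, otherwise a death event
        x += 1 if rng.random() < k_birth / rate else -1
    return weighted_sum / weight

# Estimate of the stationary mean (the product-form Poisson mean is 10 here).
print(gillespie_birth_death())
```

Packages like the paper's TOWARDZ aim to find network translations for which such product-form arguments apply; the point of the toy above is only to show what the WR + ZD structure buys analytically.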
2402.01605
Alireza Alemi
Alireza Alemi, Emre R. F. Aksay, Mark S. Goldman
A Lyapunov theory demonstrating a fundamental limit on the speed of systems consolidation
16 pages, 4 figures
null
null
null
q-bio.NC cs.SY eess.SY physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
The nervous system reorganizes memories from an early site to a late site, a commonly observed feature of learning and memory systems known as systems consolidation. Previous work has suggested learning rules by which consolidation may occur. Here, we provide conditions under which such rules are guaranteed to lead to stable convergence of learning and consolidation. We use the theory of Lyapunov functions, which enforces stability by requiring learning rules to decrease an energy-like (Lyapunov) function. We present the theory in the context of a simple circuit architecture motivated by classic models of learning in systems consolidation mediated by the cerebellum. Stability is only guaranteed if the learning rate in the late stage is not faster than the learning rate in the early stage. Further, the slower the learning rate at the late stage, the larger the perturbation the system can tolerate with a guarantee of stability. We provide intuition for this result by mapping the consolidation model to a damped driven oscillator system, and showing that the ratio of early- to late-stage learning rates in the consolidation model can be directly identified with the (square of the) oscillator's damping ratio. This work suggests the power of the Lyapunov approach to provide constraints on nervous system function.
[ { "created": "Fri, 2 Feb 2024 18:07:40 GMT", "version": "v1" } ]
2024-02-05
[ [ "Alemi", "Alireza", "" ], [ "Aksay", "Emre R. F.", "" ], [ "Goldman", "Mark S.", "" ] ]
The nervous system reorganizes memories from an early site to a late site, a commonly observed feature of learning and memory systems known as systems consolidation. Previous work has suggested learning rules by which consolidation may occur. Here, we provide conditions under which such rules are guaranteed to lead to stable convergence of learning and consolidation. We use the theory of Lyapunov functions, which enforces stability by requiring learning rules to decrease an energy-like (Lyapunov) function. We present the theory in the context of a simple circuit architecture motivated by classic models of learning in systems consolidation mediated by the cerebellum. Stability is only guaranteed if the learning rate in the late stage is not faster than the learning rate in the early stage. Further, the slower the learning rate at the late stage, the larger the perturbation the system can tolerate with a guarantee of stability. We provide intuition for this result by mapping the consolidation model to a damped driven oscillator system, and showing that the ratio of early- to late-stage learning rates in the consolidation model can be directly identified with the (square of the) oscillator's damping ratio. This work suggests the power of the Lyapunov approach to provide constraints on nervous system function.
q-bio/0401007
Claudio Parmeggiani
Claudio Parmeggiani
Two hypotheses about natural intelligence
10 pages
null
null
null
q-bio.PE q-bio.NC
null
We assume that the natural intelligence (human, particularly) is equivalent to a large inferring structure, which took shape in the last 400/500 million years. Then two hypotheses, about this structure and its development, are put forward for consideration. The first one concerns the transmission, from one generation to another, of the structure: we propose that this passage is done by direct transfer, mother to children, during pregnancy (maternal download). The second hypothesis regards the structure evolution: now the acquired improvements can be transferred to the descendants, so it is possible to envisage a governed evolutionary process (evolution by improvements).
[ { "created": "Wed, 7 Jan 2004 00:45:39 GMT", "version": "v1" } ]
2007-05-23
[ [ "Parmeggiani", "Claudio", "" ] ]
We assume that the natural intelligence (human, particularly) is equivalent to a large inferring structure, which took shape in the last 400/500 million years. Then two hypotheses, about this structure and its development, are put forward for consideration. The first one concerns the transmission, from one generation to another, of the structure: we propose that this passage is done by direct transfer, mother to children, during pregnancy (maternal download). The second hypothesis regards the structure evolution: now the acquired improvements can be transferred to the descendants, so it is possible to envisage a governed evolutionary process (evolution by improvements).
2312.05876
Michael A. Lomholt
Michael A. Lomholt and Ralf Metzler
Target search on DNA -- effect of coiling
23 pages, 6 figures, some revision
null
null
null
q-bio.BM cond-mat.stat-mech physics.bio-ph q-bio.SC
http://creativecommons.org/licenses/by/4.0/
Some proteins can find their targets on DNA faster than by pure diffusion in the three-dimensional cytoplasm, through the process of facilitated diffusion: They can loosely bind to DNA and temporarily slide along it, thus being guided by the DNA molecule itself to the target. This chapter examines this process in mathematical detail with a focus on including the effect of DNA coiling on the search process.
[ { "created": "Sun, 10 Dec 2023 13:08:07 GMT", "version": "v1" }, { "created": "Mon, 18 Dec 2023 12:52:15 GMT", "version": "v2" }, { "created": "Sun, 25 Feb 2024 15:29:27 GMT", "version": "v3" } ]
2024-02-27
[ [ "Lomholt", "Michael A.", "" ], [ "Metzler", "Ralf", "" ] ]
Some proteins can find their targets on DNA faster than by pure diffusion in the three-dimensional cytoplasm, through the process of facilitated diffusion: They can loosely bind to DNA and temporarily slide along it, thus being guided by the DNA molecule itself to the target. This chapter examines this process in mathematical detail with a focus on including the effect of DNA coiling on the search process.
1512.02577
Andreas Daffertshofer
Robert Ton and Andreas Daffertshofer
Model selection for identifying power-law scaling
null
null
null
null
q-bio.QM physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long-range temporal and spatial correlations have been reported in a remarkable number of studies. In particular power-law scaling in neural activity raised considerable interest. We here provide a straightforward algorithm not only to quantify power-law scaling but to test it against alternatives using (Bayesian) model comparison. Our algorithm builds on the well-established detrended fluctuation analysis (DFA). After removing trends of a signal, we determine its mean squared fluctuations in consecutive intervals. In contrast to DFA we use the values per interval to approximate the distribution of these mean squared fluctuations. This allows for estimating the corresponding log-likelihood as a function of interval size without presuming the fluctuations to be normally distributed, as is the case in conventional DFA. We demonstrate the validity and robustness of our algorithm using a variety of simulated signals, ranging from scale-free fluctuations with known Hurst exponents, via more conventional dynamical systems resembling exponentially correlated fluctuations, to a toy model of neural mass activity. We also illustrate its use for encephalographic signals. We further discuss confounding factors like the finite signal size. Our model comparison provides a proper means to identify power-law scaling including the range over which it is present.
[ { "created": "Tue, 8 Dec 2015 18:32:10 GMT", "version": "v1" } ]
2015-12-09
[ [ "Ton", "Robert", "" ], [ "Daffertshofer", "Andreas", "" ] ]
Long-range temporal and spatial correlations have been reported in a remarkable number of studies. In particular power-law scaling in neural activity raised considerable interest. We here provide a straightforward algorithm not only to quantify power-law scaling but to test it against alternatives using (Bayesian) model comparison. Our algorithm builds on the well-established detrended fluctuation analysis (DFA). After removing trends of a signal, we determine its mean squared fluctuations in consecutive intervals. In contrast to DFA we use the values per interval to approximate the distribution of these mean squared fluctuations. This allows for estimating the corresponding log-likelihood as a function of interval size without presuming the fluctuations to be normally distributed, as is the case in conventional DFA. We demonstrate the validity and robustness of our algorithm using a variety of simulated signals, ranging from scale-free fluctuations with known Hurst exponents, via more conventional dynamical systems resembling exponentially correlated fluctuations, to a toy model of neural mass activity. We also illustrate its use for encephalographic signals. We further discuss confounding factors like the finite signal size. Our model comparison provides a proper means to identify power-law scaling including the range over which it is present.
q-bio/0310041
Patrick Warren
Patrick B. Warren and Pieter Rein ten Wolde
Enhancement of the stability of genetic switches by overlapping upstream regulatory domains
4 pages, 5 figures, RevTeX4
null
10.1103/PhysRevLett.92.128101
null
q-bio.MN
null
We study genetic switches formed from pairs of mutually repressing operons. The switch stability is characterised by a well defined lifetime which grows sub-exponentially with the number of copies of the most-expressed transcription factor, in the regime accessible by our numerical simulations. The stability can be markedly enhanced by a suitable choice of overlap between the upstream regulatory domains. Our results suggest that robustness against biochemical noise can provide a selection pressure that drives operons, that regulate each other, together in the course of evolution.
[ { "created": "Sat, 1 Nov 2003 12:37:37 GMT", "version": "v1" } ]
2009-11-10
[ [ "Warren", "Patrick B.", "" ], [ "Wolde", "Pieter Rein ten", "" ] ]
We study genetic switches formed from pairs of mutually repressing operons. The switch stability is characterised by a well defined lifetime which grows sub-exponentially with the number of copies of the most-expressed transcription factor, in the regime accessible by our numerical simulations. The stability can be markedly enhanced by a suitable choice of overlap between the upstream regulatory domains. Our results suggest that robustness against biochemical noise can provide a selection pressure that drives operons, that regulate each other, together in the course of evolution.
1512.04184
Shyr-Shea Chang
Shyr-Shea Chang, Shenyinying Tu, Kyung In Baek, Andrew Pietersen, Yu-Hsiu Liu, Van Savage, Sheng-Ping L. Hwang, Tzung K. Hsiai, Marcus Roper
Optimal occlusion uniformly partitions red blood cells fluxes within a microvascular network
22 pages, 6 figures, 1 supporting information
Chang, S. S., Tu, S., Baek, K. I., Pietersen, A., Liu, Y. H., Savage, V. M., ... & Roper, M. (2017). Optimal occlusion uniformly partitions red blood cells fluxes within a microvascular network. PLoS computational biology, 13(12), e1005892
null
null
q-bio.TO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In animals, gas exchange between blood and tissues occurs in narrow vessels, whose diameter is comparable to that of a red blood cell. Red blood cells must deform to squeeze through these narrow vessels, transiently blocking or occluding the vessels they pass through. Although the dynamics of vessel occlusion have been studied extensively, it remains an open question why microvessels need to be so narrow. We study occlusive dynamics within a model microvascular network: the embryonic zebrafish trunk. We show that pressure feedbacks created when red blood cells enter the finest vessels of the trunk act together to uniformly partition red blood cells through the microvasculature. Using mathematical models as well as direct observation, we show that these occlusive feedbacks are tuned throughout the trunk network to prevent the vessels closest to the heart from short-circuiting the network. Thus occlusion is linked with another open question of microvascular function: how are red blood cells delivered at the same rate to each micro-vessel? Our analysis shows that tuning of occlusive feedbacks increases the total dissipation within the network by a factor of 11, showing that uniformity of flows rather than minimization of transport costs may be prioritized by the microvascular network.
[ { "created": "Mon, 14 Dec 2015 06:06:02 GMT", "version": "v1" }, { "created": "Wed, 23 Nov 2016 06:57:35 GMT", "version": "v2" }, { "created": "Fri, 13 Jul 2018 02:19:53 GMT", "version": "v3" } ]
2018-07-16
[ [ "Chang", "Shyr-Shea", "" ], [ "Tu", "Shenyinying", "" ], [ "Baek", "Kyung In", "" ], [ "Pietersen", "Andrew", "" ], [ "Liu", "Yu-Hsiu", "" ], [ "Savage", "Van", "" ], [ "Hwang", "Sheng-Ping L.", "" ], [ "Hsiai", "Tzung K.", "" ], [ "Roper", "Marcus", "" ] ]
In animals, gas exchange between blood and tissues occurs in narrow vessels, whose diameter is comparable to that of a red blood cell. Red blood cells must deform to squeeze through these narrow vessels, transiently blocking or occluding the vessels they pass through. Although the dynamics of vessel occlusion have been studied extensively, it remains an open question why microvessels need to be so narrow. We study occlusive dynamics within a model microvascular network: the embryonic zebrafish trunk. We show that pressure feedbacks created when red blood cells enter the finest vessels of the trunk act together to uniformly partition red blood cells through the microvasculature. Using mathematical models as well as direct observation, we show that these occlusive feedbacks are tuned throughout the trunk network to prevent the vessels closest to the heart from short-circuiting the network. Thus occlusion is linked with another open question of microvascular function: how are red blood cells delivered at the same rate to each micro-vessel? Our analysis shows that tuning of occlusive feedbacks increases the total dissipation within the network by a factor of 11, showing that uniformity of flows rather than minimization of transport costs may be prioritized by the microvascular network.
0911.4014
Branko Dragovich
Branko Dragovich
Genetic Code and Number Theory
15 pages. To appear in "Modern Topics in Science", a book of invited papers (Eds. R. Constantinescu, G. Djordjevic, Lj. Nesic)
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Living organisms are the most complex, interesting and significant objects regarding all substructures of the universe. Life science is regarded as a science of the 21st century and one can expect great new discoveries in the near future. This article contains an introductory brief review of genetic information, its coding and translation of genes to proteins through the genetic code. Some theoretical approaches to the modelling of the genetic code are presented. In particular, connection of the genetic code with number theory is considered and the role of $p$-adic numbers is underlined.
[ { "created": "Fri, 20 Nov 2009 11:09:03 GMT", "version": "v1" } ]
2009-11-23
[ [ "Dragovich", "Branko", "" ] ]
Living organisms are the most complex, interesting and significant objects regarding all substructures of the universe. Life science is regarded as a science of the 21st century and one can expect great new discoveries in the near future. This article contains an introductory brief review of genetic information, its coding and translation of genes to proteins through the genetic code. Some theoretical approaches to the modelling of the genetic code are presented. In particular, connection of the genetic code with number theory is considered and the role of $p$-adic numbers is underlined.
2404.06691
Ningfeng Liu
Ningfeng Liu (1 and 2), Jie Yu (1), Siyu Xiu (1), Xinfang Zhao (1), Siyu Lin (1), Bo Qiang (1), Ruqiu Zheng (1), Hongwei Jin (1), Liangren Zhang (1), Zhenming Liu (1 and 3) ((1) State Key Laboratory of Natural and Biomimetic Drugs, School of Pharmaceutical Sciences, Peking University, (2) Peking-Tsinghua Center for Life Science (CLS), Peking University, (3) State Key Laboratory of Pharmaceutical Biotechnology, Nanjing University)
Latent Chemical Space Searching for Plug-in Multi-objective Molecule Generation
null
null
null
null
q-bio.BM cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular generation, an essential method for identifying new drug structures, has been supported by advancements in machine learning and computational technology. However, challenges remain in multi-objective generation, model adaptability, and practical application in drug discovery. In this study, we developed a versatile 'plug-in' molecular generation model that incorporates multiple objectives related to target affinity, drug-likeness, and synthesizability, facilitating its application in various drug development contexts. We improved the Particle Swarm Optimization (PSO) in the context of drug discoveries, and identified PSO-ENP as the optimal variant for multi-objective molecular generation and optimization through comparative experiments. The model also incorporates a novel target-ligand affinity predictor, enhancing the model's utility by supporting three-dimensional information and improving synthetic feasibility. Case studies focused on generating and optimizing drug-like big marine natural products were performed, underscoring PSO-ENP's effectiveness and demonstrating its considerable potential for practical drug discovery applications.
[ { "created": "Wed, 10 Apr 2024 02:37:24 GMT", "version": "v1" } ]
2024-04-11
[ [ "Liu", "Ningfeng", "", "1 and 2" ], [ "Yu", "Jie", "", "1 and 3" ], [ "Xiu", "Siyu", "", "1 and 3" ], [ "Zhao", "Xinfang", "", "1 and 3" ], [ "Lin", "Siyu", "", "1 and 3" ], [ "Qiang", "Bo", "", "1 and 3" ], [ "Zheng", "Ruqiu", "", "1 and 3" ], [ "Jin", "Hongwei", "", "1 and 3" ], [ "Zhang", "Liangren", "", "1 and 3" ], [ "Liu", "Zhenming", "", "1 and 3" ] ]
Molecular generation, an essential method for identifying new drug structures, has been supported by advancements in machine learning and computational technology. However, challenges remain in multi-objective generation, model adaptability, and practical application in drug discovery. In this study, we developed a versatile 'plug-in' molecular generation model that incorporates multiple objectives related to target affinity, drug-likeness, and synthesizability, facilitating its application in various drug development contexts. We improved the Particle Swarm Optimization (PSO) in the context of drug discoveries, and identified PSO-ENP as the optimal variant for multi-objective molecular generation and optimization through comparative experiments. The model also incorporates a novel target-ligand affinity predictor, enhancing the model's utility by supporting three-dimensional information and improving synthetic feasibility. Case studies focused on generating and optimizing drug-like big marine natural products were performed, underscoring PSO-ENP's effectiveness and demonstrating its considerable potential for practical drug discovery applications.
2311.01543
Yujiang Wang
Bethany Little, Carly Flowers, Andrew Blamire, Peter Thelwall, John-Paul Taylor, Peter Gallagher, David Andrew Cousins, Yujiang Wang
Multivariate brain-cognition associations in euthymic bipolar disorder
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: People with bipolar disorder (BD) tend to show widespread cognitive impairment compared to healthy controls. Impairments in processing speed (PS), attention, and executive function (EF) may represent 'core' impairments that have a role in wider cognitive dysfunction. Cognitive impairments appear to relate to structural brain abnormalities in BD, but whether core deficits are related to particular brain regions is unclear and much of the research on brain-cognition associations is limited by univariate analysis and small samples. Methods: Euthymic BD patients (n=56) and matched healthy controls (n=26) underwent T1-weighted MRI scans and completed neuropsychological tests of PS, attention, and EF. We utilised public datasets to develop a normative model of cortical thickness (n=5,977) to generate robust estimations of cortical abnormalities in patients. Canonical correlation analysis was used to assess multivariate brain-cognition associations in BD, controlling for age, sex, and premorbid IQ. Results: BD showed impairments on tests of PS, attention, and EF, and abnormal cortical thickness in several brain regions compared to healthy controls. Impairments in tests of PS and EF were most strongly associated with cortical thickness in left inferior temporal, right entorhinal, and right temporal pole areas. Conclusion: Impairments in PS, attention, and EF can be observed in euthymic BD and may be related to abnormal cortical thickness in temporal regions. Future research should continue to leverage multivariate methods to examine complex brain-cognition associations in BD. Future research may benefit from exploring covariance between traditional brain structural morphological metrics such as cortical thickness, cortical volume, and surface area.
[ { "created": "Thu, 2 Nov 2023 18:54:27 GMT", "version": "v1" } ]
2023-11-06
[ [ "Little", "Bethany", "" ], [ "Flowers", "Carly", "" ], [ "Blamire", "Andrew", "" ], [ "Thelwall", "Peter", "" ], [ "Taylor", "John-Paul", "" ], [ "Gallagher", "Peter", "" ], [ "Cousins", "David Andrew", "" ], [ "Wang", "Yujiang", "" ] ]
Background: People with bipolar disorder (BD) tend to show widespread cognitive impairment compared to healthy controls. Impairments in processing speed (PS), attention, and executive function (EF) may represent 'core' impairments that have a role in wider cognitive dysfunction. Cognitive impairments appear to relate to structural brain abnormalities in BD, but whether core deficits are related to particular brain regions is unclear and much of the research on brain-cognition associations is limited by univariate analysis and small samples. Methods: Euthymic BD patients (n=56) and matched healthy controls (n=26) underwent T1-weighted MRI scans and completed neuropsychological tests of PS, attention, and EF. We utilised public datasets to develop a normative model of cortical thickness (n=5,977) to generate robust estimations of cortical abnormalities in patients. Canonical correlation analysis was used to assess multivariate brain-cognition associations in BD, controlling for age, sex, and premorbid IQ. Results: BD showed impairments on tests of PS, attention, and EF, and abnormal cortical thickness in several brain regions compared to healthy controls. Impairments in tests of PS and EF were most strongly associated with cortical thickness in left inferior temporal, right entorhinal, and right temporal pole areas. Conclusion: Impairments in PS, attention, and EF can be observed in euthymic BD and may be related to abnormal cortical thickness in temporal regions. Future research should continue to leverage multivariate methods to examine complex brain-cognition associations in BD. Future research may benefit from exploring covariance between traditional brain structural morphological metrics such as cortical thickness, cortical volume, and surface area.