id
stringlengths
9
13
submitter
stringlengths
4
48
authors
stringlengths
4
9.62k
title
stringlengths
4
343
comments
stringlengths
2
480
journal-ref
stringlengths
9
309
doi
stringlengths
12
138
report-no
stringclasses
277 values
categories
stringlengths
8
87
license
stringclasses
9 values
orig_abstract
stringlengths
27
3.76k
versions
listlengths
1
15
update_date
stringlengths
10
10
authors_parsed
listlengths
1
147
abstract
stringlengths
24
3.75k
1405.0929
Thomas Hopf
Thomas A. Hopf, Charlotta P.I. Sch\"arfe, Jo\~ao P.G.L.M. Rodrigues, Anna G. Green, Chris Sander, Alexandre M.J.J. Bonvin, Debora S. Marks
Sequence co-evolution gives 3D contacts and structures of protein complexes
null
null
10.7554/eLife.03430
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein-protein interactions are fundamental to many biological processes. Experimental screens have identified tens of thousands of interactions and structural biology has provided detailed functional insight for select 3D protein complexes. An alternative rich source of information about protein interactions is the evolutionary sequence record. Building on earlier work, we show that analysis of correlated evolutionary sequence changes across proteins identifies residues that are close in space with sufficient accuracy to determine the three-dimensional structure of the protein complexes. We evaluate prediction performance in blinded tests on 76 complexes of known 3D structure, predict protein-protein contacts in 32 complexes of unknown structure, and demonstrate how evolutionary couplings can be used to distinguish between interacting and non-interacting protein pairs in a large complex. With the current growth of sequence databases, we expect that the method can be generalized to genome-wide elucidation of protein-protein interaction networks and used for interaction predictions at residue resolution.
[ { "created": "Mon, 5 May 2014 15:30:27 GMT", "version": "v1" }, { "created": "Fri, 23 May 2014 20:49:25 GMT", "version": "v2" }, { "created": "Tue, 16 Sep 2014 15:10:32 GMT", "version": "v3" } ]
2016-08-11
[ [ "Hopf", "Thomas A.", "" ], [ "Schärfe", "Charlotta P. I.", "" ], [ "Rodrigues", "João P. G. L. M.", "" ], [ "Green", "Anna G.", "" ], [ "Sander", "Chris", "" ], [ "Bonvin", "Alexandre M. J. J.", "" ], [ "Marks", "Debora S.", "" ] ]
Protein-protein interactions are fundamental to many biological processes. Experimental screens have identified tens of thousands of interactions and structural biology has provided detailed functional insight for select 3D protein complexes. An alternative rich source of information about protein interactions is the evolutionary sequence record. Building on earlier work, we show that analysis of correlated evolutionary sequence changes across proteins identifies residues that are close in space with sufficient accuracy to determine the three-dimensional structure of the protein complexes. We evaluate prediction performance in blinded tests on 76 complexes of known 3D structure, predict protein-protein contacts in 32 complexes of unknown structure, and demonstrate how evolutionary couplings can be used to distinguish between interacting and non-interacting protein pairs in a large complex. With the current growth of sequence databases, we expect that the method can be generalized to genome-wide elucidation of protein-protein interaction networks and used for interaction predictions at residue resolution.
1501.04874
Alexander K. Vidybida
A. K. Vidybida
Output stream of leaky integrate and fire neuron
11 pages, 1 Figure. Language is still Ukrainian. This is an extended version of the previous version. The proofs of statements are added as well as 1 Figure
Reports of the National Academy of Science of Ukraine, 2014, 12, pp. 18-23
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probability density function of output interspike intervals is found in exact form for leaky integrate and fire neuron stimulated with Poisson stream. The diffusion approximation is not exploited.
[ { "created": "Tue, 20 Jan 2015 16:42:08 GMT", "version": "v1" }, { "created": "Wed, 15 Apr 2015 13:29:10 GMT", "version": "v2" }, { "created": "Tue, 29 Dec 2015 12:04:03 GMT", "version": "v3" } ]
2015-12-31
[ [ "Vidybida", "A. K.", "" ] ]
The probability density function of output interspike intervals is found in exact form for a leaky integrate-and-fire neuron stimulated with a Poisson stream. The diffusion approximation is not exploited.
1412.2447
David Krakauer
David Krakauer, Nils Bertschinger, Eckehard Olbrich, Nihat Ay, Jessica C. Flack
The Information Theory of Individuality
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider biological individuality in terms of information theoretic and graphical principles. Our purpose is to extract through an algorithmic decomposition system-environment boundaries supporting individuality. We infer or detect evolved individuals rather than assume that they exist. Given a set of consistent measurements over time, we discover a coarse-grained or quantized description on a system, inducing partitions (which can be nested). Legitimate individual partitions will propagate information from the past into the future, whereas spurious aggregations will not. Individuals are therefore defined in terms of ongoing, bounded information processing units rather than lists of static features or conventional replication-based definitions which tend to fail in the case of cultural change. One virtue of this approach is that it could expand the scope of what we consider adaptive or biological phenomena, particularly in the microscopic and macroscopic regimes of molecular and social phenomena.
[ { "created": "Mon, 8 Dec 2014 04:53:00 GMT", "version": "v1" } ]
2014-12-09
[ [ "Krakauer", "David", "" ], [ "Bertschinger", "Nils", "" ], [ "Olbrich", "Eckehard", "" ], [ "Ay", "Nihat", "" ], [ "Flack", "Jessica C.", "" ] ]
We consider biological individuality in terms of information theoretic and graphical principles. Our purpose is to extract, through an algorithmic decomposition, system-environment boundaries supporting individuality. We infer or detect evolved individuals rather than assume that they exist. Given a set of consistent measurements over time, we discover a coarse-grained or quantized description on a system, inducing partitions (which can be nested). Legitimate individual partitions will propagate information from the past into the future, whereas spurious aggregations will not. Individuals are therefore defined in terms of ongoing, bounded information processing units rather than lists of static features or conventional replication-based definitions which tend to fail in the case of cultural change. One virtue of this approach is that it could expand the scope of what we consider adaptive or biological phenomena, particularly in the microscopic and macroscopic regimes of molecular and social phenomena.
1202.4378
Joachim Krug
Ivan G. Szendro, Martijn F. Schenk, Jasper Franke, Joachim Krug and J. Arjan G. M. de Visser
Quantitative analyses of empirical fitness landscapes
24 pages, 5 figures; to appear in Journal of Statistical Mechanics: Theory and Experiment
J. Stat. Mech. P01005 (2013)
10.1088/1742-5468/2013/01/P01005
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The concept of a fitness landscape is a powerful metaphor that offers insight into various aspects of evolutionary processes and guidance for the study of evolution. Until recently, empirical evidence on the ruggedness of these landscapes was lacking, but since it became feasible to construct all possible genotypes containing combinations of a limited set of mutations, the number of studies has grown to a point where a classification of landscapes becomes possible. The aim of this review is to identify measures of epistasis that allow a meaningful comparison of fitness landscapes and then apply them to the empirical landscapes to discern factors that affect ruggedness. The various measures of epistasis that have been proposed in the literature appear to be equivalent. Our comparison shows that the ruggedness of the empirical landscape is affected by whether the included mutations are beneficial or deleterious and by whether intra- or intergenic epistasis is involved. Finally, the empirical landscapes are compared to landscapes generated with the Rough Mt.\ Fuji model. Despite the simplicity of this model, it captures the features of the experimental landscapes remarkably well.
[ { "created": "Mon, 20 Feb 2012 16:55:32 GMT", "version": "v1" }, { "created": "Wed, 17 Oct 2012 17:04:21 GMT", "version": "v2" } ]
2013-01-18
[ [ "Szendro", "Ivan G.", "" ], [ "Schenk", "Martijn F.", "" ], [ "Franke", "Jasper", "" ], [ "Krug", "Joachim", "" ], [ "de Visser", "J. Arjan G. M.", "" ] ]
The concept of a fitness landscape is a powerful metaphor that offers insight into various aspects of evolutionary processes and guidance for the study of evolution. Until recently, empirical evidence on the ruggedness of these landscapes was lacking, but since it became feasible to construct all possible genotypes containing combinations of a limited set of mutations, the number of studies has grown to a point where a classification of landscapes becomes possible. The aim of this review is to identify measures of epistasis that allow a meaningful comparison of fitness landscapes and then apply them to the empirical landscapes to discern factors that affect ruggedness. The various measures of epistasis that have been proposed in the literature appear to be equivalent. Our comparison shows that the ruggedness of the empirical landscape is affected by whether the included mutations are beneficial or deleterious and by whether intra- or intergenic epistasis is involved. Finally, the empirical landscapes are compared to landscapes generated with the Rough Mt.\ Fuji model. Despite the simplicity of this model, it captures the features of the experimental landscapes remarkably well.
2002.03821
Thomas G\"otz
Thomas G\"otz
First attempts to model the dynamics of the Coronavirus outbreak 2020
8 pages, 7 figures
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since the end of 2019 an outbreak of a new strain of coronavirus, called 2019--nCoV, is reported from China and later other parts of the world. Since January 21, WHO reports daily data on confirmed cases and deaths from both China and other countries. In this work we present some discrete and continuous models to discribe the disease dynamics in China and estimate the needed epidemiological parameters. Good agreement with the current dynamics has be found for both a discrete transmission model and a slightly modified SIR-model.
[ { "created": "Mon, 10 Feb 2020 14:48:04 GMT", "version": "v1" } ]
2020-02-11
[ [ "Götz", "Thomas", "" ] ]
Since the end of 2019 an outbreak of a new strain of coronavirus, called 2019--nCoV, has been reported from China and later other parts of the world. Since January 21, WHO has reported daily data on confirmed cases and deaths from both China and other countries. In this work we present some discrete and continuous models to describe the disease dynamics in China and estimate the needed epidemiological parameters. Good agreement with the current dynamics has been found for both a discrete transmission model and a slightly modified SIR-model.
1903.09542
Manuel Baltieri Mr
Manuel Baltieri and Christopher L. Buckley
Nonmodular architectures of cognitive systems based on active inference
Accepted at IJCNN 2019
null
10.1109/IJCNN.2019.8852048
null
q-bio.NC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In psychology and neuroscience it is common to describe cognitive systems as input/output devices where perceptual and motor functions are implemented in a purely feedforward, open-loop fashion. On this view, perception and action are often seen as encapsulated modules with limited interaction between them. While embodied and enactive approaches to cognitive science have challenged the idealisation of the brain as an input/output device, we argue that even the more recent attempts to model systems using closed-loop architectures still heavily rely on a strong separation between motor and perceptual functions. Previously, we have suggested that the mainstream notion of modularity strongly resonates with the separation principle of control theory. In this work we present a minimal model of a sensorimotor loop implementing an architecture based on the separation principle. We link this to popular formulations of perception and action in the cognitive sciences, and show its limitations when, for instance, external forces are not modelled by an agent. These forces can be seen as variables that an agent cannot directly control, i.e., a perturbation from the environment or an interference caused by other agents. As an alternative approach inspired by embodied cognitive science, we then propose a nonmodular architecture based on the active inference framework. We demonstrate the robustness of this architecture to unknown external inputs and show that the mechanism with which this is achieved in linear models is equivalent to integral control.
[ { "created": "Fri, 22 Mar 2019 15:00:25 GMT", "version": "v1" } ]
2022-03-10
[ [ "Baltieri", "Manuel", "" ], [ "Buckley", "Christopher L.", "" ] ]
In psychology and neuroscience it is common to describe cognitive systems as input/output devices where perceptual and motor functions are implemented in a purely feedforward, open-loop fashion. On this view, perception and action are often seen as encapsulated modules with limited interaction between them. While embodied and enactive approaches to cognitive science have challenged the idealisation of the brain as an input/output device, we argue that even the more recent attempts to model systems using closed-loop architectures still heavily rely on a strong separation between motor and perceptual functions. Previously, we have suggested that the mainstream notion of modularity strongly resonates with the separation principle of control theory. In this work we present a minimal model of a sensorimotor loop implementing an architecture based on the separation principle. We link this to popular formulations of perception and action in the cognitive sciences, and show its limitations when, for instance, external forces are not modelled by an agent. These forces can be seen as variables that an agent cannot directly control, i.e., a perturbation from the environment or an interference caused by other agents. As an alternative approach inspired by embodied cognitive science, we then propose a nonmodular architecture based on the active inference framework. We demonstrate the robustness of this architecture to unknown external inputs and show that the mechanism with which this is achieved in linear models is equivalent to integral control.
1802.00023
Siby Abraham
Jyotshna Dongardivev, Siby Abraham
Reaching Optimized Parameter Set, Protein Secondary Structure Prediction Using Neural Network
22 pages, 9 figures, 28 tables
Neural Computing and Applications (2017) 28, 1947-1974
10.1007/s00521-015-2150-2
null
q-bio.BM q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an optimized parameter set for protein secondary structure prediction using three layer feed forward back propagation neural network. The methodology uses four parameters viz. encoding scheme, window size, number of neurons in the hidden layer and type of learning algorithm. The input layer of the network consists of neurons changing from 3 to 19, corresponding to different window sizes. The hidden layer chooses a natural number from 1 to 20 as the number of neurons. The output layer consists of three neurons, each corresponding to known secondary structural classes viz. alpha helix, beta strands and coils respectively. It also uses eight different learning algorithms and nine encoding schemes. Exhaustive experiments were performed using non-homologues dataset. The experimental results were compared using performance measures like Q3, sensitivity, specificity, Mathew correlation coefficient and accuracy. The paper also discusses the process of obtaining a stabilized cluster of 2530 records from a collection of 11340 records. The graphs of these stabilized clusters of records with respect to accuracy are concave, convergence is monotonic increasing and rate of convergence is uniform. The paper gives BLOSUM62 as the encoding scheme, 19 as the window size, 19 as the number of neurons in the hidden layer and One- Step Secant as the learning algorithm with the highest accuracy of 78%. These parameter values are proposed as the optimized parameter set for the three layer feed forward back propagation neural network for the protein secondary structure predictionv
[ { "created": "Wed, 31 Jan 2018 19:14:38 GMT", "version": "v1" } ]
2018-02-02
[ [ "Dongardivev", "Jyotshna", "" ], [ "Abraham", "Siby", "" ] ]
We propose an optimized parameter set for protein secondary structure prediction using a three-layer feed-forward back-propagation neural network. The methodology uses four parameters viz. encoding scheme, window size, number of neurons in the hidden layer and type of learning algorithm. The input layer of the network consists of neurons changing from 3 to 19, corresponding to different window sizes. The hidden layer chooses a natural number from 1 to 20 as the number of neurons. The output layer consists of three neurons, each corresponding to known secondary structural classes viz. alpha helix, beta strands and coils respectively. It also uses eight different learning algorithms and nine encoding schemes. Exhaustive experiments were performed using a non-homologous dataset. The experimental results were compared using performance measures like Q3, sensitivity, specificity, Matthews correlation coefficient and accuracy. The paper also discusses the process of obtaining a stabilized cluster of 2530 records from a collection of 11340 records. The graphs of these stabilized clusters of records with respect to accuracy are concave, convergence is monotonically increasing and the rate of convergence is uniform. The paper gives BLOSUM62 as the encoding scheme, 19 as the window size, 19 as the number of neurons in the hidden layer and One-Step Secant as the learning algorithm with the highest accuracy of 78%. These parameter values are proposed as the optimized parameter set for the three-layer feed-forward back-propagation neural network for protein secondary structure prediction.
2002.03173
Chengxin Zhang
Chengxin Zhang, Wei Zheng, Xiaoqiang Huang, Eric W. Bell, Xiaogen Zhou, Yang Zhang
Protein structure and sequence re-analysis of 2019-nCoV genome does not indicate snakes as its intermediate host or the unique similarity between its spike protein insertions and HIV-1
Structure models for 2019-nCoV proteins are available at https://zhanglab.ccmb.med.umich.edu/C-I-TASSER/2019-nCov/
J. Proteome Res. 2020, 19, 4, 1351-1360
10.1021/acs.jproteome.0c00129
null
q-bio.GN q-bio.BM
http://creativecommons.org/licenses/by/4.0/
As the infection of 2019-nCoV coronavirus is quickly developing into a global pneumonia epidemic, careful analysis of its transmission and cellular mechanisms is sorely needed. In this report, we re-analyzed the computational approaches and findings presented in two recent manuscripts by Ji et al. (https://doi.org/10.1002/jmv.25682) and by Pradhan et al. (https://doi.org/10.1101/2020.01.30.927871), which concluded that snakes are the intermediate hosts of 2019-nCoV and that the 2019-nCoV spike protein insertions shared a unique similarity to HIV-1. Results from our re-implementation of the analyses, built on larger-scale datasets using state-of-the-art bioinformatics methods and databases, do not support the conclusions proposed by these manuscripts. Based on our analyses and existing data of coronaviruses, we concluded that the intermediate hosts of 2019-nCoV are more likely to be mammals and birds than snakes, and that the "novel insertions" observed in the spike protein are naturally evolved from bat coronaviruses.
[ { "created": "Sat, 8 Feb 2020 14:24:59 GMT", "version": "v1" } ]
2020-04-28
[ [ "Zhang", "Chengxin", "" ], [ "Zheng", "Wei", "" ], [ "Huang", "Xiaoqiang", "" ], [ "Bell", "Eric W.", "" ], [ "Zhou", "Xiaogen", "" ], [ "Zhang", "Yang", "" ] ]
As the infection of 2019-nCoV coronavirus is quickly developing into a global pneumonia epidemic, careful analysis of its transmission and cellular mechanisms is sorely needed. In this report, we re-analyzed the computational approaches and findings presented in two recent manuscripts by Ji et al. (https://doi.org/10.1002/jmv.25682) and by Pradhan et al. (https://doi.org/10.1101/2020.01.30.927871), which concluded that snakes are the intermediate hosts of 2019-nCoV and that the 2019-nCoV spike protein insertions shared a unique similarity to HIV-1. Results from our re-implementation of the analyses, built on larger-scale datasets using state-of-the-art bioinformatics methods and databases, do not support the conclusions proposed by these manuscripts. Based on our analyses and existing data of coronaviruses, we concluded that the intermediate hosts of 2019-nCoV are more likely to be mammals and birds than snakes, and that the "novel insertions" observed in the spike protein are naturally evolved from bat coronaviruses.
2310.11247
Francoise Lecaignard
Francoise Lecaignard and Jeremie Mattout
Mismatch Negativity: time for deconstruction
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Error signals are the cornerstone of predictive coding and are widely considered essential to sensory perception and beyond. The mismatch negativity (MMN) is arguably the most emblematic and most studied brain error signal. It is affected in many brain disorders. However, its precise algorithmic function and the underlying physiology remain mysterious. Over the past decade, theoretical and computational explanations have been put forward. They highlight a paradox: the MMN is considered a signature of context-dependent perceptual learning, although it is defined as an evoked response averaged across trials, thus neglecting the information carried by error signal fluctuations over time. We propose to deconstruct the MMN, by virtue of hypothesis driven computational approaches whose aim it to account for these fluctuations.
[ { "created": "Tue, 17 Oct 2023 13:15:42 GMT", "version": "v1" }, { "created": "Mon, 15 Jan 2024 10:38:19 GMT", "version": "v2" }, { "created": "Tue, 6 Feb 2024 20:57:51 GMT", "version": "v3" } ]
2024-02-08
[ [ "Lecaignard", "Francoise", "" ], [ "Mattout", "Jeremie", "" ] ]
Error signals are the cornerstone of predictive coding and are widely considered essential to sensory perception and beyond. The mismatch negativity (MMN) is arguably the most emblematic and most studied brain error signal. It is affected in many brain disorders. However, its precise algorithmic function and the underlying physiology remain mysterious. Over the past decade, theoretical and computational explanations have been put forward. They highlight a paradox: the MMN is considered a signature of context-dependent perceptual learning, although it is defined as an evoked response averaged across trials, thus neglecting the information carried by error signal fluctuations over time. We propose to deconstruct the MMN, by virtue of hypothesis-driven computational approaches whose aim is to account for these fluctuations.
2303.11099
Thomas Wahl
Thomas Wahl, Michel Duprez, Axel Hutt
Closed-loop neurostimulation in real-time for the treatment of pathological brain rhythms
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Mental disorders may exhibit pathological brain rhythms and neurostimulation promises to alleviate of patients' symptoms by modifying these rhythms. Today, most neurostimulation schemes are open-loop, i.e. administer experimental stimulation protocols independent of the patients brain activity which may yield a sub-optimal treatment. We propose a closed-loop feedback control scheme estimating an optimal stimulation based on observed brain activity. The optimal stimulation is chosen according to a user-defined target frequency distribution, which permits frequency tuning of the brain activity in real-time. The mathematical description details the major control elements and applications to biologically realistic simulated brain activity illustrate the scheme's possible power in medical practice. Clinical relevance - The proposed neurostimulation control theme promises to permit the medical personnel to tune a patient's brain activity in real-time.
[ { "created": "Mon, 20 Mar 2023 13:34:22 GMT", "version": "v1" } ]
2023-03-21
[ [ "Wahl", "Thomas", "" ], [ "Duprez", "Michel", "" ], [ "Hutt", "Axel", "" ] ]
Mental disorders may exhibit pathological brain rhythms, and neurostimulation promises to alleviate patients' symptoms by modifying these rhythms. Today, most neurostimulation schemes are open-loop, i.e. they administer experimental stimulation protocols independent of the patient's brain activity, which may yield a sub-optimal treatment. We propose a closed-loop feedback control scheme estimating an optimal stimulation based on observed brain activity. The optimal stimulation is chosen according to a user-defined target frequency distribution, which permits frequency tuning of the brain activity in real-time. The mathematical description details the major control elements and applications to biologically realistic simulated brain activity illustrate the scheme's possible power in medical practice. Clinical relevance - The proposed neurostimulation control scheme promises to permit the medical personnel to tune a patient's brain activity in real-time.
2002.07716
Bartosz Jura
Bartosz Jura
Synaptic clock as a neural substrate of consciousness
16 pages, 4 figures; added references, extended discussion of the time consciousness literature
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this theoretical work the temporal aspect of consciousness is analyzed. We start from the notion that while conscious experience seems to change constantly, yet for any of its contents to be consciously perceived they must last for some non-zero duration of time, which appears to constitute certain conflict. We posit that, in terms of phenomenological analysis of consciousness, the temporal aspect, and this apparent conflict in particular, might be the most basic property, likely inherent to any conceivable form of consciousness. It is then outlined how taking this perspective offers a concrete way of relating the properties of consciousness directly to the neural plasticity mechanisms of learning and memory, and specifying how exactly subjective experience might be related to processes of information integration. In particular, we propose synaptic clock to constitute a content-specific neural substrate of consciousness, explaining how it would correspond to this temporal aspect. Then, we propose a viewpoint, in which moments of subjective time have different durations, depending on the type of information processed, proportional to the time units of corresponding synaptic clocks, and being in principle different for different brain regions and nervous systems in different animal species. Relation and possible contributions of this viewpoint to the extensional model of time consciousness are discussed. Finally, we consider the two alternative views on the structure of consciousness, namely a static and a dynamic one, and argue in favor of the latter, proposing that consciousness can be best understood if change is considered its only dimension.
[ { "created": "Tue, 18 Feb 2020 16:43:58 GMT", "version": "v1" }, { "created": "Sat, 5 Mar 2022 21:16:46 GMT", "version": "v2" } ]
2022-03-08
[ [ "Jura", "Bartosz", "" ] ]
In this theoretical work the temporal aspect of consciousness is analyzed. We start from the notion that while conscious experience seems to change constantly, for any of its contents to be consciously perceived they must last for some non-zero duration of time, which appears to constitute a certain conflict. We posit that, in terms of phenomenological analysis of consciousness, the temporal aspect, and this apparent conflict in particular, might be the most basic property, likely inherent to any conceivable form of consciousness. It is then outlined how taking this perspective offers a concrete way of relating the properties of consciousness directly to the neural plasticity mechanisms of learning and memory, and specifying how exactly subjective experience might be related to processes of information integration. In particular, we propose the synaptic clock to constitute a content-specific neural substrate of consciousness, explaining how it would correspond to this temporal aspect. Then, we propose a viewpoint, in which moments of subjective time have different durations, depending on the type of information processed, proportional to the time units of corresponding synaptic clocks, and being in principle different for different brain regions and nervous systems in different animal species. Relation and possible contributions of this viewpoint to the extensional model of time consciousness are discussed. Finally, we consider the two alternative views on the structure of consciousness, namely a static and a dynamic one, and argue in favor of the latter, proposing that consciousness can be best understood if change is considered its only dimension.
1904.01637
Safoora Yousefi
Safoora Yousefi, Amirreza Shaban, Mohamed Amgad, Ramraj Chandradevan, Lee A. D. Cooper
Learning Clinical Outcomes from Heterogeneous Genomic Data Sources
null
null
null
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Translating the vast data generated by genomic platforms into reliable predictions of clinical outcomes remains a critical challenge in realizing the promise of genomic medicine largely due to small number of independent samples. In this paper, we show that neural networks can be trained to predict clinical outcomes using heterogeneous genomic data sources via multi-task learning and adversarial representation learning, allowing one to combine multiple cohorts and outcomes in training. We compare our proposed method to two baselines and demonstrate that it can be used to help mitigate the data scarcity and clinical outcome censorship in cancer genomics learning problems.
[ { "created": "Tue, 2 Apr 2019 19:36:51 GMT", "version": "v1" } ]
2019-04-04
[ [ "Yousefi", "Safoora", "" ], [ "Shaban", "Amirreza", "" ], [ "Amgad", "Mohamed", "" ], [ "Chandradevan", "Ramraj", "" ], [ "Cooper", "Lee A. D.", "" ] ]
Translating the vast data generated by genomic platforms into reliable predictions of clinical outcomes remains a critical challenge in realizing the promise of genomic medicine, largely due to the small number of independent samples. In this paper, we show that neural networks can be trained to predict clinical outcomes using heterogeneous genomic data sources via multi-task learning and adversarial representation learning, allowing one to combine multiple cohorts and outcomes in training. We compare our proposed method to two baselines and demonstrate that it can be used to help mitigate the data scarcity and clinical outcome censorship in cancer genomics learning problems.
0801.1012
George Tsibidis
George D. Tsibidis and Nektarios Tavernarakis
Nemo: a computational tool for analyzing nematode locomotion
12 pages, 2 figures. accepted by BMC Neuroscience 2007, 8:86
BMC Neuroscience 2007, 8:86
null
null
q-bio.OT q-bio.GN q-bio.QM
null
The nematode Caenorhabditis elegans responds to an impressive range of chemical, mechanical and thermal stimuli and is extensively used to investigate the molecular mechanisms that mediate chemosensation, mechanotransduction and thermosensation. The main behavioral output of these responses is manifested as alterations in animal locomotion. Monitoring and examination of such alterations requires tools to capture and quantify features of nematode movement. In this paper, we introduce Nemo (nematode movement), a computationally efficient and robust two-dimensional object tracking algorithm for automated detection and analysis of C. elegans locomotion. This algorithm enables precise measurement and feature extraction of nematode movement components. In addition, we develop a Graphical User Interface designed to facilitate processing and interpretation of movement data. While, in this study, we focus on the simple sinusoidal locomotion of C. elegans, our approach can be readily adapted to handle complicated locomotory behaviour patterns by including additional movement characteristics and parameters subject to quantification. Our software tool offers the capacity to extract, analyze and measure nematode locomotion features by processing simple video files. By allowing precise and quantitative assessment of behavioral traits, this tool will assist the genetic dissection and elucidation of the molecular mechanisms underlying specific behavioral responses.
[ { "created": "Mon, 7 Jan 2008 15:00:52 GMT", "version": "v1" } ]
2008-02-21
[ [ "Tsibidis", "George D.", "" ], [ "Tavernarakis", "Nektarios", "" ] ]
The nematode Caenorhabditis elegans responds to an impressive range of chemical, mechanical and thermal stimuli and is extensively used to investigate the molecular mechanisms that mediate chemosensation, mechanotransduction and thermosensation. The main behavioral output of these responses is manifested as alterations in animal locomotion. Monitoring and examination of such alterations requires tools to capture and quantify features of nematode movement. In this paper, we introduce Nemo (nematode movement), a computationally efficient and robust two-dimensional object tracking algorithm for automated detection and analysis of C. elegans locomotion. This algorithm enables precise measurement and feature extraction of nematode movement components. In addition, we develop a Graphical User Interface designed to facilitate processing and interpretation of movement data. While, in this study, we focus on the simple sinusoidal locomotion of C. elegans, our approach can be readily adapted to handle complicated locomotory behaviour patterns by including additional movement characteristics and parameters subject to quantification. Our software tool offers the capacity to extract, analyze and measure nematode locomotion features by processing simple video files. By allowing precise and quantitative assessment of behavioral traits, this tool will assist the genetic dissection and elucidation of the molecular mechanisms underlying specific behavioral responses.
2102.11013
Juntang Zhuang
Juntang Zhuang, Nicha Dvornek, Sekhar Tatikonda, Xenophon Papademetris, Pamela Ventola, James Duncan
Multiple-shooting adjoint method for whole-brain dynamic causal modeling
27th International Conference on Information Processing in Medical Imaging
null
null
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dynamic causal modeling (DCM) is a Bayesian framework to infer directed connections between compartments, and has been used to describe the interactions between underlying neural populations based on functional neuroimaging data. DCM is typically analyzed with the expectation-maximization (EM) algorithm. However, because the inversion of a large-scale continuous system is difficult when noisy observations are present, DCM by EM is typically limited to a small number of compartments ($<10$). Another drawback with the current method is its complexity; when the forward model changes, the posterior mean changes, and we need to re-derive the algorithm for optimization. In this project, we propose the Multiple-Shooting Adjoint (MSA) method to address these limitations. MSA uses the multiple-shooting method for parameter estimation in ordinary differential equations (ODEs) under noisy observations, and is suitable for large-scale systems such as whole-brain analysis in functional MRI (fMRI). Furthermore, MSA uses the adjoint method for accurate gradient estimation in the ODE; since the adjoint method is generic, MSA is a generic method for both linear and non-linear systems, and does not require re-derivation of the algorithm as in EM. We validate MSA in extensive experiments: 1) in toy examples with both linear and non-linear models, we show that MSA achieves better accuracy in parameter value estimation than EM; furthermore, MSA can be successfully applied to large systems with up to 100 compartments; and 2) using real fMRI data, we apply MSA to the estimation of the whole-brain effective connectome and show improved classification of autism spectrum disorder (ASD) vs. control compared to using the functional connectome. The package is provided at \url{https://jzkay12.github.io/TorchDiffEqPack}
[ { "created": "Sun, 14 Feb 2021 05:00:12 GMT", "version": "v1" } ]
2021-02-23
[ [ "Zhuang", "Juntang", "" ], [ "Dvornek", "Nicha", "" ], [ "Tatikonda", "Sekhar", "" ], [ "Papademetris", "Xenophon", "" ], [ "Ventola", "Pamela", "" ], [ "Duncan", "James", "" ] ]
Dynamic causal modeling (DCM) is a Bayesian framework to infer directed connections between compartments, and has been used to describe the interactions between underlying neural populations based on functional neuroimaging data. DCM is typically analyzed with the expectation-maximization (EM) algorithm. However, because the inversion of a large-scale continuous system is difficult when noisy observations are present, DCM by EM is typically limited to a small number of compartments ($<10$). Another drawback with the current method is its complexity; when the forward model changes, the posterior mean changes, and we need to re-derive the algorithm for optimization. In this project, we propose the Multiple-Shooting Adjoint (MSA) method to address these limitations. MSA uses the multiple-shooting method for parameter estimation in ordinary differential equations (ODEs) under noisy observations, and is suitable for large-scale systems such as whole-brain analysis in functional MRI (fMRI). Furthermore, MSA uses the adjoint method for accurate gradient estimation in the ODE; since the adjoint method is generic, MSA is a generic method for both linear and non-linear systems, and does not require re-derivation of the algorithm as in EM. We validate MSA in extensive experiments: 1) in toy examples with both linear and non-linear models, we show that MSA achieves better accuracy in parameter value estimation than EM; furthermore, MSA can be successfully applied to large systems with up to 100 compartments; and 2) using real fMRI data, we apply MSA to the estimation of the whole-brain effective connectome and show improved classification of autism spectrum disorder (ASD) vs. control compared to using the functional connectome. The package is provided at \url{https://jzkay12.github.io/TorchDiffEqPack}
1609.06424
{\O}rnulf Borgan
{\O}rnulf Borgan (Department of Mathematics, University of Oslo)
Do Japanese and Italian women live longer than women in Scandinavia?
7 pages, 3 figures
null
null
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Life expectancies at birth are routinely computed from period life tables. Such period life expectancies may be distorted by selection when comparing countries where the living conditions improved earlier (like Norway and Sweden) with countries where they improved later (like Italy and Japan). One way to get a fair comparison between the countries is to use cohort data and consider the expected number of years lost before a given age a. Contrary to the results based on period data, one then finds that Italian women may expect to lose more years than women in Norway and Sweden, while there are no indications that Japanese women will lose fewer years than Scandinavian women.
[ { "created": "Wed, 21 Sep 2016 06:13:03 GMT", "version": "v1" } ]
2016-09-22
[ [ "Borgan", "Ørnulf", "", "Department of Mathematics, University of Oslo" ] ]
Life expectancies at birth are routinely computed from period life tables. Such period life expectancies may be distorted by selection when comparing countries where the living conditions improved earlier (like Norway and Sweden) with countries where they improved later (like Italy and Japan). One way to get a fair comparison between the countries is to use cohort data and consider the expected number of years lost before a given age a. Contrary to the results based on period data, one then finds that Italian women may expect to lose more years than women in Norway and Sweden, while there are no indications that Japanese women will lose fewer years than Scandinavian women.
1809.01281
Nadine Chang
Nadine Chang, John A. Pyles, Abhinav Gupta, Michael J. Tarr, Elissa M. Aminoff
BOLD5000: A public fMRI dataset of 5000 images
Currently in submission to Scientific Data
null
10.1038/s41597-019-0052-3
null
q-bio.NC cs.CV
http://creativecommons.org/licenses/by/4.0/
Vision science, particularly machine vision, has been revolutionized by introducing large-scale image datasets and statistical learning approaches. Yet, human neuroimaging studies of visual perception still rely on small numbers of images (around 100) due to time-constrained experimental procedures. To apply statistical learning approaches that integrate neuroscience, the number of images used in neuroimaging must be significantly increased. We present BOLD5000, a human functional MRI (fMRI) study that includes almost 5,000 distinct images depicting real-world scenes. Beyond dramatically increasing image dataset size relative to prior fMRI studies, BOLD5000 also accounts for image diversity, overlapping with standard computer vision datasets by incorporating images from the Scene UNderstanding (SUN), Common Objects in Context (COCO), and ImageNet datasets. The scale and diversity of these image datasets, combined with a slow event-related fMRI design, enable fine-grained exploration into the neural representation of a wide range of visual features, categories, and semantics. Concurrently, BOLD5000 brings us closer to realizing Marr's dream of a singular vision science - the intertwined study of biological and computer vision.
[ { "created": "Wed, 5 Sep 2018 00:50:34 GMT", "version": "v1" } ]
2020-01-23
[ [ "Chang", "Nadine", "" ], [ "Pyles", "John A.", "" ], [ "Gupta", "Abhinav", "" ], [ "Tarr", "Michael J.", "" ], [ "Aminoff", "Elissa M.", "" ] ]
Vision science, particularly machine vision, has been revolutionized by introducing large-scale image datasets and statistical learning approaches. Yet, human neuroimaging studies of visual perception still rely on small numbers of images (around 100) due to time-constrained experimental procedures. To apply statistical learning approaches that integrate neuroscience, the number of images used in neuroimaging must be significantly increased. We present BOLD5000, a human functional MRI (fMRI) study that includes almost 5,000 distinct images depicting real-world scenes. Beyond dramatically increasing image dataset size relative to prior fMRI studies, BOLD5000 also accounts for image diversity, overlapping with standard computer vision datasets by incorporating images from the Scene UNderstanding (SUN), Common Objects in Context (COCO), and ImageNet datasets. The scale and diversity of these image datasets, combined with a slow event-related fMRI design, enable fine-grained exploration into the neural representation of a wide range of visual features, categories, and semantics. Concurrently, BOLD5000 brings us closer to realizing Marr's dream of a singular vision science - the intertwined study of biological and computer vision.
1510.07992
Ben Nolting
Christopher M. Moore, Christopher R. Stieha, Ben C. Nolting, Maria K. Cameron, and Karen C. Abbott
QPot: An R Package for Stochastic Differential Equation Quasi-Potential Analysis
null
null
null
null
q-bio.QM math.DS math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
QPot is an R package for analyzing two-dimensional systems of stochastic differential equations. It provides users with a wide range of tools to simulate, analyze, and visualize the dynamics of these systems. One of QPot's key features is the computation of the quasi-potential, an important tool for studying stochastic systems. Quasi-potentials are particularly useful for comparing the relative stabilities of equilibria in systems with alternative stable states. This paper describes QPot's primary functions, and explains how quasi-potentials can yield insights about the dynamics of stochastic systems. Three worked examples guide users through the application of QPot's functions.
[ { "created": "Tue, 27 Oct 2015 17:10:57 GMT", "version": "v1" } ]
2015-10-28
[ [ "Moore", "Christopher M.", "" ], [ "Stieha", "Christopher R.", "" ], [ "Nolting", "Ben C.", "" ], [ "Cameron", "Maria K.", "" ], [ "Abbott", "Karen C.", "" ] ]
QPot is an R package for analyzing two-dimensional systems of stochastic differential equations. It provides users with a wide range of tools to simulate, analyze, and visualize the dynamics of these systems. One of QPot's key features is the computation of the quasi-potential, an important tool for studying stochastic systems. Quasi-potentials are particularly useful for comparing the relative stabilities of equilibria in systems with alternative stable states. This paper describes QPot's primary functions, and explains how quasi-potentials can yield insights about the dynamics of stochastic systems. Three worked examples guide users through the application of QPot's functions.
1005.1142
Atsushi Kamimura
Atsushi Kamimura and Kunihiko Kaneko
Reproduction of a Protocell by Replication of Minority Molecule in Catalytic Reaction Network
13 pages, 7 figures, submitted for publication
null
10.1103/PhysRevLett.105.268103
null
q-bio.CB cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For understanding the origin of life, it is essential to explain the development of a compartmentalized structure, which undergoes growth and division, from a set of chemical reactions. In this study, a hypercycle with two chemicals that mutually catalyze each other is considered in order to show that the reproduction of a protocell with a growth-division process naturally occurs when the replication speed of one chemical is considerably slower than that of the other chemical. It is observed that the protocell divides after a minority molecule is replicated at a slow synthesis rate, and thus, a synchrony between the reproduction of a cell and molecule replication is achieved. The robustness of such protocells against the invasion of parasitic molecules is also demonstrated.
[ { "created": "Fri, 7 May 2010 07:20:29 GMT", "version": "v1" } ]
2015-05-18
[ [ "Kamimura", "Atsushi", "" ], [ "Kaneko", "Kunihiko", "" ] ]
For understanding the origin of life, it is essential to explain the development of a compartmentalized structure, which undergoes growth and division, from a set of chemical reactions. In this study, a hypercycle with two chemicals that mutually catalyze each other is considered in order to show that the reproduction of a protocell with a growth-division process naturally occurs when the replication speed of one chemical is considerably slower than that of the other chemical. It is observed that the protocell divides after a minority molecule is replicated at a slow synthesis rate, and thus, a synchrony between the reproduction of a cell and molecule replication is achieved. The robustness of such protocells against the invasion of parasitic molecules is also demonstrated.
1807.03091
Aleksandr Aravkin
Chris Vogl, Peng Zheng, Stephen P. Seslar, and Aleksandr Y. Aravkin
Computer Assisted Localization of a Heart Arrhythmia
4 pages, 5 figures
null
null
null
q-bio.TO math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of locating a point-source heart arrhythmia using data from a standard diagnostic procedure, where a reference catheter is placed in the heart, and arrival times from a second diagnostic catheter are recorded as the diagnostic catheter moves around within the heart. We model this situation as a nonconvex feasibility problem, where given a set of arrival times, we look for a source location that is consistent with the available data. We develop a new optimization approach and fast algorithm to obtain online proposals for the next location to suggest to the operator as she collects data. We validate the procedure using a Monte Carlo simulation based on patients' electrophysiological data. The proposed procedure robustly and quickly locates the source of arrhythmias without any prior knowledge of heart anatomy.
[ { "created": "Mon, 9 Jul 2018 13:06:38 GMT", "version": "v1" } ]
2018-07-10
[ [ "Vogl", "Chris", "" ], [ "Zheng", "Peng", "" ], [ "Seslar", "Stephen P.", "" ], [ "Aravkin", "Aleksandr Y.", "" ] ]
We consider the problem of locating a point-source heart arrhythmia using data from a standard diagnostic procedure, where a reference catheter is placed in the heart, and arrival times from a second diagnostic catheter are recorded as the diagnostic catheter moves around within the heart. We model this situation as a nonconvex feasibility problem, where given a set of arrival times, we look for a source location that is consistent with the available data. We develop a new optimization approach and fast algorithm to obtain online proposals for the next location to suggest to the operator as she collects data. We validate the procedure using a Monte Carlo simulation based on patients' electrophysiological data. The proposed procedure robustly and quickly locates the source of arrhythmias without any prior knowledge of heart anatomy.
2012.15681
William Bialek
Vasyl Alba, Gordon J. Berman, William Bialek, and Joshua W. Shaevitz
Exploring a strongly non-Markovian animal behavior
null
null
null
null
q-bio.NC cond-mat.stat-mech
http://creativecommons.org/licenses/by/4.0/
A freely walking fly visits roughly 100 stereotyped states in a strongly non-Markovian sequence. To explore these dynamics, we develop a generalization of the information bottleneck method, compressing the large number of behavioral states into a more compact description that maximally preserves the correlations between successive states. Surprisingly, preserving these short time correlations with a compression into just two states captures the long ranged correlations seen in the raw data. Having reduced the behavior to a binary sequence, we describe the distribution of these sequences by an Ising model with pairwise interactions, which is the maximum entropy model that matches the two-point correlations. Matching the correlation function at longer and longer times drives the resulting model toward the Ising model with inverse square interactions and near zero magnetic field. The emergence of this statistical physics problem from the analysis of real data on animal behavior is unexpected.
[ { "created": "Thu, 31 Dec 2020 16:02:44 GMT", "version": "v1" } ]
2021-01-01
[ [ "Alba", "Vasyl", "" ], [ "Berman", "Gordon J.", "" ], [ "Bialek", "William", "" ], [ "Shaevitz", "Joshua W.", "" ] ]
A freely walking fly visits roughly 100 stereotyped states in a strongly non-Markovian sequence. To explore these dynamics, we develop a generalization of the information bottleneck method, compressing the large number of behavioral states into a more compact description that maximally preserves the correlations between successive states. Surprisingly, preserving these short time correlations with a compression into just two states captures the long ranged correlations seen in the raw data. Having reduced the behavior to a binary sequence, we describe the distribution of these sequences by an Ising model with pairwise interactions, which is the maximum entropy model that matches the two-point correlations. Matching the correlation function at longer and longer times drives the resulting model toward the Ising model with inverse square interactions and near zero magnetic field. The emergence of this statistical physics problem from the analysis of real data on animal behavior is unexpected.
1710.02362
Cyril Karamaoun
Cyril Karamaoun, Beno\^it Haut and Alain Van Muylem
A new role for exhaled nitric oxide as a functional marker of peripheral airway caliber changes: a theoretical study
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Though considered as an inflammation marker, exhaled nitric oxide (FENO) was shown to be sensitive to airway caliber changes to such an extent that it might be considered as a marker of them. It is thus important to understand how these changes and their localization mechanically affect the total NO flux penetrating the airway lumen (JawNO), hence FENO, independently from any inflammatory status change. A new model was used which simulates NO production, consumption and diffusion inside the airway epithelium wall, then, NO excretion through the epithelial wall into the airway lumen and, finally, its axial transport by diffusion and convection in the airway lumen. This model may also consider the presence of a mucus layer coating the epithelial wall. Simulations were performed that showed the great sensitivity of JawNO to peripheral airway caliber changes. Moreover, FENO showed distinct behaviors depending on the location of the caliber change. Considering a bronchodilation, absence of FENO change was associated with dilation of central airways, FENO increase with dilation up to pre-acinar small airways, and FENO decrease with intra-acinar dilation due to amplification of the back-diffusion flux. The presence of a mucus layer was also shown to play a significant role in FENO changes. Altogether, the present work provides theoretical evidence that specific FENO changes in acute situations are linked to specifically located airway caliber changes in the lung periphery. This opens the way for a new role for FENO as a functional marker of peripheral airway caliber change.
[ { "created": "Fri, 6 Oct 2017 12:07:33 GMT", "version": "v1" } ]
2017-10-09
[ [ "Karamaoun", "Cyril", "" ], [ "Haut", "Benoît", "" ], [ "Van Muylem", "Alain", "" ] ]
Though considered as an inflammation marker, exhaled nitric oxide (FENO) was shown to be sensitive to airway caliber changes to such an extent that it might be considered as a marker of them. It is thus important to understand how these changes and their localization mechanically affect the total NO flux penetrating the airway lumen (JawNO), hence FENO, independently from any inflammatory status change. A new model was used which simulates NO production, consumption and diffusion inside the airway epithelium wall, then, NO excretion through the epithelial wall into the airway lumen and, finally, its axial transport by diffusion and convection in the airway lumen. This model may also consider the presence of a mucus layer coating the epithelial wall. Simulations were performed that showed the great sensitivity of JawNO to peripheral airway caliber changes. Moreover, FENO showed distinct behaviors depending on the location of the caliber change. Considering a bronchodilation, absence of FENO change was associated with dilation of central airways, FENO increase with dilation up to pre-acinar small airways, and FENO decrease with intra-acinar dilation due to amplification of the back-diffusion flux. The presence of a mucus layer was also shown to play a significant role in FENO changes. Altogether, the present work provides theoretical evidence that specific FENO changes in acute situations are linked to specifically located airway caliber changes in the lung periphery. This opens the way for a new role for FENO as a functional marker of peripheral airway caliber change.
1104.5008
William Bruno Ph.D.
William J. Bruno
What does photon energy tell us about cellphone safety?
6 pages. Revision includes appended response to published critique by B. Leikind
null
null
null
q-bio.OT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been argued that cellphones are safe because a single microwave photon does not have enough energy to break a chemical bond. We show that cellphone technology operates in the classical wave limit, not the single photon limit. Based on energy densities relative to thermal energy, we estimate thresholds at which effects might be expected. These seem to correspond somewhat with many experimental observations. Revised with appendix responding to critique published by B. Leikind.
[ { "created": "Tue, 26 Apr 2011 19:58:26 GMT", "version": "v1" }, { "created": "Tue, 25 Apr 2017 03:00:34 GMT", "version": "v2" } ]
2017-04-26
[ [ "Bruno", "William J.", "" ] ]
It has been argued that cellphones are safe because a single microwave photon does not have enough energy to break a chemical bond. We show that cellphone technology operates in the classical wave limit, not the single photon limit. Based on energy densities relative to thermal energy, we estimate thresholds at which effects might be expected. These seem to correspond somewhat with many experimental observations. Revised with appendix responding to critique published by B. Leikind.
1911.06921
Joseph Natale
Joseph L. Natale, H. George E. Hentschel, Ilya Nemenman
Precise Spatial Memory in Local Random Networks
9 pages, 5 figures; presented at APS March Meeting 2019 conference (http://meetings.aps.org/Meeting/MAR19/Session/R67.10)
Phys. Rev. E 102, 022405 (2020)
10.1103/PhysRevE.102.022405
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-sustained, elevated neuronal activity persisting on time scales of ten seconds or longer is thought to be vital for aspects of working memory, including brain representations of real space. Continuous-attractor neural networks, one of the most well-known modeling frameworks for persistent activity, have been able to model crucial aspects of such spatial memory. These models tend to require highly structured or regular synaptic architectures. In contrast, we elaborate a geometrically-embedded model with a local but otherwise random connectivity profile which, combined with a global regulation of the mean firing rate, produces localized, finely spaced discrete attractors that effectively span a 2D manifold. We demonstrate how the set of attracting states can reliably encode a representation of the spatial locations at which the system receives external input, thereby accomplishing spatial memory via attractor dynamics without synaptic fine-tuning or regular structure. We measure the network's storage capacity and find that the statistics of retrievable positions are also equivalent to a full tiling of the plane, something hitherto achievable only with (approximately) translationally invariant synapses, and which may be of interest in modeling such biological phenomena as visuospatial working memory in two dimensions.
[ { "created": "Sat, 16 Nov 2019 00:24:04 GMT", "version": "v1" } ]
2020-08-19
[ [ "Natale", "Joseph L.", "" ], [ "Hentschel", "H. George E.", "" ], [ "Nemenman", "Ilya", "" ] ]
Self-sustained, elevated neuronal activity persisting on time scales of ten seconds or longer is thought to be vital for aspects of working memory, including brain representations of real space. Continuous-attractor neural networks, one of the most well-known modeling frameworks for persistent activity, have been able to model crucial aspects of such spatial memory. These models tend to require highly structured or regular synaptic architectures. In contrast, we elaborate a geometrically-embedded model with a local but otherwise random connectivity profile which, combined with a global regulation of the mean firing rate, produces localized, finely spaced discrete attractors that effectively span a 2D manifold. We demonstrate how the set of attracting states can reliably encode a representation of the spatial locations at which the system receives external input, thereby accomplishing spatial memory via attractor dynamics without synaptic fine-tuning or regular structure. We measure the network's storage capacity and find that the statistics of retrievable positions are also equivalent to a full tiling of the plane, something hitherto achievable only with (approximately) translationally invariant synapses, and which may be of interest in modeling such biological phenomena as visuospatial working memory in two dimensions.
1510.04180
Andrew Hart PhD
Andrew Hart and Servet Mart\'inez
An Entropy-Based Technique for Classifying Bacterial Chromosomes According to Synonymous Codon Usage
22 pages, 3 figures
null
null
null
q-bio.GN math.PR stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a framework based on conditional entropy and the Dirichlet distribution for classifying chromosomes based on the degree to which they use synonymous codons uniformly or preferentially, that is, whether or not codons that code for an amino acid appear with the same relative frequency. Applying the approach to a large collection of annotated bacterial chromosomes reveals three distinct groups of bacteria.
[ { "created": "Wed, 14 Oct 2015 16:03:09 GMT", "version": "v1" } ]
2015-10-15
[ [ "Hart", "Andrew", "" ], [ "Martínez", "Servet", "" ] ]
We present a framework based on conditional entropy and the Dirichlet distribution for classifying chromosomes based on the degree to which they use synonymous codons uniformly or preferentially, that is, whether or not codons that code for an amino acid appear with the same relative frequency. Applying the approach to a large collection of annotated bacterial chromosomes reveals three distinct groups of bacteria.
1712.00919
Erin Gorsich
Erin E. Gorsich, Rampal S. Etienne, Jan Medlock, Brianna R. Beechler, Johannie M. Spaan, Robert S. Spaan, Vanessa O. Ezenwa, Anna E. Jolles
Interactions between chronic diseases: asymmetric outcomes of co-infection at individual and population scales
13 pages, 4 figures, 3 documents in the supporting information
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Co-infecting parasites and pathogens remain a leading challenge for global public health due to their consequences for individual-level infection risk and disease progression. However, a clear understanding of the population-level consequences of co-infection is lacking. Here, we constructed a model that includes three individual-level effects of co-infection: mortality, fecundity, and transmission. We used the model to investigate how these individual-level consequences of co-infection scale up to produce population-level infection patterns. To parameterize this model, we conducted a four-year cohort study in African buffalo to estimate the individual-level effects of co-infection with two bacterial pathogens, bovine tuberculosis (BTB) and brucellosis, across a range of demographic and environmental contexts. At the individual-level, our empirical results identified BTB as a risk factor for acquiring brucellosis, but we found no association between brucellosis and the risk of acquiring BTB. Both infections were associated with reductions in survival and neither infection was associated with reductions in fecundity. Results of the model reproduce co-infection patterns in the data and predict opposite impacts of co-infection at individual and population scales: whereas BTB facilitated brucellosis infection at the individual-level, our model predicts the presence of brucellosis to have a strong negative impact on BTB at the population-level. In modeled populations where brucellosis is present, the endemic prevalence and basic reproduction number (Ro) of BTB were lower than in populations without brucellosis. Therefore, these results provide a data-driven example of competition between co-infecting pathogens that occurs when one pathogen facilitates secondary infections at the individual level.
[ { "created": "Mon, 4 Dec 2017 06:15:22 GMT", "version": "v1" } ]
2017-12-05
[ [ "Gorsich", "Erin E.", "" ], [ "Etienne", "Rampal S.", "" ], [ "Medlock", "Jan", "" ], [ "Beechler", "Brianna R.", "" ], [ "Spaan", "Johannie M.", "" ], [ "Spaan", "Robert S.", "" ], [ "Ezenwa", "Vanessa O.", "" ], [ "Jolles", "Anna E.", "" ] ]
Co-infecting parasites and pathogens remain a leading challenge for global public health due to their consequences for individual-level infection risk and disease progression. However, a clear understanding of the population-level consequences of co-infection is lacking. Here, we constructed a model that includes three individual-level effects of co-infection: mortality, fecundity, and transmission. We used the model to investigate how these individual-level consequences of co-infection scale up to produce population-level infection patterns. To parameterize this model, we conducted a four-year cohort study in African buffalo to estimate the individual-level effects of co-infection with two bacterial pathogens, bovine tuberculosis (BTB) and brucellosis, across a range of demographic and environmental contexts. At the individual-level, our empirical results identified BTB as a risk factor for acquiring brucellosis, but we found no association between brucellosis and the risk of acquiring BTB. Both infections were associated with reductions in survival and neither infection was associated with reductions in fecundity. Results of the model reproduce co-infection patterns in the data and predict opposite impacts of co-infection at individual and population scales: whereas BTB facilitated brucellosis infection at the individual-level, our model predicts the presence of brucellosis to have a strong negative impact on BTB at the population-level. In modeled populations where brucellosis is present, the endemic prevalence and basic reproduction number (Ro) of BTB were lower than in populations without brucellosis. Therefore, these results provide a data-driven example of competition between co-infecting pathogens that occurs when one pathogen facilitates secondary infections at the individual level.
1001.4914
David Morrison
Meaghan E. Jenkins, David A. Morrison, Tony D. Auld
Estimating seed bank accumulation and dynamics in three obligate-seeder Proteaceae species
15 pages, including 4 Figures and 4 Tables
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The seed bank dynamics of the three co-occurring obligate-seeder (i.e. fire-sensitive) Proteaceae species, Banksia ericifolia, Banksia marginata and Petrophile pulchella, were examined at sites of varying time since the most recent fire (i.e. plant age) in the Sydney region. Significant variation among species was found in the number of cones produced, the position of the cones within the canopy, the percentage of barren cones produced (Banksia species only), the number of follicles/bracts produced per cone, and the number of seeds lost/released due to spontaneous fruit rupture. Thus, three different regeneration strategies were observed, highlighting the variation in reproductive strategies of co-occurring Proteaceae species. Ultimately, B. marginata potentially accumulated a seed bank of c. 3000 seeds per plant after 20 years, with c. 1500 seeds per plant for P. pulchella and c. 500 for B. ericifolia. Based on these data, B. marginata and B. ericifolia require a minimum fire-free period of 8-10 years, with 7-8 years for P. pulchella, to allow for an adequate seed bank to accumulate and thus ensure local persistence of these species in fire-prone habitats.
[ { "created": "Wed, 27 Jan 2010 11:35:02 GMT", "version": "v1" } ]
2010-01-28
[ [ "Jenkins", "Meaghan E.", "" ], [ "Morrison", "David A.", "" ], [ "Auld", "Tony D.", "" ] ]
The seed bank dynamics of the three co-occurring obligate-seeder (i.e. fire-sensitive) Proteaceae species, Banksia ericifolia, Banksia marginata and Petrophile pulchella, were examined at sites of varying time since the most recent fire (i.e. plant age) in the Sydney region. Significant variation among species was found in the number of cones produced, the position of the cones within the canopy, the percentage of barren cones produced (Banksia species only), the number of follicles/bracts produced per cone, and the number of seeds lost/released due to spontaneous fruit rupture. Thus, three different regeneration strategies were observed, highlighting the variation in reproductive strategies of co-occurring Proteaceae species. Ultimately, B. marginata potentially accumulated a seed bank of c. 3000 seeds per plant after 20 years, with c. 1500 seeds per plant for P. pulchella and c. 500 for B. ericifolia. Based on these data, B. marginata and B. ericifolia require a minimum fire-free period of 8-10 years, with 7-8 years for P. pulchella, to allow for an adequate seed bank to accumulate and thus ensure local persistence of these species in fire-prone habitats.
1602.06937
Luis Bonilla L.
F. Terragni, M. Carretero, V. Capasso and L. L. Bonilla
Stochastic Model of Tumor-induced Angiogenesis: Ensemble Averages and Deterministic Equations
26 pages, 8 figures
Phys. Rev. E 93, 022413 (2016)
10.1103/PhysRevE.93.022413
null
q-bio.TO cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A recent conceptual model of tumor-driven angiogenesis including branching, elongation, and anastomosis of blood vessels captures some of the intrinsic multiscale structures of this complex system, yet allowing to extract a deterministic integro-partial differential description of the vessel tip density [Phys. Rev. E 90, 062716 (2014)]. Here we solve the stochastic model, show that ensemble averages over many realizations correspond to the deterministic equations, and fit the anastomosis rate coefficient so that the total number of vessel tips evolves similarly in the deterministic and ensemble averaged stochastic descriptions.
[ { "created": "Mon, 22 Feb 2016 20:54:16 GMT", "version": "v1" } ]
2016-02-24
[ [ "Terragni", "F.", "" ], [ "Carretero", "M.", "" ], [ "Capasso", "V.", "" ], [ "Bonilla", "L. L.", "" ] ]
A recent conceptual model of tumor-driven angiogenesis including branching, elongation, and anastomosis of blood vessels captures some of the intrinsic multiscale structures of this complex system, yet allows the extraction of a deterministic integro-partial differential description of the vessel tip density [Phys. Rev. E 90, 062716 (2014)]. Here we solve the stochastic model, show that ensemble averages over many realizations correspond to the deterministic equations, and fit the anastomosis rate coefficient so that the total number of vessel tips evolves similarly in the deterministic and ensemble-averaged stochastic descriptions.
2102.04746
Mats Brun PhD
Mats K. Brun, Elyes Ahmed, Jan Martin Nordbotten, Nils Christian Stenseth
Modeling the process of speciation using a multi-scale framework including error estimates
27 pages, 27 figures
null
null
null
q-bio.PE math.AP
http://creativecommons.org/licenses/by/4.0/
This paper concerns the modeling and numerical simulation of the process of speciation. In particular, given conditions for which one or more speciation events within an ecosystem occur, our aim is to develop the necessary modeling and simulation tools. Care is also taken to establish a solid mathematical foundation on which our modeling framework is built. This is the subject of the first half of the paper. The second half is devoted to developing a multi-scale framework for eco-evolutionary modeling, where the relevant scales are that of species and individual/population, respectively. Hence, a system of interacting species can be described at the species level, while for branching species a population level description is necessary. Our multi-scale framework thus consists of coupling the species and population level models where speciation events are detected in advance and then resolved at the population scale until the branching is complete. Moreover, since the population level model is formulated as a PDE, we first establish the well-posedness in the time-discrete setting, and then derive the a posteriori error estimates which provides a fully computable upper bound on an energy-type error, including also for the case of general smooth distributions (which will be useful for the detection of speciation events). Several numerical tests validate our framework in practice.
[ { "created": "Tue, 9 Feb 2021 10:26:16 GMT", "version": "v1" }, { "created": "Thu, 7 Oct 2021 16:17:05 GMT", "version": "v2" } ]
2021-10-08
[ [ "Brun", "Mats K.", "" ], [ "Ahmed", "Elyes", "" ], [ "Nordbotten", "Jan Martin", "" ], [ "Stenseth", "Nils Christian", "" ] ]
This paper concerns the modeling and numerical simulation of the process of speciation. In particular, given conditions for which one or more speciation events within an ecosystem occur, our aim is to develop the necessary modeling and simulation tools. Care is also taken to establish a solid mathematical foundation on which our modeling framework is built. This is the subject of the first half of the paper. The second half is devoted to developing a multi-scale framework for eco-evolutionary modeling, where the relevant scales are those of the species and of the individual/population, respectively. Hence, a system of interacting species can be described at the species level, while for branching species a population-level description is necessary. Our multi-scale framework thus consists of coupling the species- and population-level models, where speciation events are detected in advance and then resolved at the population scale until the branching is complete. Moreover, since the population-level model is formulated as a PDE, we first establish its well-posedness in the time-discrete setting, and then derive a posteriori error estimates which provide a fully computable upper bound on an energy-type error, also for the case of general smooth distributions (which will be useful for the detection of speciation events). Several numerical tests validate our framework in practice.
1106.1723
Mike Steel Prof.
David Bryant and Mike Steel
'Bureaucratic' set systems, and their role in phylogenetics
6 pages, 1 figure
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We say that a collection $\Cc$ of subsets of $X$ is {\em bureaucratic} if every maximal hierarchy on $X$ contained in $\Cc$ is also maximum. We characterise bureaucratic set systems and show how they arise in phylogenetics. This framework has several useful algorithmic consequences: we generalize some earlier results and derive a polynomial-time algorithm for a parsimony problem arising in phylogenetic networks.
[ { "created": "Thu, 9 Jun 2011 07:37:27 GMT", "version": "v1" } ]
2011-06-10
[ [ "Bryant", "David", "" ], [ "Steel", "Mike", "" ] ]
We say that a collection $\mathcal{C}$ of subsets of $X$ is {\em bureaucratic} if every maximal hierarchy on $X$ contained in $\mathcal{C}$ is also maximum. We characterise bureaucratic set systems and show how they arise in phylogenetics. This framework has several useful algorithmic consequences: we generalize some earlier results and derive a polynomial-time algorithm for a parsimony problem arising in phylogenetic networks.
2204.07904
Madhur Mangalam
Damian G. Kelty-Stephen, Madhur Mangalam
Turing's cascade instability supports the coordination of the mind, brain, and behavior
53 pages, 13 figures
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Turing inspired a computer metaphor of the mind and brain that has been handy and has spawned decades of empirical investigation, but he did much more and offered behavioral and cognitive sciences another metaphor--that of the cascade. The time has come to confront Turing's cascading instability, which suggests a geometrical framework driven by power laws and can be studied using multifractal formalism and multiscale probability density function analysis. Here, we review a rapidly growing body of scientific investigations revealing signatures of cascade instability and their consequences for a perceiving, acting, and thinking organism. We review work related to executive functioning (planning to act), postural control (bodily poise for turning plans into action), and effortful perception (action to gather information in a single modality and action to blend multimodal information). We also review findings on neuronal avalanches in the brain, specifically about neural participation in body-wide cascades. Turing's cascade instability blends the mind, brain, and behavior across space and time scales and provides an alternative to the dominant computer metaphor.
[ { "created": "Sun, 17 Apr 2022 02:22:30 GMT", "version": "v1" } ]
2022-04-19
[ [ "Kelty-Stephen", "Damian G.", "" ], [ "Mangalam", "Madhur", "" ] ]
Turing inspired a computer metaphor of the mind and brain that has been handy and has spawned decades of empirical investigation, but he did much more and offered behavioral and cognitive sciences another metaphor--that of the cascade. The time has come to confront Turing's cascading instability, which suggests a geometrical framework driven by power laws and can be studied using multifractal formalism and multiscale probability density function analysis. Here, we review a rapidly growing body of scientific investigations revealing signatures of cascade instability and their consequences for a perceiving, acting, and thinking organism. We review work related to executive functioning (planning to act), postural control (bodily poise for turning plans into action), and effortful perception (action to gather information in a single modality and action to blend multimodal information). We also review findings on neuronal avalanches in the brain, specifically about neural participation in body-wide cascades. Turing's cascade instability blends the mind, brain, and behavior across space and time scales and provides an alternative to the dominant computer metaphor.
q-bio/0511043
Gasper Tkacik
Noam Slonim, Gurinder Singh Atwal, Gasper Tkacik, William Bialek
Information based clustering
To appear in Proceedings of the National Academy of Sciences USA, 11 pages, 9 figures
null
10.1073/pnas.0507432102
null
q-bio.QM
null
In an age of increasingly large data sets, investigators in many different disciplines have turned to clustering as a tool for data analysis and exploration. Existing clustering methods, however, typically depend on several nontrivial assumptions about the structure of data. Here we reformulate the clustering problem from an information theoretic perspective which avoids many of these assumptions. In particular, our formulation obviates the need for defining a cluster "prototype", does not require an a priori similarity metric, is invariant to changes in the representation of the data, and naturally captures non-linear relations. We apply this approach to different domains and find that it consistently produces clusters that are more coherent than those extracted by existing algorithms. Finally, our approach provides a way of clustering based on collective notions of similarity rather than the traditional pairwise measures.
[ { "created": "Sat, 26 Nov 2005 04:53:47 GMT", "version": "v1" } ]
2009-11-11
[ [ "Slonim", "Noam", "" ], [ "Atwal", "Gurinder Singh", "" ], [ "Tkacik", "Gasper", "" ], [ "Bialek", "William", "" ] ]
In an age of increasingly large data sets, investigators in many different disciplines have turned to clustering as a tool for data analysis and exploration. Existing clustering methods, however, typically depend on several nontrivial assumptions about the structure of data. Here we reformulate the clustering problem from an information theoretic perspective which avoids many of these assumptions. In particular, our formulation obviates the need for defining a cluster "prototype", does not require an a priori similarity metric, is invariant to changes in the representation of the data, and naturally captures non-linear relations. We apply this approach to different domains and find that it consistently produces clusters that are more coherent than those extracted by existing algorithms. Finally, our approach provides a way of clustering based on collective notions of similarity rather than the traditional pairwise measures.
1210.7414
Asaf Gal
Asaf Gal and Shimon Marom
Self-organized criticality in single neuron excitability
5 pages, 2 figures
Phys. Rev. E 88, 062717 (2013)
10.1103/PhysRevE.88.062717
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present experimental and theoretical arguments, at the single neuron level, suggesting that neuronal response fluctuations reflect a process that positions the neuron near a transition point that separates excitable and unexcitable phases. This view is supported by the dynamical properties of the system as observed in experiments on isolated cultured cortical neurons, as well as by a theoretical mapping between the constructs of self organized criticality and membrane excitability biophysics.
[ { "created": "Sun, 28 Oct 2012 07:30:49 GMT", "version": "v1" }, { "created": "Tue, 30 Oct 2012 19:34:24 GMT", "version": "v2" }, { "created": "Wed, 31 Oct 2012 15:21:20 GMT", "version": "v3" }, { "created": "Wed, 27 Feb 2013 11:46:34 GMT", "version": "v4" }, { "created": "Wed, 7 Aug 2013 10:35:08 GMT", "version": "v5" } ]
2013-12-25
[ [ "Gal", "Asaf", "" ], [ "Marom", "Shimon", "" ] ]
We present experimental and theoretical arguments, at the single neuron level, suggesting that neuronal response fluctuations reflect a process that positions the neuron near a transition point that separates excitable and unexcitable phases. This view is supported by the dynamical properties of the system as observed in experiments on isolated cultured cortical neurons, as well as by a theoretical mapping between the constructs of self-organized criticality and membrane excitability biophysics.
2008.10992
Tim Friede
Tobias M\"utze and Tim Friede
Data monitoring committees for clinical trials evaluating treatments of COVID-19
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The first cases of coronavirus disease 2019 (COVID-19) were reported in December 2019 and the outbreak of SARS-CoV-2 was declared a pandemic in March 2020 by the World Health Organization. This sparked a plethora of investigations into diagnostics and vaccination for SARS-CoV-2, as well as treatments for COVID-19. Since COVID-19 is a severe disease associated with a high mortality, clinical trials in this disease should be monitored by a data monitoring committee (DMC), also known as data safety monitoring board (DSMB). DMCs in this indication face a number of challenges including fast recruitment requiring an unusually high frequency of safety reviews, more frequent use of complex designs and virtually no prior experience with the disease. In this paper, we provide a perspective on the work of DMCs for clinical trials of treatments for COVID-19. More specifically, we discuss organizational aspects of setting up and running DMCs for COVID-19 trials, in particular for trials with more complex designs such as platform trials or adaptive designs. Furthermore, statistical aspects of monitoring clinical trials of treatments for COVID-19 are considered. Some recommendations are made regarding the presentation of the data, stopping rules for safety monitoring and the use of external data. The proposed stopping boundaries are assessed in a simulation study motivated by clinical trials in COVID-19.
[ { "created": "Thu, 20 Aug 2020 08:59:34 GMT", "version": "v1" } ]
2020-08-26
[ [ "Mütze", "Tobias", "" ], [ "Friede", "Tim", "" ] ]
The first cases of coronavirus disease 2019 (COVID-19) were reported in December 2019 and the outbreak of SARS-CoV-2 was declared a pandemic in March 2020 by the World Health Organization. This sparked a plethora of investigations into diagnostics and vaccination for SARS-CoV-2, as well as treatments for COVID-19. Since COVID-19 is a severe disease associated with a high mortality, clinical trials in this disease should be monitored by a data monitoring committee (DMC), also known as data safety monitoring board (DSMB). DMCs in this indication face a number of challenges including fast recruitment requiring an unusually high frequency of safety reviews, more frequent use of complex designs and virtually no prior experience with the disease. In this paper, we provide a perspective on the work of DMCs for clinical trials of treatments for COVID-19. More specifically, we discuss organizational aspects of setting up and running DMCs for COVID-19 trials, in particular for trials with more complex designs such as platform trials or adaptive designs. Furthermore, statistical aspects of monitoring clinical trials of treatments for COVID-19 are considered. Some recommendations are made regarding the presentation of the data, stopping rules for safety monitoring and the use of external data. The proposed stopping boundaries are assessed in a simulation study motivated by clinical trials in COVID-19.
0903.0731
David Lusseau
David Lusseau and Larissa Conradt
The emergence of unshared consensus decisions in bottlenose dolphins
17 pages, 3 figures. in press as part of the special issue "Social Networks: new perspectives" of Behavioral Ecology and Sociobiology
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unshared consensus decision-making processes, in which one or a small number of individuals make the decision for the rest of a group, are rarely documented. However, this mechanism can be beneficial for all group members when one individual has greater knowledge about the benefits of the decision than other group members. Such decisions are reached during certain activity shifts within the population of bottlenose dolphins residing in Doubtful Sound, New Zealand. Behavioral signals are performed by one individual and seem to precipitate shifts in the behavior of the entire group: side flops are performed by males and initiate traveling bouts while upside-down lobtails are performed by females and terminate traveling bouts. However, these signals are not observed at all activity shifts. We find that while side flops were performed by males that have greater knowledge than other male group members, this was not the case for females performing upside-down lobtails. The reason for this could have been that a generally high knowledge about the optimal timing of travel terminations rendered it less important which individual female made the decision.
[ { "created": "Wed, 4 Mar 2009 11:11:31 GMT", "version": "v1" } ]
2009-03-05
[ [ "Lusseau", "David", "" ], [ "Conradt", "Larissa", "" ] ]
Unshared consensus decision-making processes, in which one or a small number of individuals make the decision for the rest of a group, are rarely documented. However, this mechanism can be beneficial for all group members when one individual has greater knowledge about the benefits of the decision than other group members. Such decisions are reached during certain activity shifts within the population of bottlenose dolphins residing in Doubtful Sound, New Zealand. Behavioral signals are performed by one individual and seem to precipitate shifts in the behavior of the entire group: side flops are performed by males and initiate traveling bouts while upside-down lobtails are performed by females and terminate traveling bouts. However, these signals are not observed at all activity shifts. We find that while side flops were performed by males that have greater knowledge than other male group members, this was not the case for females performing upside-down lobtails. The reason for this could have been that a generally high knowledge about the optimal timing of travel terminations rendered it less important which individual female made the decision.
1206.4812
Gilles Wainrib
Mathieu Galtier, Gilles Wainrib
A biological gradient descent for prediction through a combination of STDP and homeostatic plasticity
36 pages, 3 figures
null
null
null
q-bio.NC cs.NE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying, formalizing and combining biological mechanisms which implement known brain functions, such as prediction, is a main aspect of current research in theoretical neuroscience. In this letter, the mechanisms of Spike Timing Dependent Plasticity (STDP) and homeostatic plasticity, combined in an original mathematical formalism, are shown to shape recurrent neural networks into predictors. Following a rigorous mathematical treatment, we prove that they implement the online gradient descent of a distance between the network activity and its stimuli. The convergence to an equilibrium, where the network can spontaneously reproduce or predict its stimuli, does not suffer from bifurcation issues usually encountered in learning in recurrent neural networks.
[ { "created": "Thu, 21 Jun 2012 09:13:39 GMT", "version": "v1" }, { "created": "Tue, 11 Jun 2013 16:11:01 GMT", "version": "v2" } ]
2013-06-12
[ [ "Galtier", "Mathieu", "" ], [ "Wainrib", "Gilles", "" ] ]
Identifying, formalizing and combining biological mechanisms which implement known brain functions, such as prediction, is a main aspect of current research in theoretical neuroscience. In this letter, the mechanisms of Spike Timing Dependent Plasticity (STDP) and homeostatic plasticity, combined in an original mathematical formalism, are shown to shape recurrent neural networks into predictors. Following a rigorous mathematical treatment, we prove that they implement the online gradient descent of a distance between the network activity and its stimuli. The convergence to an equilibrium, where the network can spontaneously reproduce or predict its stimuli, does not suffer from bifurcation issues usually encountered in learning in recurrent neural networks.
1005.5292
Luciano Teresi
Paola Nardinocchi, Luciano Teresi, Valerio Varano
Myocardial Contractions and the Ventricular Pressure--Volume Relationship
16 pages, 18 figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a reduced-order heart model with the aim of introducing a novel point of view in the interpretation of the pressure-volume loops. The novelty of the approach is based on the definition of active contraction as opposed to that of active stress. The consequences of the assumption are discussed with reference to a specific pressure-volume loop characteristic of a normal human patient.
[ { "created": "Fri, 28 May 2010 13:48:52 GMT", "version": "v1" } ]
2010-05-31
[ [ "Nardinocchi", "Paola", "" ], [ "Teresi", "Luciano", "" ], [ "Varano", "Valerio", "" ] ]
We present a reduced-order heart model with the aim of introducing a novel point of view in the interpretation of the pressure-volume loops. The novelty of the approach is based on the definition of active contraction as opposed to that of active stress. The consequences of the assumption are discussed with reference to a specific pressure-volume loop characteristic of a normal human patient.
0706.3681
Marcin Molski
Marcin Molski
On the Classification Scheme for Phenomenological Universalities in Growth Problems in Physics and Other Sciences
null
null
null
null
q-bio.OT q-bio.QM
null
Comment on "Classification Scheme for Phenomenological Universalities in Growth Problems in Physics and Other Sciences" by P. Castorina, P. P. Delsanto and C. Guiot, Phys. Rev. Lett. {\bf 96}, 188701 (2006) is presented. It has been proved that the West-like function of growth derived by the authors is incorrect and the approach does not take into account the growth of the biological systems undergoing atrophy or demographic and economic systems undergoing involution or regression. A simple extension of the model, which permits derivation of the so far unknown involuted Gompertz function of growth is proposed.
[ { "created": "Mon, 25 Jun 2007 17:40:10 GMT", "version": "v1" } ]
2007-06-26
[ [ "Molski", "Marcin", "" ] ]
Comment on "Classification Scheme for Phenomenological Universalities in Growth Problems in Physics and Other Sciences" by P. Castorina, P. P. Delsanto and C. Guiot, Phys. Rev. Lett. {\bf 96}, 188701 (2006) is presented. It is proved that the West-like function of growth derived by the authors is incorrect and that the approach does not take into account biological systems undergoing atrophy or demographic and economic systems undergoing involution or regression. A simple extension of the model, which permits derivation of the so far unknown involuted Gompertz function of growth, is proposed.
2111.11753
Flavia Feliciangeli
Hanan Dreiwi, Flavia Feliciangeli, Mario Castro, Grant Lythe, Carmen Molina-Par\'is, Mart\'in L\'opez-Garc\'ia
A stochastic model of cell proliferation and death across a sequence of compartments
22 pages, 8 figures
null
null
null
q-bio.CB math.DS
http://creativecommons.org/licenses/by/4.0/
Cells of the human body have nearly identical genome but exhibit very different phenotypes that allow them to carry out specific functions and react to changes in their surrounding environment. This division of labour is achieved by cellular division and cellular differentiation, events which lead to a population of cells with unique characteristics. In this paper, we model the dynamics of cells over time across a sequence of compartments. Cells within a compartment may represent being at the same spatial location or sharing the same phenotype. In this sequence of compartments, cells can either die, divide or enter an adjacent compartment. We analyse a set of ordinary differential equations to describe the evolution of the average number of cells in each compartment over time. We also focus on the progeny of a founder cell in terms of a stochastic process and analyse several summary statistics to bring insights into the lifespan of a single cell, the number of divisions during its lifespan, and the probability to die in each compartment. Numerical results inspired by cellular immune processes allow us to illustrate the applicability of our techniques.
[ { "created": "Tue, 23 Nov 2021 10:00:59 GMT", "version": "v1" } ]
2021-11-24
[ [ "Dreiwi", "Hanan", "" ], [ "Feliciangeli", "Flavia", "" ], [ "Castro", "Mario", "" ], [ "Lythe", "Grant", "" ], [ "Molina-París", "Carmen", "" ], [ "López-García", "Martín", "" ] ]
Cells of the human body have nearly identical genomes but exhibit very different phenotypes that allow them to carry out specific functions and react to changes in their surrounding environment. This division of labour is achieved by cellular division and cellular differentiation, events which lead to a population of cells with unique characteristics. In this paper, we model the dynamics of cells over time across a sequence of compartments. Cells within a compartment may represent being at the same spatial location or sharing the same phenotype. In this sequence of compartments, cells can either die, divide or enter an adjacent compartment. We analyse a set of ordinary differential equations to describe the evolution of the average number of cells in each compartment over time. We also focus on the progeny of a founder cell in terms of a stochastic process and analyse several summary statistics to bring insights into the lifespan of a single cell, the number of divisions during its lifespan, and the probability to die in each compartment. Numerical results inspired by cellular immune processes allow us to illustrate the applicability of our techniques.
q-bio/0702012
Changbong Hyeon
Changbong Hyeon and Jose N. Onuchic
Internal strain regulates the nucleotide binding site of the kinesin leading head
34 pages, 9 Figures
PNAS (2007) vol 104, 2175-2180
10.1073/pnas.0610939104
null
q-bio.BM physics.bio-ph
null
In the presence of ATP, kinesin proceeds along the protofilament of microtubule by alternated binding of two motor domains on the tubulin binding sites. Since the processivity of kinesin is much higher than other motor proteins, it has been speculated that there exists a mechanism for allosteric regulation between the two monomers. Recent experiments suggest that ATP binding to the leading head domain in kinesin is regulated by the rearward strain built on the neck-linker. We test this hypothesis by explicitly modeling a $C_{\alpha}$-based kinesin structure whose both motor domains are bound on the tubulin binding sites. The equilibrium structures of kinesin on the microtubule show disordered and ordered neck-linker configurations for the leading and the trailing head, respectively. The comparison of the structures between the two heads shows that several native contacts present at the nucleotide binding site in the leading head are less intact than those in the binding site of the rear head. The network of native contacts obtained from this comparison provides the internal tension propagation pathway, which leads to the disruption of the nucleotide binding site in the leading head. Also, using an argument based on polymer theory, we estimate the internal tension built on the neck-linker to be f~(12-15) pN. Both of these conclusions support the experimental hypothesis.
[ { "created": "Wed, 7 Feb 2007 18:55:00 GMT", "version": "v1" } ]
2009-11-13
[ [ "Hyeon", "Changbong", "" ], [ "Onuchic", "Jose N.", "" ] ]
In the presence of ATP, kinesin proceeds along the protofilament of the microtubule by alternating binding of its two motor domains to the tubulin binding sites. Since the processivity of kinesin is much higher than that of other motor proteins, it has been speculated that there exists a mechanism for allosteric regulation between the two monomers. Recent experiments suggest that ATP binding to the leading head domain in kinesin is regulated by the rearward strain built on the neck-linker. We test this hypothesis by explicitly modeling a $C_{\alpha}$-based kinesin structure with both motor domains bound to the tubulin binding sites. The equilibrium structures of kinesin on the microtubule show disordered and ordered neck-linker configurations for the leading and the trailing head, respectively. The comparison of the structures between the two heads shows that several native contacts present at the nucleotide binding site in the leading head are less intact than those in the binding site of the rear head. The network of native contacts obtained from this comparison provides the internal tension propagation pathway, which leads to the disruption of the nucleotide binding site in the leading head. Also, using an argument based on polymer theory, we estimate the internal tension built on the neck-linker to be f~(12-15) pN. Both of these conclusions support the experimental hypothesis.
1809.02506
Mohammad Golbabaee
Arnold Julian Vinoj Benjamin, Pedro A. G\'omez, Mohammad Golbabaee, Tim Sprenger, Marion I. Menzel, Mike E. Davies, Ian Marshall
Balanced multi-shot EPI for accelerated Cartesian MRF: An alternative to spiral MRF
Proceedings of the Joint Annual Meeting ISMRM-ESMRMB 2018 - Paris
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The main purpose of this study is to show that a highly accelerated Cartesian MRF scheme using a multi-shot EPI readout (i.e. multi-shot EPI-MRF) can produce good quality multi-parametric maps such as T1, T2 and proton density (PD) in a sufficiently short scan duration that is similar to conventional MRF. This multi-shot approach allows considerable subsampling while traversing the entire k-space trajectory, can yield better SNR, reduced blurring, less distortion and can also be used to collect higher resolution data compared to existing single-shot EPI-MRF implementations. The generated parametric maps are compared to an accelerated spiral MRF implementation with the same acquisition parameters to evaluate the performance of this method. Additionally, an iterative reconstruction algorithm is applied to improve the accuracy of parametric map estimations and the fast convergence of EPI-MRF is also demonstrated.
[ { "created": "Thu, 6 Sep 2018 12:51:58 GMT", "version": "v1" } ]
2018-09-10
[ [ "Benjamin", "Arnold Julian Vinoj", "" ], [ "Gómez", "Pedro A.", "" ], [ "Golbabaee", "Mohammad", "" ], [ "Sprenger", "Tim", "" ], [ "Menzel", "Marion I.", "" ], [ "Davies", "Mike E.", "" ], [ "Marshall", "Ian", "" ] ]
The main purpose of this study is to show that a highly accelerated Cartesian MRF scheme using a multi-shot EPI readout (i.e. multi-shot EPI-MRF) can produce good quality multi-parametric maps such as T1, T2 and proton density (PD) in a sufficiently short scan duration that is similar to conventional MRF. This multi-shot approach allows considerable subsampling while traversing the entire k-space trajectory, can yield better SNR, reduced blurring, less distortion and can also be used to collect higher resolution data compared to existing single-shot EPI-MRF implementations. The generated parametric maps are compared to an accelerated spiral MRF implementation with the same acquisition parameters to evaluate the performance of this method. Additionally, an iterative reconstruction algorithm is applied to improve the accuracy of parametric map estimations and the fast convergence of EPI-MRF is also demonstrated.
2308.10645
Gerald Cooray PhD
Gerald K. Cooray, Vernon Cooray and Karl Friston
Canonical Cortical Field Theories
19 pages, 1 figure
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
We characterise the dynamics of neuronal activity, in terms of field theory, using neural units placed on a 2D-lattice modelling the cortical surface. The electrical activity of neuronal units was analysed with the aim of deriving a neural field model with a simple functional form that is still able to predict or reproduce empirical findings. Each neural unit was modelled using a neural mass and the accompanying field theory was derived in the continuum limit. The field theory comprised coupled (real) Klein-Gordon fields, where predictions of the model fall within the range of experimental findings. These predictions included the frequency spectrum of electric activity measured from the cortex, which was derived using an equipartition of energy over eigenfunctions of the neural fields. Moreover, the neural field model was invariant, within a set of parameters, to the dynamical system used to model each neuronal mass. Specifically, topologically equivalent dynamical systems resulted in the same neural field model when connected in a lattice; indicating that the fields derived could be read as a canonical cortical field theory. We specifically investigated non-dispersive fields that provide a structure for the coding (or representation) of afferent information. Further elaboration of the ensuing neural field theory, including the effect of dispersive forces, could be of importance in the understanding of the cortical processing of information.
[ { "created": "Mon, 21 Aug 2023 11:34:05 GMT", "version": "v1" } ]
2023-08-22
[ [ "Cooray", "Gerald K.", "" ], [ "Cooray", "Vernon", "" ], [ "Friston", "Karl", "" ] ]
We characterise the dynamics of neuronal activity, in terms of field theory, using neural units placed on a 2D-lattice modelling the cortical surface. The electrical activity of neuronal units was analysed with the aim of deriving a neural field model with a simple functional form that is still able to predict or reproduce empirical findings. Each neural unit was modelled using a neural mass and the accompanying field theory was derived in the continuum limit. The field theory comprised coupled (real) Klein-Gordon fields, where predictions of the model fall within the range of experimental findings. These predictions included the frequency spectrum of electric activity measured from the cortex, which was derived using an equipartition of energy over eigenfunctions of the neural fields. Moreover, the neural field model was invariant, within a set of parameters, to the dynamical system used to model each neuronal mass. Specifically, topologically equivalent dynamical systems resulted in the same neural field model when connected in a lattice; indicating that the fields derived could be read as a canonical cortical field theory. We specifically investigated non-dispersive fields that provide a structure for the coding (or representation) of afferent information. Further elaboration of the ensuing neural field theory, including the effect of dispersive forces, could be of importance in the understanding of the cortical processing of information.
1305.5103
Miloje Rakocevic M.
Miloje M. Rakocevic
Harmonic mean as a determinant of the genetic code
20 pages, 14 tables, 5 boxes, 12 footnotes, 4 appendices with 8 tables and 3 surveys; in Conclusion: the hypothesis about extraterrestrial life
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is shown that there is a sense in splitting the Genetic Code Table (GCT) into three parts using the harmonic mean, calculated by the formula H(a, b) = 2ab / (a + b), where a = 63 and b = 31.5. Within these three parts, the amino acids (AAs) are positioned on the basis of the validity of the evident regularities of key parameters, such as polarity, hydrophobicity and enzyme-mediated amino acid classification. In addition, there are obvious balances of the number of atoms in the nucleotide triplets and corresponding amino acid groups and/or classes.
[ { "created": "Tue, 21 May 2013 18:04:17 GMT", "version": "v1" }, { "created": "Sat, 1 Jun 2013 18:51:31 GMT", "version": "v2" }, { "created": "Sat, 8 Jun 2013 08:44:04 GMT", "version": "v3" }, { "created": "Wed, 12 Jun 2013 10:36:23 GMT", "version": "v4" } ]
2013-06-13
[ [ "Rakocevic", "Miloje M.", "" ] ]
It is shown that there is a sense in splitting the Genetic Code Table (GCT) into three parts using the harmonic mean, calculated by the formula H(a, b) = 2ab / (a + b), where a = 63 and b = 31.5. Within these three parts, the amino acids (AAs) are positioned on the basis of the validity of the evident regularities of key parameters, such as polarity, hydrophobicity and enzyme-mediated amino acid classification. In addition, there are obvious balances of the number of atoms in the nucleotide triplets and corresponding amino acid groups and/or classes.
1708.04168
Karlis Kanders
Karlis Kanders, Tom Lorimer, Yoko Uwate, Willi-Hans Steeb and Ruedi Stoop
Robust transformations of firing patterns for neural networks
null
null
null
null
q-bio.NC nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As a promising computational paradigm, the occurrence of critical states in artificial and biological neural networks has attracted widespread attention. An often-made explicit or implicit assumption is that one single critical state is responsible for two separate notions of criticality (avalanche criticality and dynamical edge of chaos criticality). Previously, we provided an isolated counter-example for co-occurrence. Here, we reveal a persistent paradigm of structural transitions that such networks undergo, as the overall connectivity strength is varied over its biologically meaningful range. Among these transitions, only one avalanche critical point emerges, with edge of chaos failing to co-occur. Our observations are based on ensembles of networks obtained from variations of network configuration and their neurons. This suggests that not only non-coincidence of criticality, but also the persistent paradigm of network structural changes in function of the overall connectivity strength, could be generic features of a large class of biological neural networks.
[ { "created": "Mon, 14 Aug 2017 15:11:04 GMT", "version": "v1" } ]
2017-08-15
[ [ "Kanders", "Karlis", "" ], [ "Lorimer", "Tom", "" ], [ "Uwate", "Yoko", "" ], [ "Steeb", "Willi-Hans", "" ], [ "Stoop", "Ruedi", "" ] ]
As a promising computational paradigm, the occurrence of critical states in artificial and biological neural networks has attracted widespread attention. An often-made explicit or implicit assumption is that one single critical state is responsible for two separate notions of criticality (avalanche criticality and dynamical edge of chaos criticality). Previously, we provided an isolated counter-example for co-occurrence. Here, we reveal a persistent paradigm of structural transitions that such networks undergo, as the overall connectivity strength is varied over its biologically meaningful range. Among these transitions, only one avalanche critical point emerges, with edge of chaos failing to co-occur. Our observations are based on ensembles of networks obtained from variations of network configuration and their neurons. This suggests that not only non-coincidence of criticality, but also the persistent paradigm of network structural changes in function of the overall connectivity strength, could be generic features of a large class of biological neural networks.
2201.09654
Hal Sorbonne Universite Gestionnaire
F\'abio Carneiro (ICM), Dario Saracino (ARAMIS), Vincent Huin (LilNCog (ex-JPARC)), Fabienne Clot, C\'ecile Delorme, Aur\'elie M\'eneret (ICM), St\'ephane Thobois (CNC), Florence Cormier, Jean Christophe Corvol (ICM), Timoth\'ee Lenglet, Marie Vidailhet (ICM), Marie-Odile Habert (LIB), Audrey Gabelle (PSNREC), \'Emilie Beaufils (UT), Karl Mondon (UT), M\'elissa Tir, Daniela Andriuta, Alexis Brice (ICM), Vincent Deramecourt, Isabelle Le Ber (ICM)
Isolated parkinsonism is an atypical presentation of GRN and C9orf72 gene mutations
null
Parkinsonism and Related Disorders, Elsevier, 2020, 80, pp.73-81
10.1016/j.parkreldis.2020.09.019
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Introduction: A phenotype of isolated parkinsonism mimicking Idiopathic Parkinson's Disease (IPD) is a rare clinical presentation of GRN and C9orf72 mutations, the major genetic causes of frontotemporal dementia (FTD). It remains controversial whether this association is fortuitous or not, and which clinical clues could reliably suggest a genetic FTD etiology in IPD patients. This study aims to describe the clinical characteristics of FTD mutation carriers presenting with IPD phenotype, provide neuropathological evidence of the mutation's causality, and specifically address their "red flags" according to current IPD criteria. Methods: Seven GRN and C9orf72 carriers with isolated parkinsonism at onset, and three patients from the literature were included in this study. To allow better delineation of their phenotype, the presence of supportive, exclusion and "red flag" features from MDS criteria were analyzed for each case. Results: Amongst the ten patients (5 GRN, 5 C9orf72), seven fulfilled probable IPD criteria throughout the disease course, while behavioral/language or motoneuron dysfunctions occurred later in three. Disease duration was longer and dopa-responsiveness was more sustained in C9orf72 than in GRN carriers. Subtle motor features, cognitive/behavioral changes, and family history of dementia/ALS were suggestive clues for a genetic diagnosis. Importantly, neuropathological examination in one patient revealed typical TDP-43-inclusions without alpha-synucleinopathy, thus demonstrating the causal link between FTD mutations, TDP-43-pathology and PD phenotype. Conclusion: We showed that, altogether, family history of early-onset dementia/ALS, the presence of cognitive/behavioral dysfunction and subtle motor characteristics are atypical features frequently present in the parkinsonian presentations of GRN and C9orf72 mutations.
[ { "created": "Mon, 24 Jan 2022 13:18:40 GMT", "version": "v1" } ]
2022-01-25
[ [ "Carneiro", "Fábio", "", "ICM" ], [ "Saracino", "Dario", "", "ARAMIS" ], [ "Huin", "Vincent", "", "LilNCog" ], [ "Clot", "Fabienne", "", "ICM" ], [ "Delorme", "Cécile", "", "ICM" ], [ "Méneret", "Aurélie", "", "ICM" ], [ "Thobois", "Stéphane", "", "CNC" ], [ "Cormier", "Florence", "", "ICM" ], [ "Corvol", "Jean Christophe", "", "ICM" ], [ "Lenglet", "Timothée", "", "ICM" ], [ "Vidailhet", "Marie", "", "ICM" ], [ "Habert", "Marie-Odile", "", "LIB" ], [ "Gabelle", "Audrey", "", "PSNREC" ], [ "Beaufils", "Émilie", "", "UT" ], [ "Mondon", "Karl", "", "UT" ], [ "Tir", "Mélissa", "", "ICM" ], [ "Andriuta", "Daniela", "", "ICM" ], [ "Brice", "Alexis", "", "ICM" ], [ "Deramecourt", "Vincent", "", "ICM" ], [ "Ber", "Isabelle Le", "", "ICM" ] ]
Introduction: A phenotype of isolated parkinsonism mimicking Idiopathic Parkinson's Disease (IPD) is a rare clinical presentation of GRN and C9orf72 mutations, the major genetic causes of frontotemporal dementia (FTD). It remains controversial whether this association is fortuitous or not, and which clinical clues could reliably suggest a genetic FTD etiology in IPD patients. This study aims to describe the clinical characteristics of FTD mutation carriers presenting with IPD phenotype, provide neuropathological evidence of the mutation's causality, and specifically address their "red flags" according to current IPD criteria. Methods: Seven GRN and C9orf72 carriers with isolated parkinsonism at onset, and three patients from the literature were included in this study. To allow better delineation of their phenotype, the presence of supportive, exclusion and "red flag" features from MDS criteria were analyzed for each case. Results: Amongst the ten patients (5 GRN, 5 C9orf72), seven fulfilled probable IPD criteria throughout the disease course, while behavioral/language or motoneuron dysfunctions occurred later in three. Disease duration was longer and dopa-responsiveness was more sustained in C9orf72 than in GRN carriers. Subtle motor features, cognitive/behavioral changes, and family history of dementia/ALS were suggestive clues for a genetic diagnosis. Importantly, neuropathological examination in one patient revealed typical TDP-43-inclusions without alpha-synucleinopathy, thus demonstrating the causal link between FTD mutations, TDP-43-pathology and PD phenotype. Conclusion: We showed that, altogether, family history of early-onset dementia/ALS, the presence of cognitive/behavioral dysfunction and subtle motor characteristics are atypical features frequently present in the parkinsonian presentations of GRN and C9orf72 mutations.
1512.00810
Carsten Allefeld
Carsten Allefeld, Kai G\"orgen, John-Dylan Haynes
Valid population inference for information-based imaging: From the second-level $t$-test to prevalence inference
manuscript accepted by NeuroImage, plus minor fixes and a note added after publication
NeuroImage 141: 378-392, 2016
10.1016/j.neuroimage.2016.07.040
null
q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In multivariate pattern analysis of neuroimaging data, 'second-level' inference is often performed by entering classification accuracies into a $t$-test vs chance level across subjects. We argue that while the random-effects analysis implemented by the $t$-test does provide population inference if applied to activation differences, it fails to do so in the case of classification accuracy or other 'information-like' measures, because the true value of such measures can never be below chance level. This constraint changes the meaning of the population-level null hypothesis being tested, which becomes equivalent to the global null hypothesis that there is no effect in any subject in the population. Consequently, rejecting it only allows to infer that there are some subjects in which there is an information effect, but not that it generalizes, rendering it effectively equivalent to fixed-effects analysis. This statement is supported by theoretical arguments as well as simulations. We review possible alternative approaches to population inference for information-based imaging, converging on the idea that it should not target the mean, but the prevalence of the effect in the population. One method to do so, 'permutation-based information prevalence inference using the minimum statistic', is described in detail and applied to empirical data.
[ { "created": "Wed, 2 Dec 2015 18:59:46 GMT", "version": "v1" }, { "created": "Wed, 1 Jun 2016 19:46:45 GMT", "version": "v2" }, { "created": "Wed, 10 Aug 2016 17:00:26 GMT", "version": "v3" } ]
2016-08-11
[ [ "Allefeld", "Carsten", "" ], [ "Görgen", "Kai", "" ], [ "Haynes", "John-Dylan", "" ] ]
In multivariate pattern analysis of neuroimaging data, 'second-level' inference is often performed by entering classification accuracies into a $t$-test vs chance level across subjects. We argue that while the random-effects analysis implemented by the $t$-test does provide population inference if applied to activation differences, it fails to do so in the case of classification accuracy or other 'information-like' measures, because the true value of such measures can never be below chance level. This constraint changes the meaning of the population-level null hypothesis being tested, which becomes equivalent to the global null hypothesis that there is no effect in any subject in the population. Consequently, rejecting it only allows to infer that there are some subjects in which there is an information effect, but not that it generalizes, rendering it effectively equivalent to fixed-effects analysis. This statement is supported by theoretical arguments as well as simulations. We review possible alternative approaches to population inference for information-based imaging, converging on the idea that it should not target the mean, but the prevalence of the effect in the population. One method to do so, 'permutation-based information prevalence inference using the minimum statistic', is described in detail and applied to empirical data.
0711.2058
Igor M. Suslov
I. M. Suslov (P.L.Kapitza Institute for Physical Problems, Moscow, Russia)
Computer Model of a "Sense of Humour". I. General Algorithm
10 pages, 3 figures included; continuation of this series to appear
Biofizika SSSR 37, 318 (1992) [Biophysics 37, 242 (1992)]
null
null
q-bio.NC cs.AI
null
A computer model of a "sense of humour" is proposed. The humorous effect is interpreted as a specific malfunction in the course of information processing due to the need for the rapid deletion of the false version transmitted into consciousness. The biological function of a sense of humour consists in speeding up the bringing of information into consciousness and in fuller use of the resources of the brain.
[ { "created": "Tue, 13 Nov 2007 19:00:32 GMT", "version": "v1" } ]
2007-11-27
[ [ "Suslov", "I. M.", "", "P.L.Kapitza Institute for Physical Problems, Moscow,\n Russia" ] ]
A computer model of a "sense of humour" is proposed. The humorous effect is interpreted as a specific malfunction in the course of information processing due to the need for the rapid deletion of the false version transmitted into consciousness. The biological function of a sense of humour consists in speeding up the bringing of information into consciousness and in fuller use of the resources of the brain.
0708.3869
Guy Katriel
Guy Katriel
Existence of periodic solutions for enzyme-catalysed reactions with periodic substrate input
null
Discrete & Continuous Dynamical Systems - Supplements, September 2007
null
null
q-bio.BM nlin.AO
null
Considering a basic enzyme-catalysed reaction, in which the rate of input of the substrate varies periodically in time, we give a necessary and sufficient condition for the existence of a periodic solution of the reaction equations. The proof employs the Leray-Schauder degree, applied to an appropriately constructed homotopy.
[ { "created": "Tue, 28 Aug 2007 23:18:05 GMT", "version": "v1" } ]
2011-11-10
[ [ "Katriel", "Guy", "" ] ]
Considering a basic enzyme-catalysed reaction, in which the rate of input of the substrate varies periodically in time, we give a necessary and sufficient condition for the existence of a periodic solution of the reaction equations. The proof employs the Leray-Schauder degree, applied to an appropriately constructed homotopy.
2008.08073
Jerome Feldman
Jerome A. Feldman (ICSI and UC Berkeley)
On the Evolution of Subjective Experience
49 pages 5 figures. This 7/22/2021 version preserves all the content of the previous version and adds additional discussion (in italics). It also includes several new references to connect with current literature. A companion arXiv article has also been updated
null
null
null
q-bio.NC cs.NE q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Subjective Experience (SE) is part of the ancient mind-body problem, which continues to be one of the deepest mysteries of science. Despite major advances in many fields, there is still no plausible causal link between SE and its realization in the body. The core issue is the incompatibility of objective (3rd person) public science with subjective (1st person) private experience. Any scientific approach to SE assumes that it arose from extended evolutionary processes and that examining evolutionary history should help us understand it. While the core mystery remains, converging evidence from theoretical, experimental, and computational studies yields strong constraints on SE and some suggestions for further research. All animals confront many of the same fitness challenges. They all need some kind of internal model to relate their life goals and actionable sensed information to action. We understand the evolution of the bodily aspects of human perception and emotion, but not the SE. The first evolutionary evidence for SE appears in vertebrates and much of its neural substrate and simulation mechanism is preserved in mammals and humans. People exhibit the same phenomena, but there are remaining mysteries of everyday experience that are demonstrably incompatible with current neuroscience. In spite of this limitation, there is considerable progress on understanding the role of SE in the success of prostheses.
[ { "created": "Tue, 18 Aug 2020 17:54:39 GMT", "version": "v1" }, { "created": "Fri, 23 Jul 2021 17:41:53 GMT", "version": "v2" }, { "created": "Fri, 25 Mar 2022 18:16:35 GMT", "version": "v3" } ]
2022-03-29
[ [ "Feldman", "Jerome A.", "", "ICSI and UC Berkeley" ] ]
Subjective Experience (SE) is part of the ancient mind-body problem, which continues to be one of the deepest mysteries of science. Despite major advances in many fields, there is still no plausible causal link between SE and its realization in the body. The core issue is the incompatibility of objective (3rd person) public science with subjective (1st person) private experience. Any scientific approach to SE assumes that it arose from extended evolutionary processes and that examining evolutionary history should help us understand it. While the core mystery remains, converging evidence from theoretical, experimental, and computational studies yields strong constraints on SE and some suggestions for further research. All animals confront many of the same fitness challenges. They all need some kind of internal model to relate their life goals and actionable sensed information to action. We understand the evolution of the bodily aspects of human perception and emotion, but not the SE. The first evolutionary evidence for SE appears in vertebrates and much of its neural substrate and simulation mechanism is preserved in mammals and humans. People exhibit the same phenomena, but there are remaining mysteries of everyday experience that are demonstrably incompatible with current neuroscience. In spite of this limitation, there is considerable progress on understanding the role of SE in the success of prostheses.
1302.4111
Thomas R. Weikl
Thomas R. Weikl and Bahram Hemmateenejad
How conformational changes can affect catalysis, inhibition and drug resistance of enzymes with induced-fit binding mechanism such as the HIV-1 protease
9 pages, 2 figures, 3 tables; to appear in "BBA Proteins and Proteomics" as part of a special issue with the title "The emerging dynamic view of proteins: Protein plasticity in allostery, evolution and self-assembly."
null
10.1016/j.bbapap.2013.01.027
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A central question is how the conformational changes of proteins affect their function and the inhibition of this function by drug molecules. Many enzymes change from an open to a closed conformation upon binding of substrate or inhibitor molecules. These conformational changes have been suggested to follow an induced-fit mechanism in which the molecules first bind in the open conformation in those cases where binding in the closed conformation appears to be sterically obstructed such as for the HIV-1 protease. In this article, we present a general model for the catalysis and inhibition of enzymes with induced-fit binding mechanism. We derive general expressions that specify how the overall catalytic rate of the enzymes depends on the rates for binding, for the conformational changes, and for the chemical reaction. Based on these expressions, we analyze the effect of mutations that mainly shift the conformational equilibrium on catalysis and inhibition. If the overall catalytic rate is limited by product unbinding, we find that mutations that destabilize the closed conformation relative to the open conformation increase the catalytic rate in the presence of inhibitors by a factor exp(ddG/RT) where ddG is the mutation-induced shift of the free-energy difference between the conformations. This increase in the catalytic rate due to changes in the conformational equilibrium is independent of the inhibitor molecule and, thus, may help to understand how non-active-site mutations can contribute to the multi-drug-resistance that has been observed for the HIV-1 protease. A comparison to experimental data for the non-active-site mutation L90M of the HIV-1 protease indicates that the mutation slightly destabilizes the closed conformation of the enzyme.
[ { "created": "Sun, 17 Feb 2013 19:41:07 GMT", "version": "v1" } ]
2013-02-19
[ [ "Weikl", "Thomas R.", "" ], [ "Hemmateenejad", "Bahram", "" ] ]
A central question is how the conformational changes of proteins affect their function and the inhibition of this function by drug molecules. Many enzymes change from an open to a closed conformation upon binding of substrate or inhibitor molecules. These conformational changes have been suggested to follow an induced-fit mechanism in which the molecules first bind in the open conformation in those cases where binding in the closed conformation appears to be sterically obstructed such as for the HIV-1 protease. In this article, we present a general model for the catalysis and inhibition of enzymes with induced-fit binding mechanism. We derive general expressions that specify how the overall catalytic rate of the enzymes depends on the rates for binding, for the conformational changes, and for the chemical reaction. Based on these expressions, we analyze the effect of mutations that mainly shift the conformational equilibrium on catalysis and inhibition. If the overall catalytic rate is limited by product unbinding, we find that mutations that destabilize the closed conformation relative to the open conformation increase the catalytic rate in the presence of inhibitors by a factor exp(ddG/RT) where ddG is the mutation-induced shift of the free-energy difference between the conformations. This increase in the catalytic rate due to changes in the conformational equilibrium is independent of the inhibitor molecule and, thus, may help to understand how non-active-site mutations can contribute to the multi-drug-resistance that has been observed for the HIV-1 protease. A comparison to experimental data for the non-active-site mutation L90M of the HIV-1 protease indicates that the mutation slightly destabilizes the closed conformation of the enzyme.
0711.4724
Yannick Brohard
Laurence Gaume (AMAP), Yo\"el Forterre (IUSTI)
A viscoelastic deadly fluid in carnivorous pitcher plants
null
PLoS ONE 2, 11 (2007) on-line
10.1063/1.2964772
A-07-32
q-bio.PE
null
Background: The carnivorous plants of the genus Nepenthes, widely distributed in the Asian tropics, rely mostly on nutrients derived from arthropods trapped in their pitcher-shaped leaves and digested by their enzymatic fluid. The genus exhibits a great diversity of prey and pitcher forms and its mechanism of trapping has long intrigued scientists. The slippery inner surfaces of the pitchers, which can be waxy or highly wettable, have so far been considered as the key trapping devices. However, the occurrence of species lacking such epidermal specializations but still effective at trapping insects suggests the possible implication of other mechanisms. Methodology/Principal Findings: Using a combination of insect bioassays, high-speed video and rheological measurements, we show that the digestive fluid of Nepenthes rafflesiana is highly viscoelastic and that this physical property is crucial for the retention of insects in its traps. Trapping efficiency is shown to remain strong even when the fluid is highly diluted by water, as long as the elastic relaxation time of the fluid is higher than the typical time scale of insect movements. Conclusions/Significance: This finding challenges the common classification of Nepenthes pitchers as simple passive traps and is of great adaptive significance for these tropical plants, which are often subjected to heavy rainfall and variations in fluid concentration. The viscoelastic trap constitutes a cryptic but potentially widespread adaptation of Nepenthes species and could be a homologous trait shared through common ancestry with the sundew (Drosera) flypaper plants. Such large production of a highly viscoelastic biopolymer fluid in permanent pools is nevertheless unique in the plant kingdom and suggests novel applications for pest control.
[ { "created": "Thu, 29 Nov 2007 14:07:15 GMT", "version": "v1" } ]
2009-11-13
[ [ "Gaume", "Laurence", "", "AMAP" ], [ "Forterre", "Yoël", "", "IUSTI" ] ]
Background: The carnivorous plants of the genus Nepenthes, widely distributed in the Asian tropics, rely mostly on nutrients derived from arthropods trapped in their pitcher-shaped leaves and digested by their enzymatic fluid. The genus exhibits a great diversity of prey and pitcher forms and its mechanism of trapping has long intrigued scientists. The slippery inner surfaces of the pitchers, which can be waxy or highly wettable, have so far been considered as the key trapping devices. However, the occurrence of species lacking such epidermal specializations but still effective at trapping insects suggests the possible implication of other mechanisms. Methodology/Principal Findings: Using a combination of insect bioassays, high-speed video and rheological measurements, we show that the digestive fluid of Nepenthes rafflesiana is highly viscoelastic and that this physical property is crucial for the retention of insects in its traps. Trapping efficiency is shown to remain strong even when the fluid is highly diluted by water, as long as the elastic relaxation time of the fluid is higher than the typical time scale of insect movements. Conclusions/Significance: This finding challenges the common classification of Nepenthes pitchers as simple passive traps and is of great adaptive significance for these tropical plants, which are often subjected to heavy rainfall and variations in fluid concentration. The viscoelastic trap constitutes a cryptic but potentially widespread adaptation of Nepenthes species and could be a homologous trait shared through common ancestry with the sundew (Drosera) flypaper plants. Such large production of a highly viscoelastic biopolymer fluid in permanent pools is nevertheless unique in the plant kingdom and suggests novel applications for pest control.
1710.11569
Adam Noel
Adam Noel, Shayan Monabbati, Dimitrios Makrakis, Andrew W. Eckford
Timing Control of Single Neuron Spikes with Optogenetic Stimulation
6 pages, 8 figures, 3 tables. To be presented at the 2018 IEEE International Conference on Communications (IEEE ICC 2018) in May 2018
null
10.1109/ICC.2018.8422667
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper predicts the ability to externally control the firing times of a cortical neuron whose behavior follows the Izhikevich neuron model. The Izhikevich neuron model provides an efficient and biologically plausible method to track a cortical neuron's membrane potential and its firing times. The external control is a simple optogenetic model represented by a constant current source that can be turned on or off. This paper considers a firing frequency that is sufficiently low for the membrane potential to return to its resting potential after it fires. The time required for the neuron to charge and for the neuron to recover to the resting potential are fitted to functions of the Izhikevich neuron model parameters. Results show that linear functions of the model parameters can be used to predict the charging times with some accuracy and are sufficient to estimate the highest firing frequency achievable without interspike interference.
[ { "created": "Tue, 31 Oct 2017 16:33:45 GMT", "version": "v1" }, { "created": "Tue, 13 Feb 2018 08:29:49 GMT", "version": "v2" } ]
2018-09-06
[ [ "Noel", "Adam", "" ], [ "Monabbati", "Shayan", "" ], [ "Makrakis", "Dimitrios", "" ], [ "Eckford", "Andrew W.", "" ] ]
This paper predicts the ability to externally control the firing times of a cortical neuron whose behavior follows the Izhikevich neuron model. The Izhikevich neuron model provides an efficient and biologically plausible method to track a cortical neuron's membrane potential and its firing times. The external control is a simple optogenetic model represented by a constant current source that can be turned on or off. This paper considers a firing frequency that is sufficiently low for the membrane potential to return to its resting potential after it fires. The time required for the neuron to charge and for the neuron to recover to the resting potential are fitted to functions of the Izhikevich neuron model parameters. Results show that linear functions of the model parameters can be used to predict the charging times with some accuracy and are sufficient to estimate the highest firing frequency achievable without interspike interference.
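The abstract above is built on the standard Izhikevich neuron model. As a hedged illustration (the parameter values below are the textbook "regular spiking" settings from Izhikevich 2003, not values taken from this paper, and the constant current stands in for the optogenetic stimulus), a minimal simulation of spike times under constant stimulation might look like:

```python
def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T=1000.0, dt=0.5):
    """Simulate an Izhikevich neuron driven by a constant current I.

    Returns the list of spike times (ms). Parameters a-d are the
    standard 'regular spiking' cortical values (illustrative only).
    """
    v, u = c, b * c          # start at the resting state
    spikes, t = [], 0.0
    while t < T:
        # two half-steps for v improve numerical stability
        v += 0.5 * dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        v += 0.5 * dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike detected: record time and reset
            spikes.append(t)
            v, u = c, u + d
        t += dt
    return spikes

spike_times = izhikevich()
print(len(spike_times), "spikes in 1 s of simulated time")
```

Switching I between 0 and a positive value at chosen times would mimic the on/off optogenetic control the paper analyzes.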
1503.05140
Ehsaneddin Asgari
Ehsaneddin Asgari and Mohammad R.K. Mofrad
ProtVec: A Continuous Distributed Representation of Biological Sequences
null
PLoS ONE 10(11): e0141287, 2015
10.1371/journal.pone.0141287
null
q-bio.QM cs.AI cs.LG q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new representation and feature extraction method for biological sequences. Named bio-vectors (BioVec) to refer to biological sequences in general with protein-vectors (ProtVec) for proteins (amino-acid sequences) and gene-vectors (GeneVec) for gene sequences, this representation can be widely used in applications of deep learning in proteomics and genomics. In the present paper, we focus on protein-vectors that can be utilized in a wide array of bioinformatics investigations such as family classification, protein visualization, structure prediction, disordered protein identification, and protein-protein interaction prediction. In this method, we adopt artificial neural network approaches and represent a protein sequence with a single dense n-dimensional vector. To evaluate this method, we apply it in classification of 324,018 protein sequences obtained from Swiss-Prot belonging to 7,027 protein families, where an average family classification accuracy of 93% ± 0.06% is obtained, outperforming existing family classification methods. In addition, we use ProtVec representation to predict disordered proteins from structured proteins. Two databases of disordered sequences are used: the DisProt database as well as a database featuring the disordered regions of nucleoporins rich with phenylalanine-glycine repeats (FG-Nups). Using support vector machine classifiers, FG-Nup sequences are distinguished from structured protein sequences found in Protein Data Bank (PDB) with a 99.8% accuracy, and unstructured DisProt sequences are differentiated from structured DisProt sequences with 100.0% accuracy. These results indicate that by only providing sequence data for various proteins into this model, accurate information about protein structure can be determined.
[ { "created": "Tue, 17 Mar 2015 17:55:22 GMT", "version": "v1" }, { "created": "Thu, 26 May 2016 20:17:51 GMT", "version": "v2" } ]
2016-05-30
[ [ "Asgari", "Ehsaneddin", "" ], [ "Mofrad", "Mohammad R. K.", "" ] ]
We introduce a new representation and feature extraction method for biological sequences. Named bio-vectors (BioVec) to refer to biological sequences in general with protein-vectors (ProtVec) for proteins (amino-acid sequences) and gene-vectors (GeneVec) for gene sequences, this representation can be widely used in applications of deep learning in proteomics and genomics. In the present paper, we focus on protein-vectors that can be utilized in a wide array of bioinformatics investigations such as family classification, protein visualization, structure prediction, disordered protein identification, and protein-protein interaction prediction. In this method, we adopt artificial neural network approaches and represent a protein sequence with a single dense n-dimensional vector. To evaluate this method, we apply it in classification of 324,018 protein sequences obtained from Swiss-Prot belonging to 7,027 protein families, where an average family classification accuracy of 93% ± 0.06% is obtained, outperforming existing family classification methods. In addition, we use ProtVec representation to predict disordered proteins from structured proteins. Two databases of disordered sequences are used: the DisProt database as well as a database featuring the disordered regions of nucleoporins rich with phenylalanine-glycine repeats (FG-Nups). Using support vector machine classifiers, FG-Nup sequences are distinguished from structured protein sequences found in Protein Data Bank (PDB) with a 99.8% accuracy, and unstructured DisProt sequences are differentiated from structured DisProt sequences with 100.0% accuracy. These results indicate that by only providing sequence data for various proteins into this model, accurate information about protein structure can be determined.
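The ProtVec pipeline described above starts by splitting each protein sequence into three shifted lists of non-overlapping 3-grams, which then serve as "sentences" for embedding training. A minimal sketch of that preprocessing step follows; the embedding training itself is omitted, and the example sequence is made up for illustration:

```python
def protein_to_ngram_sentences(seq, n=3):
    """Split a protein sequence into n lists of non-overlapping n-grams,
    one per reading frame, as in the ProtVec preprocessing step."""
    sentences = []
    for offset in range(n):
        shifted = seq[offset:]
        # step by n so the n-grams within one frame do not overlap
        words = [shifted[i:i + n] for i in range(0, len(shifted) - n + 1, n)]
        sentences.append(words)
    return sentences

sentences = protein_to_ngram_sentences("MKTAYIAKQR")
print(sentences[0])  # ['MKT', 'AYI', 'AKQ']
```

Each list of 3-grams would then be fed to a word-embedding model (e.g. skip-gram) to produce the dense vectors the abstract refers to.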
2207.12914
Miguel Aguilera
Miguel Aguilera, \'Angel Poc-L\'opez, Conor Heins, Christopher L. Buckley
Knitting a Markov blanket is hard when you are out-of-equilibrium: two examples in canonical nonequilibrium models
null
null
null
null
q-bio.NC cond-mat.dis-nn nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian theories of biological and brain function speculate that Markov blankets (a conditional independence structure separating a system from external states) play a key role in facilitating inference-like behaviour in living systems. Although it has been suggested that Markov blankets are commonplace in sparsely connected, nonequilibrium complex systems, this has not been studied in detail. Here, we show in two different examples (a pair of coupled Lorenz systems and a nonequilibrium Ising model) that sparse connectivity does not guarantee Markov blankets in the steady-state density of nonequilibrium systems. Moreover, in the nonequilibrium Ising model explored, the system's distance from equilibrium appears to be correlated with how far it is from displaying a Markov blanket. These results suggest that further assumptions might be needed in order to assume the presence of Markov blankets in the kinds of nonequilibrium processes describing the activity of living systems.
[ { "created": "Tue, 26 Jul 2022 14:06:37 GMT", "version": "v1" } ]
2022-07-27
[ [ "Aguilera", "Miguel", "" ], [ "Poc-López", "Ángel", "" ], [ "Heins", "Conor", "" ], [ "Buckley", "Christopher L.", "" ] ]
Bayesian theories of biological and brain function speculate that Markov blankets (a conditional independence structure separating a system from external states) play a key role in facilitating inference-like behaviour in living systems. Although it has been suggested that Markov blankets are commonplace in sparsely connected, nonequilibrium complex systems, this has not been studied in detail. Here, we show in two different examples (a pair of coupled Lorenz systems and a nonequilibrium Ising model) that sparse connectivity does not guarantee Markov blankets in the steady-state density of nonequilibrium systems. Moreover, in the nonequilibrium Ising model explored, the system's distance from equilibrium appears to be correlated with how far it is from displaying a Markov blanket. These results suggest that further assumptions might be needed in order to assume the presence of Markov blankets in the kinds of nonequilibrium processes describing the activity of living systems.
q-bio/0605007
Enrico Carlon
T. Heim, J. Klein Wolterink, E. Carlon, G. T. Barkema
Effective affinities in microarray data
8 pages, 6 figures
J. Phys.: Condens. Matter 18, S525(2006)
10.1088/0953-8984/18/18/S03
null
q-bio.BM cond-mat.soft physics.chem-ph
null
In the past couple of years several studies have shown that hybridization in Affymetrix DNA microarrays can be rather well understood on the basis of simple models of physical chemistry. In the majority of the cases a Langmuir isotherm was used to fit experimental data. Although there is a general consensus about this approach, some discrepancies between different studies are evident. For instance, some authors have fitted the hybridization affinities from the microarray fluorescent intensities, while others used affinities obtained from melting experiments in solution. The former approach yields fitted affinities that at first sight are only partially consistent with solution values. In this paper we show that this discrepancy exists only superficially: a sufficiently complete model provides effective affinities which are fully consistent with those fitted to experimental data. This link provides new insight on the relevant processes underlying the functioning of DNA microarrays.
[ { "created": "Thu, 4 May 2006 09:18:07 GMT", "version": "v1" } ]
2007-05-23
[ [ "Heim", "T.", "" ], [ "Wolterink", "J. Klein", "" ], [ "Carlon", "E.", "" ], [ "Barkema", "G. T.", "" ] ]
In the past couple of years several studies have shown that hybridization in Affymetrix DNA microarrays can be rather well understood on the basis of simple models of physical chemistry. In the majority of the cases a Langmuir isotherm was used to fit experimental data. Although there is a general consensus about this approach, some discrepancies between different studies are evident. For instance, some authors have fitted the hybridization affinities from the microarray fluorescent intensities, while others used affinities obtained from melting experiments in solution. The former approach yields fitted affinities that at first sight are only partially consistent with solution values. In this paper we show that this discrepancy exists only superficially: a sufficiently complete model provides effective affinities which are fully consistent with those fitted to experimental data. This link provides new insight on the relevant processes underlying the functioning of DNA microarrays.
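The Langmuir isotherm invoked above predicts the hybridized fraction of probe sites as a saturating function of target concentration. A small numerical sketch (the affinity value K below is purely illustrative, not fitted to any microarray data):

```python
def langmuir(c, K):
    """Fraction of probe sites hybridized at target concentration c,
    given an effective affinity K (Langmuir isotherm)."""
    return c * K / (1.0 + c * K)

# saturation behaviour: the hybridized fraction approaches 1 once c*K >> 1
K = 1e10  # illustrative effective affinity (1/M)
for c in (1e-12, 1e-10, 1e-8):
    print(c, langmuir(c, K))
```

The half-saturation point sits at c = 1/K, which is one way effective affinities are read off fitted intensity curves.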
2009.02707
Bryan M. Li
Bryan M. Li, Theoklitos Amvrosiadis, Nathalie Rochefort, Arno Onken
Synthesising Realistic Calcium Traces of Neuronal Populations Using GAN
null
null
null
null
q-bio.NC cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Calcium imaging has become a powerful and popular technique to monitor the activity of large populations of neurons in vivo. However, for ethical considerations and despite recent technical developments, recordings are still constrained to a limited number of trials and animals. This limits the amount of data available from individual experiments and hinders the development of analysis techniques and models for more realistic sizes of neuronal populations. The ability to artificially synthesize realistic neuronal calcium signals could greatly alleviate this problem by scaling up the number of trials. Here, we propose a Generative Adversarial Network (GAN) model to generate realistic calcium signals as seen in neuronal somata with calcium imaging. To this end, we propose CalciumGAN, a model based on the WaveGAN architecture and train it on calcium fluorescent signals with the Wasserstein distance. We test the model on artificial data with known ground-truth and show that the distribution of the generated signals closely resembles the underlying data distribution. Then, we train the model on real calcium traces recorded from the primary visual cortex of behaving mice and confirm that the deconvolved spike trains match the statistics of the recorded data. Together, these results demonstrate that our model can successfully generate realistic calcium traces, thereby providing the means to augment existing datasets of neuronal activity for enhanced data exploration and modelling.
[ { "created": "Sun, 6 Sep 2020 10:58:11 GMT", "version": "v1" }, { "created": "Tue, 8 Sep 2020 03:58:43 GMT", "version": "v2" }, { "created": "Sat, 4 Feb 2023 11:40:48 GMT", "version": "v3" } ]
2023-02-07
[ [ "Li", "Bryan M.", "" ], [ "Amvrosiadis", "Theoklitos", "" ], [ "Rochefort", "Nathalie", "" ], [ "Onken", "Arno", "" ] ]
Calcium imaging has become a powerful and popular technique to monitor the activity of large populations of neurons in vivo. However, for ethical considerations and despite recent technical developments, recordings are still constrained to a limited number of trials and animals. This limits the amount of data available from individual experiments and hinders the development of analysis techniques and models for more realistic sizes of neuronal populations. The ability to artificially synthesize realistic neuronal calcium signals could greatly alleviate this problem by scaling up the number of trials. Here, we propose a Generative Adversarial Network (GAN) model to generate realistic calcium signals as seen in neuronal somata with calcium imaging. To this end, we propose CalciumGAN, a model based on the WaveGAN architecture and train it on calcium fluorescent signals with the Wasserstein distance. We test the model on artificial data with known ground-truth and show that the distribution of the generated signals closely resembles the underlying data distribution. Then, we train the model on real calcium traces recorded from the primary visual cortex of behaving mice and confirm that the deconvolved spike trains match the statistics of the recorded data. Together, these results demonstrate that our model can successfully generate realistic calcium traces, thereby providing the means to augment existing datasets of neuronal activity for enhanced data exploration and modelling.
0905.3297
Armen Allahverdyan
Armen E. Allahverdyan and Chin-Kun Hu
Replicators in Fine-grained Environment: Adaptation and Polymorphism
4 pages, 2 figures
Phys. Rev. Lett., 102, 058102 (2009)
10.1103/PhysRevLett.102.058102
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Selection in a time-periodic environment is modeled via the two-player replicator dynamics. For sufficiently fast environmental changes, this is reduced to a multi-player replicator dynamics in a constant environment. The two-player terms correspond to the time-averaged payoffs, while the three- and four-player terms arise from the adaptation of the morphs to their varying environment. Such multi-player (adaptive) terms can induce a stable polymorphism. The establishment of the polymorphism in partnership games [genetic selection] is accompanied by decreasing mean fitness of the population.
[ { "created": "Wed, 20 May 2009 12:46:56 GMT", "version": "v1" } ]
2015-05-13
[ [ "Allahverdyan", "Armen E.", "" ], [ "Hu", "Chin-Kun", "" ] ]
Selection in a time-periodic environment is modeled via the two-player replicator dynamics. For sufficiently fast environmental changes, this is reduced to a multi-player replicator dynamics in a constant environment. The two-player terms correspond to the time-averaged payoffs, while the three- and four-player terms arise from the adaptation of the morphs to their varying environment. Such multi-player (adaptive) terms can induce a stable polymorphism. The establishment of the polymorphism in partnership games [genetic selection] is accompanied by decreasing mean fitness of the population.
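The two-player replicator dynamics that this abstract starts from can be sketched numerically. A minimal Euler integration with an illustrative hawk-dove-like payoff matrix (a constant matrix, not the paper's time-periodic payoffs) converges to the interior polymorphism at x* = 2/3:

```python
import numpy as np

def replicator_two_player(A, x0=0.3, dt=0.01, steps=5000):
    """Euler-integrate the two-morph replicator equation
    x' = x(1-x) [ (A p)_1 - (A p)_2 ] for a constant payoff matrix A."""
    x = x0
    for _ in range(steps):
        p = np.array([x, 1.0 - x])
        f = A @ p                       # fitness of each morph
        x += dt * x * (1.0 - x) * (f[0] - f[1])
    return x

# off-diagonal payoffs exceed diagonal ones, so a stable interior
# polymorphism exists; equating fitnesses gives x* = 2/3 here
A = np.array([[0.0, 2.0],
              [1.0, 0.0]])
x_star = replicator_two_player(A)
print(round(x_star, 3))  # ≈ 0.667
```

The paper's multi-player (adaptive) terms would add cubic and quartic corrections to the fitness difference in the update step.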
q-bio/0405021
Christel Kamp
Christel Kamp, Kim Christensen
Spectral Analysis of Protein-Protein Interactions in Drosophila melanogaster
9 pages RevTeX including 8 figures
null
10.1103/PhysRevE.71.041911
null
q-bio.MN cond-mat.stat-mech
null
Within a case study on the protein-protein interaction network (PIN) of Drosophila melanogaster we investigate the relation between the network's spectral properties and its structural features such as the prevalence of specific subgraphs or duplicate nodes as a result of its evolutionary history. The discrete part of the spectral density shows fingerprints of the PIN's topological features including a preference for loop structures. Duplicate nodes are another prominent feature of PINs and we discuss their representation in the PIN's spectrum as well as their biological implications.
[ { "created": "Wed, 26 May 2004 20:21:53 GMT", "version": "v1" } ]
2009-11-10
[ [ "Kamp", "Christel", "" ], [ "Christensen", "Kim", "" ] ]
Within a case study on the protein-protein interaction network (PIN) of Drosophila melanogaster we investigate the relation between the network's spectral properties and its structural features such as the prevalence of specific subgraphs or duplicate nodes as a result of its evolutionary history. The discrete part of the spectral density shows fingerprints of the PIN's topological features including a preference for loop structures. Duplicate nodes are another prominent feature of PINs and we discuss their representation in the PIN's spectrum as well as their biological implications.
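The spectral fingerprint of duplicate nodes discussed above is easy to reproduce on a toy graph: non-adjacent twin nodes (nodes with identical neighbourhoods) contribute zero eigenvalues to the adjacency spectrum. A minimal sketch, using a star graph rather than any real PIN data:

```python
import numpy as np

def adjacency_spectrum(edges, n):
    """Eigenvalues of the adjacency matrix of an undirected graph
    on nodes 0..n-1, sorted ascending."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.sort(np.linalg.eigvalsh(A))

# nodes 1, 2 and 3 are duplicates of one another (each links only to
# node 0), so the spectrum picks up repeated zero eigenvalues
spec = adjacency_spectrum([(0, 1), (0, 2), (0, 3)], n=4)
print(np.round(spec, 6))
```

For the star on four nodes the spectrum is ±√3 plus a doubly degenerate zero, the degeneracy being the trace the duplicates leave behind.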
1107.3112
Michael Courtney
Joshua Daviscourt, Joshua Huertas, and Michael Courtney
An Assessment of Weight-Length Relationships for Muskellunge, Northern Pike, and Chain Pickerel In Carlander's Handbook of Freshwater Fishery Biology
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Carlander's Handbook of Freshwater Fishery Biology (1969) contains life history data from many species of freshwater fish found in North America. It has been cited over 1200 times and used to produce standard-weight curves for some species. Recent work (Cole-Fletcher et al. 2011) suggests Carlander (1969) contains numerous errors in listed weight-length equations. This paper assesses the weight-length relationships listed in Carlander for muskellunge, northern pike, and chain pickerel by comparing graphs of the weight vs. length equations with other data listed and with standard weight curves published by independent sources. A number of discrepancies are identified through this analysis and new weight-length relationships are produced from listed data.
[ { "created": "Fri, 15 Jul 2011 17:26:24 GMT", "version": "v1" } ]
2011-07-18
[ [ "Daviscourt", "Joshua", "" ], [ "Huertas", "Joshua", "" ], [ "Courtney", "Michael", "" ] ]
Carlander's Handbook of Freshwater Fishery Biology (1969) contains life history data from many species of freshwater fish found in North America. It has been cited over 1200 times and used to produce standard-weight curves for some species. Recent work (Cole-Fletcher et al. 2011) suggests Carlander (1969) contains numerous errors in listed weight-length equations. This paper assesses the weight-length relationships listed in Carlander for muskellunge, northern pike, and chain pickerel by comparing graphs of the weight vs. length equations with other data listed and with standard weight curves published by independent sources. A number of discrepancies are identified through this analysis and new weight-length relationships are produced from listed data.
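Weight-length relationships of the kind assessed above take the standard allometric form W = a L^b, usually fitted by linear regression on log-transformed data. A sketch with synthetic measurements (the data below are made up so that the recovery of a and b is checkable; they are not Carlander's values):

```python
import numpy as np

def fit_weight_length(lengths_mm, weights_g):
    """Fit W = a * L^b by linear regression on log10-transformed data,
    the standard form of fish weight-length relationships."""
    logL, logW = np.log10(lengths_mm), np.log10(weights_g)
    b, loga = np.polyfit(logL, logW, 1)   # slope = b, intercept = log10(a)
    return 10.0 ** loga, b

# illustrative synthetic measurements generated from W = 1e-5 * L^3
L = np.array([200.0, 300.0, 400.0, 500.0])
W = 1e-5 * L ** 3
a, b = fit_weight_length(L, W)
print(a, b)
```

Plotting such fitted curves against a species' listed data points is essentially the consistency check the paper performs.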
1609.00779
Hongyu Miao
Shupeng Gui, Rui Chen, Liang Wu, Ji Liu, Hongyu Miao
A Scalable Algorithm for Structure Identification of Complex Gene Regulatory Network from Temporal Expression Data
14 pages, 2 figures, 2 tables
BMC Bioinformatics 2017 18:74
10.1186/s12859-017-1489-z
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Gene regulatory interactions are of fundamental importance to various biological functions and processes. However, only a few previous computational studies have claimed success in revealing genome-wide regulatory landscapes from temporal gene expression data, especially for complex eukaryotes like human. Moreover, recent work suggests that these methods still suffer from the curse of dimensionality if network size increases to 100 or higher. Result: We present a novel scalable algorithm for identifying genome-wide regulatory network structures. The highlight of our method is that its superior performance does not degenerate even for a network size on the order of $10^4$, and is thus readily applicable to large-scale complex networks. Such a breakthrough is achieved by considering both prior biological knowledge and multiple topological properties (i.e., sparsity and hub gene structure) of complex networks in the regularized formulation. We also illustrate the application of our algorithm in practice using the time-course expression data from an influenza infection study in respiratory epithelial cells. Availability and Implementation: The algorithm described in this article is implemented in MATLAB$^\circledR$. The source code is freely available from https://github.com/Hongyu-Miao/DMI.git. Contact: jliu@cs.rochester.edu; hongyu.miao@uth.tmc.edu Supplementary information: Supplementary data are available online.
[ { "created": "Sat, 3 Sep 2016 01:52:19 GMT", "version": "v1" }, { "created": "Wed, 8 Feb 2017 15:22:24 GMT", "version": "v2" } ]
2017-02-09
[ [ "Gui", "Shupeng", "" ], [ "Chen", "Rui", "" ], [ "Wu", "Liang", "" ], [ "Liu", "Ji", "" ], [ "Miao", "Hongyu", "" ] ]
Motivation: Gene regulatory interactions are of fundamental importance to various biological functions and processes. However, only a few previous computational studies have claimed success in revealing genome-wide regulatory landscapes from temporal gene expression data, especially for complex eukaryotes like human. Moreover, recent work suggests that these methods still suffer from the curse of dimensionality if network size increases to 100 or higher. Result: We present a novel scalable algorithm for identifying genome-wide regulatory network structures. The highlight of our method is that its superior performance does not degenerate even for a network size on the order of $10^4$, and is thus readily applicable to large-scale complex networks. Such a breakthrough is achieved by considering both prior biological knowledge and multiple topological properties (i.e., sparsity and hub gene structure) of complex networks in the regularized formulation. We also illustrate the application of our algorithm in practice using the time-course expression data from an influenza infection study in respiratory epithelial cells. Availability and Implementation: The algorithm described in this article is implemented in MATLAB$^\circledR$. The source code is freely available from https://github.com/Hongyu-Miao/DMI.git. Contact: jliu@cs.rochester.edu; hongyu.miao@uth.tmc.edu Supplementary information: Supplementary data are available online.
2201.07233
Michelle Adams
Berk C. Ugurdag, Serena Akt\"urk, Michelle Adams
Meta-analysis for Discovering Which Genes are Differentially Expressed in Neuroinflammation
6 pages, 2 tables
null
null
null
q-bio.OT
http://creativecommons.org/publicdomain/zero/1.0/
Neuroinflammation is a significant aspect of many neurological diseases of Homo sapiens, and the genes that are differentially expressed in this process must be well understood in order to grasp the nature of such diseases. We have conducted a meta-analysis (based on a combined adjusted P value and logFC scheme) of 6 multi-species (Homo sapiens, Mus musculus) datasets (available on GEO, short for Gene Expression Omnibus) obtained through microarray technology. Our analysis shows that the genes coding for pleckstrin homology domain and galectin-9 proteins take part in neuroinflammation in microglia.
[ { "created": "Tue, 18 Jan 2022 18:14:02 GMT", "version": "v1" } ]
2022-01-20
[ [ "Ugurdag", "Berk C.", "" ], [ "Aktürk", "Serena", "" ], [ "Adams", "Michelle", "" ] ]
Neuroinflammation is a significant aspect of many neurological diseases of Homo sapiens, and the genes that are differentially expressed in this process must be well understood in order to grasp the nature of such diseases. We have conducted a meta-analysis (based on a combined adjusted P value and logFC scheme) of 6 multi-species (Homo sapiens, Mus musculus) datasets (available on GEO, short for Gene Expression Omnibus) obtained through microarray technology. Our analysis shows that the genes coding for pleckstrin homology domain and galectin-9 proteins take part in neuroinflammation in microglia.
2007.03902
Paul Hurtado
Paul J. Hurtado and Cameron Richards
Building Mean Field State Transition Models Using The Generalized Linear Chain Trick and Continuous Time Markov Chain Theory
27 pages, 4 figures, 2 ancillary files (R code for two figures). Journal of Biological Dynamics (2021)
null
10.1080/17513758.2021.1912418
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The well-known Linear Chain Trick (LCT) allows modelers to derive mean field ODEs that assume gamma (Erlang) distributed passage times, by transitioning individuals sequentially through a chain of sub-states. The time spent in these states is the sum of $k$ exponentially distributed random variables, and is thus gamma (Erlang) distributed. The Generalized Linear Chain Trick (GLCT) extends this technique to the much broader phase-type family of distributions, which includes exponential, Erlang, hypoexponential, and Coxian distributions. Intuitively, phase-type distributions are the absorption time distributions for continuous time Markov chains (CTMCs). Here we review CTMCs and phase-type distributions, then illustrate how to use the GLCT to efficiently build mean field ODE models from underlying stochastic model assumptions. We generalize the Rosenzweig-MacArthur and SEIR models and show the benefits of using the GLCT to compute numerical solutions. These results highlight some practical benefits, and the intuitive nature, of using the GLCT to derive ODE models from first principles.
[ { "created": "Wed, 8 Jul 2020 05:02:42 GMT", "version": "v1" } ]
2021-05-25
[ [ "Hurtado", "Paul J.", "" ], [ "Richards", "Cameron", "" ] ]
The well-known Linear Chain Trick (LCT) allows modelers to derive mean field ODEs that assume gamma (Erlang) distributed passage times, by transitioning individuals sequentially through a chain of sub-states. The time spent in these states is the sum of $k$ exponentially distributed random variables, and is thus gamma (Erlang) distributed. The Generalized Linear Chain Trick (GLCT) extends this technique to the much broader phase-type family of distributions, which includes exponential, Erlang, hypoexponential, and Coxian distributions. Intuitively, phase-type distributions are the absorption time distributions for continuous time Markov chains (CTMCs). Here we review CTMCs and phase-type distributions, then illustrate how to use the GLCT to efficiently build mean field ODE models from underlying stochastic model assumptions. We generalize the Rosenzweig-MacArthur and SEIR models and show the benefits of using the GLCT to compute numerical solutions. These results highlight some practical benefits, and the intuitive nature, of using the GLCT to derive ODE models from first principles.
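The Linear Chain Trick construction summarized above (passage through a chain of k exponential stages yields an Erlang-distributed total dwell time) can be checked by direct simulation. A minimal sketch, with illustrative values of k and the per-stage rate:

```python
import random

def erlang_passage_time(k, rate, rng):
    """Total dwell time through a chain of k sequential stages, each
    exponentially distributed with the given per-stage rate
    (the LCT construction of an Erlang(k, rate) passage time)."""
    return sum(rng.expovariate(rate) for _ in range(k))

rng = random.Random(42)
k, rate, n = 5, 2.0, 20000
samples = [erlang_passage_time(k, rate, rng) for _ in range(n)]
mean = sum(samples) / n
print(mean)  # theoretical Erlang(k, rate) mean is k / rate = 2.5
```

The GLCT generalizes exactly this picture by letting the stage-transition structure be an arbitrary absorbing CTMC, giving phase-type rather than Erlang passage times.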
2010.01914
Patrick Krauss
Patrick Krauss and Achim Schilling
Towards a Cognitive Computational Neuroscience of Auditory Phantom Perceptions
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to gain a mechanistic understanding of how tinnitus emerges in the brain, we must build biologically plausible computational models that mimic both tinnitus development and perception, and test the tentative models with brain and behavioral experiments. With a special focus on tinnitus research, we review recent work at the intersection of artificial intelligence, psychology and neuroscience, indicating a new research agenda that follows the idea that experiments will yield theoretical insight only when employed to test brain-computational models. This view challenges the popular belief that tinnitus research is primarily data-limited, and that producing large, multi-modal, and complex datasets, analyzed with advanced data analysis algorithms, will finally lead to fundamental insights into how tinnitus emerges. However, there is converging evidence that, although modern technologies allow assessing neural activity in unprecedentedly rich ways in both animals and humans, empirically testing one verbally defined hypothesis about tinnitus after another will never lead to a mechanistic understanding. Instead, hypothesis testing needs to be complemented with the construction of computational models that generate verifiable predictions. We argue that, even though contemporary artificial intelligence and machine learning approaches largely lack biological plausibility, the models to be constructed will have to draw on concepts from these fields, since they have already proven to do well in modeling brain function. Nevertheless, biological fidelity will have to be increased successively, leading to ever better and more fine-grained models, eventually allowing even possible treatment strategies to be tested in silico before application in animal or patient studies.
[ { "created": "Mon, 5 Oct 2020 10:55:03 GMT", "version": "v1" } ]
2020-10-06
[ [ "Krauss", "Patrick", "" ], [ "Schilling", "Achim", "" ] ]
In order to gain a mechanistic understanding of how tinnitus emerges in the brain, we must build biologically plausible computational models that mimic both tinnitus development and perception, and test the tentative models with brain and behavioral experiments. With a special focus on tinnitus research, we review recent work at the intersection of artificial intelligence, psychology and neuroscience, indicating a new research agenda that follows the idea that experiments will yield theoretical insight only when employed to test brain-computational models. This view challenges the popular belief that tinnitus research is primarily data-limited, and that producing large, multi-modal, and complex datasets, analyzed with advanced data analysis algorithms, will finally lead to fundamental insights into how tinnitus emerges. However, there is converging evidence that, although modern technologies allow assessing neural activity in unprecedentedly rich ways in both animals and humans, empirically testing one verbally defined hypothesis about tinnitus after another will never lead to a mechanistic understanding. Instead, hypothesis testing needs to be complemented with the construction of computational models that generate verifiable predictions. We argue that, even though contemporary artificial intelligence and machine learning approaches largely lack biological plausibility, the models to be constructed will have to draw on concepts from these fields, since they have already proven to do well in modeling brain function. Nevertheless, biological fidelity will have to be increased successively, leading to ever better and more fine-grained models, eventually allowing even possible treatment strategies to be tested in silico before application in animal or patient studies.
1607.07970
Krzysztof Bartoszek
Krzysztof Bartoszek, Sylvain Gl\'emin, Ingemar Kaj, Martin Lascoux
The Ornstein-Uhlenbeck process with migration: evolution with interactions
null
Journal of Theoretical Biology 429:35-45, 2017
10.1016/j.jtbi.2017.06.011
null
q-bio.PE math.PR stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Ornstein-Uhlenbeck (OU) process plays a major role in the analysis of the evolution of phenotypic traits along phylogenies. The standard OU process includes drift and stabilizing selection and assumes that species evolve independently. However, especially in plants, there is ample evidence of hybridization and introgression during evolution. In this work we present a statistical approach with analytical solutions that allows for the inclusion of adaptation and migration in a common phylogenetic framework. We furthermore present a detailed simulation study that clearly indicates the adverse effects of ignoring migration. Similarity between species due to migration could be misinterpreted as very strong convergent evolution without proper correction for these additional dependencies. Our model can also be useful for studying local adaptation among populations within the same species. Finally, we show that our model can be interpreted in terms of ecological interactions between species, providing a general framework for the evolution of traits between "interacting" species or populations.
[ { "created": "Wed, 27 Jul 2016 06:10:03 GMT", "version": "v1" } ]
2020-11-23
[ [ "Bartoszek", "Krzysztof", "" ], [ "Glémin", "Sylvain", "" ], [ "Kaj", "Ingemar", "" ], [ "Lascoux", "Martin", "" ] ]
The Ornstein-Uhlenbeck (OU) process plays a major role in the analysis of the evolution of phenotypic traits along phylogenies. The standard OU process includes drift and stabilizing selection and assumes that species evolve independently. However, especially in plants, there is ample evidence of hybridization and introgression during evolution. In this work we present a statistical approach with analytical solutions that allows for the inclusion of adaptation and migration in a common phylogenetic framework. We furthermore present a detailed simulation study that clearly indicates the adverse effects of ignoring migration. Similarity between species due to migration could be misinterpreted as very strong convergent evolution without proper correction for these additional dependencies. Our model can also be useful for studying local adaptation among populations within the same species. Finally, we show that our model can be interpreted in terms of ecological interactions between species, providing a general framework for the evolution of traits between "interacting" species or populations.
1006.0018
Teruhiko Yoneyama
Teruhiko Yoneyama and Mukkai S. Krishnamoorthy
Simulating the Spread of Influenza Pandemic of 2009 Considering International Traffic
null
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/3.0/
Pandemics have the potential to cause immense disruption and damage to communities and societies. In this paper, we model the Influenza Pandemic of 2009. We propose a hybrid model to determine how the pandemic spreads through the world. The model combines an SEIR-based model for local areas with a network model for the global connections between countries, based on data on international travelers. Our interest is to reproduce the situation using data from the early stage of the pandemic and to predict the future transition by extending the simulation cycle. Without considering the tendency of seasonal flu, the simulation does not predict the second peak of the pandemic observed in the real world. However, when the seasonal tendency is considered, the simulation predicts the next peak in winter. Thus we consider the seasonal tendency to be an important factor in the spread of the pandemic.
[ { "created": "Tue, 11 May 2010 22:10:23 GMT", "version": "v1" } ]
2010-06-02
[ [ "Yoneyama", "Teruhiko", "" ], [ "Krishnamoorthy", "Mukkai S.", "" ] ]
Pandemics have the potential to cause immense disruption and damage to communities and societies. In this paper, we model the Influenza Pandemic of 2009. We propose a hybrid model to determine how the pandemic spreads through the world. The model combines an SEIR-based model for local areas with a network model for the global connections between countries, based on data on international travelers. Our interest is to reproduce the situation using data from the early stage of the pandemic and to predict the future transition by extending the simulation cycle. Without considering the tendency of seasonal flu, the simulation does not predict the second peak of the pandemic observed in the real world. However, when the seasonal tendency is considered, the simulation predicts the next peak in winter. Thus we consider the seasonal tendency to be an important factor in the spread of the pandemic.
1111.4779
David Lukatsky
Ariel Afek, Itamar Sela, Noa Musa-Lempel, and David B. Lukatsky
Nonspecific transcription factor-DNA binding influences nucleosome occupancy in yeast
null
Biophysical Journal, Volume 101, Issue 10, 2465-2475 (2011)
10.1016/j.bpj.2011.10.012
null
q-bio.BM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantitative understanding of the principles regulating nucleosome occupancy on a genome-wide level is a central issue in eukaryotic genomics. Here, we address this question using budding yeast, Saccharomyces cerevisiae, as a model organism. We perform a genome-wide computational analysis of the nonspecific transcription factor (TF)-DNA binding free-energy landscape, and compare this landscape with experimentally determined nucleosome binding preferences. We show that DNA regions with enhanced nonspecific TF-DNA binding are statistically significantly depleted of nucleosomes. We suggest therefore that the competition of TFs with histones for nonspecific binding to genomic sequences might be an important mechanism influencing nucleosome-binding preferences in vivo. We also predict that poly(dA:dT) and poly(dC:dG) tracts represent genomic elements with the strongest propensity for nonspecific TF-DNA binding, thus allowing TFs to outcompete nucleosomes at these elements. Our results suggest that nonspecific TF-DNA binding might provide a barrier for statistical positioning of nucleosomes throughout the yeast genome. We predict that the strength of this barrier increases with the concentration of DNA binding proteins in a cell. We discuss the connection of the proposed mechanism with the recently discovered pathway of active nucleosome reconstitution.
[ { "created": "Mon, 21 Nov 2011 07:57:40 GMT", "version": "v1" } ]
2011-11-22
[ [ "Afek", "Ariel", "" ], [ "Sela", "Itamar", "" ], [ "Musa-Lempel", "Noa", "" ], [ "Lukatsky", "David B.", "" ] ]
Quantitative understanding of the principles regulating nucleosome occupancy on a genome-wide level is a central issue in eukaryotic genomics. Here, we address this question using budding yeast, Saccharomyces cerevisiae, as a model organism. We perform a genome-wide computational analysis of the nonspecific transcription factor (TF)-DNA binding free-energy landscape, and compare this landscape with experimentally determined nucleosome binding preferences. We show that DNA regions with enhanced nonspecific TF-DNA binding are statistically significantly depleted of nucleosomes. We suggest therefore that the competition of TFs with histones for nonspecific binding to genomic sequences might be an important mechanism influencing nucleosome-binding preferences in vivo. We also predict that poly(dA:dT) and poly(dC:dG) tracts represent genomic elements with the strongest propensity for nonspecific TF-DNA binding, thus allowing TFs to outcompete nucleosomes at these elements. Our results suggest that nonspecific TF-DNA binding might provide a barrier for statistical positioning of nucleosomes throughout the yeast genome. We predict that the strength of this barrier increases with the concentration of DNA binding proteins in a cell. We discuss the connection of the proposed mechanism with the recently discovered pathway of active nucleosome reconstitution.
1603.00459
Renaud Bastien
Renaud Bastien, Yasmine Meroz
The Kinematics of Plant Nutation Reveals a Simple Relation Between Curvature and the Orientation of Differential Growth
null
null
10.1371/journal.pcbi.1005238
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Nutation is an oscillatory movement that plants display during their development. Despite its ubiquity among plant movements, the relation between the observed movement and the underlying biological mechanisms remains unclear. Here we show that the kinematics of the full organ in 3D gives a simple picture of plant nutation, where the orientation of the curvature along the main axis of the organ aligns with the direction of maximal differential growth. Within this framework we reexamine the validity of widely used experimental measurements of the apical tip as markers of growth dynamics. We show that though this relation is correct under certain conditions, it does not generally hold, and is not sufficient to uncover the specific role of each mechanism. As an example we re-interpret previously measured experimental observations using our model.
[ { "created": "Tue, 1 Mar 2016 11:41:46 GMT", "version": "v1" }, { "created": "Tue, 25 Oct 2016 15:10:41 GMT", "version": "v2" } ]
2017-02-08
[ [ "Bastien", "Renaud", "" ], [ "Meroz", "Yasmine", "" ] ]
Nutation is an oscillatory movement that plants display during their development. Despite its ubiquity among plant movements, the relation between the observed movement and the underlying biological mechanisms remains unclear. Here we show that the kinematics of the full organ in 3D gives a simple picture of plant nutation, where the orientation of the curvature along the main axis of the organ aligns with the direction of maximal differential growth. Within this framework we reexamine the validity of widely used experimental measurements of the apical tip as markers of growth dynamics. We show that though this relation is correct under certain conditions, it does not generally hold, and is not sufficient to uncover the specific role of each mechanism. As an example we re-interpret previously measured experimental observations using our model.
2010.12065
Nabit Bajwa
Nabit Bajwa, Kedar Bajwa, Atif Rana, M. Faique Shakeel, Kashif Haqqi and Suleiman Ali Khan
A generalized deep learning model for multi-disease Chest X-Ray diagnostics
null
null
null
null
q-bio.QM cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the generalizability of a deep convolutional neural network (CNN) on the task of disease classification from chest x-rays collected over multiple sites. We systematically train the model using datasets from three independent sites with different patient populations: National Institute of Health (NIH), Stanford University Medical Centre (CheXpert), and Shifa International Hospital (SIH). We formulate a sequential training approach and demonstrate that the model produces generalized prediction performance using held-out test sets from the three sites. Our model generalizes better when trained on multiple datasets, with the CheXpert-Shifa-NET model performing significantly better (p-values < 0.05) than the models trained on individual datasets for 3 out of the 4 distinct disease classes. The code for training the model will be made available open source at: www.github.com/link-to-code at the time of publication.
[ { "created": "Sat, 17 Oct 2020 18:57:40 GMT", "version": "v1" } ]
2020-10-26
[ [ "Bajwa", "Nabit", "" ], [ "Bajwa", "Kedar", "" ], [ "Rana", "Atif", "" ], [ "Shakeel", "M. Faique", "" ], [ "Haqqi", "Kashif", "" ], [ "Khan", "Suleiman Ali", "" ] ]
We investigate the generalizability of a deep convolutional neural network (CNN) on the task of disease classification from chest x-rays collected over multiple sites. We systematically train the model using datasets from three independent sites with different patient populations: National Institute of Health (NIH), Stanford University Medical Centre (CheXpert), and Shifa International Hospital (SIH). We formulate a sequential training approach and demonstrate that the model produces generalized prediction performance using held-out test sets from the three sites. Our model generalizes better when trained on multiple datasets, with the CheXpert-Shifa-NET model performing significantly better (p-values < 0.05) than the models trained on individual datasets for 3 out of the 4 distinct disease classes. The code for training the model will be made available open source at: www.github.com/link-to-code at the time of publication.
1407.5105
Ankit Khambhati
Ankit Khambhati, Brian Litt, Danielle S. Bassett
Dynamic network drivers of seizure generation, propagation and termination in human epilepsy
7 pages, 5 figures; Supplementary Materials: 5 pages, 3 figures, 1 table
null
10.1371/journal.pcbi.1004608
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Drug-resistant epilepsy is traditionally characterized by pathologic cortical tissue comprised of seizure-initiating `foci'. These `foci' are thought to be embedded within an epileptic network whose functional architecture dynamically reorganizes during seizures through synchronous and asynchronous neurophysiologic processes. Critical to understanding these dynamics is identifying the synchronous connections that link foci to surrounding tissue and investigating how these connections facilitate seizure generation and termination. We use intracranial recordings from neocortical epilepsy patients undergoing pre-surgical evaluation to analyze functional connectivity before and during seizures. We develop and apply a novel technique to track network reconfiguration in time and to parse these reconfiguration dynamics into distinct seizure states, each characterized by unique patterns of network connections that differ in their strength and topography. Our approach suggests that seizures are generated when the synchronous relationships that isolate seizure `foci' from the surrounding epileptic network are broken down. As seizures progress, foci reappear as isolated subnetworks, marking a shift in network state that may aid seizure termination. Collectively, our observations have important theoretical implications for understanding the spatial involvement of distributed cortical structures in the dynamics of seizure generation, propagation and termination, and have practical significance in determining which circuits to modulate with implantable devices.
[ { "created": "Fri, 18 Jul 2014 20:12:16 GMT", "version": "v1" } ]
2016-02-17
[ [ "Khambhati", "Ankit", "" ], [ "Litt", "Brian", "" ], [ "Bassett", "Danielle S.", "" ] ]
Drug-resistant epilepsy is traditionally characterized by pathologic cortical tissue comprised of seizure-initiating `foci'. These `foci' are thought to be embedded within an epileptic network whose functional architecture dynamically reorganizes during seizures through synchronous and asynchronous neurophysiologic processes. Critical to understanding these dynamics is identifying the synchronous connections that link foci to surrounding tissue and investigating how these connections facilitate seizure generation and termination. We use intracranial recordings from neocortical epilepsy patients undergoing pre-surgical evaluation to analyze functional connectivity before and during seizures. We develop and apply a novel technique to track network reconfiguration in time and to parse these reconfiguration dynamics into distinct seizure states, each characterized by unique patterns of network connections that differ in their strength and topography. Our approach suggests that seizures are generated when the synchronous relationships that isolate seizure `foci' from the surrounding epileptic network are broken down. As seizures progress, foci reappear as isolated subnetworks, marking a shift in network state that may aid seizure termination. Collectively, our observations have important theoretical implications for understanding the spatial involvement of distributed cortical structures in the dynamics of seizure generation, propagation and termination, and have practical significance in determining which circuits to modulate with implantable devices.
q-bio/0703006
Thierry Rabilloud
Mireille Chevallet (BBSI), H\'el\`ene Diemer (IPHC), Alain van Dorsselaer (IPHC), Christian Villiers, Thierry Rabilloud (BBSI)
Toward a better analysis of secreted proteins: the example of the myeloid cells secretome
in press in Proteomics
null
null
null
q-bio.GN
null
The analysis of secreted proteins represents a challenge for current proteomics techniques. Proteins are usually secreted at low concentrations in the culture media, which makes their recovery difficult. In addition, culture media are rich in salts and other compounds interfering with most proteomics techniques, which makes selective precipitation of proteins almost mandatory for a correct subsequent proteomics analysis. Last but not least, the non-secreted proteins liberated in the culture medium upon lysis of a few dead cells heavily contaminate the so-called secreted protein preparations. Several techniques have been used in the past for concentration of proteins secreted in culture media. These techniques present several drawbacks, such as coprecipitation of salts or poor yields at low protein concentrations. Improved techniques based on carrier-assisted trichloroacetic acid precipitation are described and discussed in this paper. These techniques have been used to analyse the secretome of myeloid cells (macrophages, dendritic cells) and enabled the analysis of proteins secreted at concentrations close to 1 ng/ml, thereby allowing the detection of some of the cytokines (TNF, IL-12) secreted by the myeloid cells upon activation by bacterial products.
[ { "created": "Fri, 2 Mar 2007 10:35:11 GMT", "version": "v1" } ]
2016-08-14
[ [ "Chevallet", "Mireille", "", "BBSI" ], [ "Diemer", "Hélène", "", "IPHC" ], [ "van Dorsselaer", "Alain", "", "IPHC" ], [ "Villiers", "Christian", "", "BBSI" ], [ "Rabilloud", "Thierry", "", "BBSI" ] ]
The analysis of secreted proteins represents a challenge for current proteomics techniques. Proteins are usually secreted at low concentrations in the culture media, which makes their recovery difficult. In addition, culture media are rich in salts and other compounds interfering with most proteomics techniques, which makes selective precipitation of proteins almost mandatory for a correct subsequent proteomics analysis. Last but not least, the non-secreted proteins liberated in the culture medium upon lysis of a few dead cells heavily contaminate the so-called secreted protein preparations. Several techniques have been used in the past for concentration of proteins secreted in culture media. These techniques present several drawbacks, such as coprecipitation of salts or poor yields at low protein concentrations. Improved techniques based on carrier-assisted trichloroacetic acid precipitation are described and discussed in this paper. These techniques have been used to analyse the secretome of myeloid cells (macrophages, dendritic cells) and enabled the analysis of proteins secreted at concentrations close to 1 ng/ml, thereby allowing the detection of some of the cytokines (TNF, IL-12) secreted by the myeloid cells upon activation by bacterial products.
2403.18666
Kieren Sharma
Kieren Sharma, Lucia Marucci, Zahraa S. Abdallah
FluxGAT: Integrating Flux Sampling with Graph Neural Networks for Unbiased Gene Essentiality Classification
null
null
null
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene essentiality, the necessity of a specific gene for the survival of an organism, is crucial to our understanding of cellular processes and identifying drug targets. Experimental determination of gene essentiality requires large growth screens that are time-consuming and expensive, motivating the development of in-silico approaches. Existing methods predominantly utilise flux balance analysis (FBA), a constraint-based optimisation algorithm; however, they are fundamentally limited by the necessity of a predefined cellular objective function. This requirement introduces an element of observer bias, as the objective function often reflects the researcher's assumptions rather than the cell's biological goals. Here, we present FluxGAT, a graph neural network (GNN) model capable of predicting gene essentiality directly from graphical representations of flux sampling data. Flux sampling removes the need for objective functions, thereby eliminating observer bias. FluxGAT leverages the unique strengths of GNNs in learning representations of complex relationships within metabolic reaction networks. The success of our approach in predicting experimentally determined gene essentiality, with almost double the sensitivity of FBA, explores the possibility of predicting cellular phenotypes in cases when objectives are less understood. Thus, we demonstrate a method for more general gene essentiality predictions across a broader spectrum of biological systems and environments.
[ { "created": "Wed, 27 Mar 2024 15:09:46 GMT", "version": "v1" }, { "created": "Thu, 28 Mar 2024 14:52:36 GMT", "version": "v2" } ]
2024-03-29
[ [ "Sharma", "Kieren", "" ], [ "Marucci", "Lucia", "" ], [ "Abdallah", "Zahraa S.", "" ] ]
Gene essentiality, the necessity of a specific gene for the survival of an organism, is crucial to our understanding of cellular processes and identifying drug targets. Experimental determination of gene essentiality requires large growth screens that are time-consuming and expensive, motivating the development of in-silico approaches. Existing methods predominantly utilise flux balance analysis (FBA), a constraint-based optimisation algorithm; however, they are fundamentally limited by the necessity of a predefined cellular objective function. This requirement introduces an element of observer bias, as the objective function often reflects the researcher's assumptions rather than the cell's biological goals. Here, we present FluxGAT, a graph neural network (GNN) model capable of predicting gene essentiality directly from graphical representations of flux sampling data. Flux sampling removes the need for objective functions, thereby eliminating observer bias. FluxGAT leverages the unique strengths of GNNs in learning representations of complex relationships within metabolic reaction networks. The success of our approach in predicting experimentally determined gene essentiality, with almost double the sensitivity of FBA, explores the possibility of predicting cellular phenotypes in cases when objectives are less understood. Thus, we demonstrate a method for more general gene essentiality predictions across a broader spectrum of biological systems and environments.
2407.13514
Anass B. El-Yaagoubi
Anass B. El-Yaagoubi and Moo K. Chung and Hernando Ombao
Topological Analysis of Seizure-Induced Changes in Brain Hierarchy Through Effective Connectivity
null
null
null
null
q-bio.NC stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional Topological Data Analysis (TDA) methods, such as Persistent Homology (PH), rely on distance measures (e.g., cross-correlation, partial correlation, coherence, and partial coherence) that are symmetric by definition. While useful for studying topological patterns in functional brain connectivity, the main limitation of these methods is their inability to capture the directional dynamics - which is crucial for understanding effective brain connectivity. We propose the Causality-Based Topological Ranking (CBTR) method, which integrates Causal Inference (CI) to assess effective brain connectivity with Hodge Decomposition (HD) to rank brain regions based on their mutual influence. Our simulations confirm that the CBTR method accurately and consistently identifies hierarchical structures in multivariate time series data. Moreover, this method effectively identifies brain regions showing the most significant interaction changes with other regions during seizures using electroencephalogram (EEG) data. These results provide novel insights into the brain's hierarchical organization and illuminate the impact of seizures on its dynamics.
[ { "created": "Thu, 18 Jul 2024 13:45:08 GMT", "version": "v1" } ]
2024-07-19
[ [ "El-Yaagoubi", "Anass B.", "" ], [ "Chung", "Moo K.", "" ], [ "Ombao", "Hernando", "" ] ]
Traditional Topological Data Analysis (TDA) methods, such as Persistent Homology (PH), rely on distance measures (e.g., cross-correlation, partial correlation, coherence, and partial coherence) that are symmetric by definition. While useful for studying topological patterns in functional brain connectivity, the main limitation of these methods is their inability to capture the directional dynamics - which is crucial for understanding effective brain connectivity. We propose the Causality-Based Topological Ranking (CBTR) method, which integrates Causal Inference (CI) to assess effective brain connectivity with Hodge Decomposition (HD) to rank brain regions based on their mutual influence. Our simulations confirm that the CBTR method accurately and consistently identifies hierarchical structures in multivariate time series data. Moreover, this method effectively identifies brain regions showing the most significant interaction changes with other regions during seizures using electroencephalogram (EEG) data. These results provide novel insights into the brain's hierarchical organization and illuminate the impact of seizures on its dynamics.
2008.12104
Mai He
Mai He (1), Priya Skaria (1), Kasey Kreutz (1), Ling Chen (2), Ian Hagemann (1), Ebony B. Carter (3), Indira U. Mysorekar (1,3), D Michael Nelson (3), John Pfeifer (1), Louis P. Dehner (1) ((1) Department of Pathology & Immunology, Washington University in St. Louis School of Medicine, St. Louis, MO, USA (2) Division of Statistics, Washington University in St. Louis School of Medicine, St. Louis, MO, USA (3) Department of Obstetrics & Gynecology, Washington University School of Medicine in St. Louis, St. Louis, MO, USA)
Histopathology of Third Trimester Placenta from SARS-CoV-2-Positive Women
Two tables
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: This study aims to investigate whether maternal SARS-CoV-2 status affects placental pathology. Methods: A retrospective case-control study was conducted by reviewing charts and slides of placentas between April 1 and July 24, 2020. Clinical histories of COVID-19 were searched in the Pathology Database (CoPath). Controls were matched with SARS-CoV-2-negative women with singleton deliveries in the 3rd trimester. Individual and group pathological features were extracted from placental pathology reports. Results: Twenty-one 3rd-trimester placentas from SARS-CoV-2-positive women were identified and compared to 20 placentas from SARS-CoV-2-negative women. There were no significant differences in individual or group gross or microscopic pathological features between the groups. Within the SARS-CoV-2+ group, there were no differences between symptomatic and asymptomatic women. Conclusion: Placentas from SARS-CoV-2-positive women do not demonstrate a specific pathological pattern. Pregnancy complicated with COVID-19 during the 3rd trimester does not have a demonstrable effect on placental structure and pathology.
[ { "created": "Wed, 5 Aug 2020 20:35:52 GMT", "version": "v1" } ]
2020-08-28
[ [ "He", "Mai", "" ], [ "Skaria", "Priya", "" ], [ "Kreutz", "Kasey", "" ], [ "Chen", "Ling", "" ], [ "Hagemann", "Ian", "" ], [ "Carter", "Ebony B.", "" ], [ "Mysorekar", "Indira U.", "" ], [ "Nelson", "D Michael", "" ], [ "Pfeifer", "John", "" ], [ "Dehner", "Louis P.", "" ] ]
Background: This study aims to investigate whether maternal SARS-CoV-2 status affects placental pathology. Methods: A retrospective case-control study was conducted by reviewing charts and slides of placentas between April 1 and July 24, 2020. Clinical histories of COVID-19 were searched in the Pathology Database (CoPath). Controls were matched with SARS-CoV-2-negative women with singleton deliveries in the 3rd trimester. Individual and group pathological features were extracted from placental pathology reports. Results: Twenty-one 3rd-trimester placentas from SARS-CoV-2-positive women were identified and compared to 20 placentas from SARS-CoV-2-negative women. There were no significant differences in individual or group gross or microscopic pathological features between the groups. Within the SARS-CoV-2+ group, there were no differences between symptomatic and asymptomatic women. Conclusion: Placentas from SARS-CoV-2-positive women do not demonstrate a specific pathological pattern. Pregnancy complicated with COVID-19 during the 3rd trimester does not have a demonstrable effect on placental structure and pathology.
0912.5409
William Bialek
Gasper Tkacik, Elad Schneidman, Michael J. Berry II and William Bialek
Spin glass models for a network of real neurons
This is an extended version of arXiv:q-bio.NC/0611072
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ising models with pairwise interactions are the least structured, or maximum-entropy, probability distributions that exactly reproduce measured pairwise correlations between spins. Here we use this equivalence to construct Ising models that describe the correlated spiking activity of populations of 40 neurons in the salamander retina responding to natural movies. We show that pairwise interactions between neurons account for observed higher-order correlations, and that for groups of 10 or more neurons pairwise interactions can no longer be regarded as small perturbations in an independent system. We then construct network ensembles that generalize the network instances observed in the experiment, and study their thermodynamic behavior and coding capacity. Based on this construction, we can also create synthetic networks of 120 neurons, and find that with increasing size the networks operate closer to a critical point and start exhibiting collective behaviors reminiscent of spin glasses. We examine closely two such behaviors that could be relevant for neural code: tuning of the network to the critical point to maximize the ability to encode diverse stimuli, and using the metastable states of the Ising Hamiltonian as neural code words.
[ { "created": "Wed, 30 Dec 2009 02:59:05 GMT", "version": "v1" } ]
2009-12-31
[ [ "Tkacik", "Gasper", "" ], [ "Schneidman", "Elad", "" ], [ "Berry", "Michael J.", "II" ], [ "Bialek", "William", "" ] ]
Ising models with pairwise interactions are the least structured, or maximum-entropy, probability distributions that exactly reproduce measured pairwise correlations between spins. Here we use this equivalence to construct Ising models that describe the correlated spiking activity of populations of 40 neurons in the salamander retina responding to natural movies. We show that pairwise interactions between neurons account for observed higher-order correlations, and that for groups of 10 or more neurons pairwise interactions can no longer be regarded as small perturbations in an independent system. We then construct network ensembles that generalize the network instances observed in the experiment, and study their thermodynamic behavior and coding capacity. Based on this construction, we can also create synthetic networks of 120 neurons, and find that with increasing size the networks operate closer to a critical point and start exhibiting collective behaviors reminiscent of spin glasses. We examine closely two such behaviors that could be relevant for neural code: tuning of the network to the critical point to maximize the ability to encode diverse stimuli, and using the metastable states of the Ising Hamiltonian as neural code words.
q-bio/0310014
Balint Szabo
B. Szabo, Zs. Kornyei, J. Zach, D. Selmeczi, G. Csucs, A. Czirok, T. Vicsek
Auto-reverse nuclear migration in bipolar mammalian cells on micropatterned surfaces
Figures and supplemental videos: http://esr.elte.hu/nuclearmotility
Cell Motility and the Cytoskeleton 59(1), 38-49 (2004)
null
null
q-bio.CB
null
A novel assay based on micropatterning and time-lapse microscopy has been developed for the study of nuclear migration dynamics in cultured mammalian cells. When cultured on 10-20 um wide adhesive stripes, the motility of C6 glioma and primary mouse fibroblast cells is diminished. Nevertheless, nuclei perform an unexpected auto-reverse motion: when a migrating nucleus approaches the leading edge, it decelerates, changes the direction of motion and accelerates to move toward the other end of the elongated cell. During this process cells show signs of polarization closely following the direction of nuclear movement. The observed nuclear movement requires a functioning microtubular system, as revealed by experiments disrupting the main cytoskeletal components with specific drugs. On the basis of our results we argue that auto-reverse nuclear migration is due to forces determined by the interplay of microtubule dynamics and the changing position of the microtubule organizing center as the nucleus reaches the leading edge. Our assay recapitulates specific features of nuclear migration (cell polarization, oscillatory nuclear movement), while allowing the systematic study of a large number of individual cells. In particular, our experiments yielded the first direct evidence of reversive nuclear motion in mammalian cells, induced by attachment constraints.
[ { "created": "Tue, 14 Oct 2003 09:18:27 GMT", "version": "v1" }, { "created": "Sun, 9 May 2004 13:12:21 GMT", "version": "v2" }, { "created": "Tue, 17 Aug 2004 11:32:07 GMT", "version": "v3" } ]
2009-09-29
[ [ "Szabo", "B.", "" ], [ "Kornyei", "Zs.", "" ], [ "Zach", "J.", "" ], [ "Selmeczi", "D.", "" ], [ "Csucs", "G.", "" ], [ "Czirok", "A.", "" ], [ "Vicsek", "T.", "" ] ]
A novel assay based on micropatterning and time-lapse microscopy has been developed for the study of nuclear migration dynamics in cultured mammalian cells. When cultured on 10-20 um wide adhesive stripes, the motility of C6 glioma and primary mouse fibroblast cells is diminished. Nevertheless, nuclei perform an unexpected auto-reverse motion: when a migrating nucleus approaches the leading edge, it decelerates, changes the direction of motion and accelerates to move toward the other end of the elongated cell. During this process cells show signs of polarization closely following the direction of nuclear movement. The observed nuclear movement requires a functioning microtubular system, as revealed by experiments disrupting the main cytoskeletal components with specific drugs. On the basis of our results we argue that auto-reverse nuclear migration is due to forces determined by the interplay of microtubule dynamics and the changing position of the microtubule organizing center as the nucleus reaches the leading edge. Our assay recapitulates specific features of nuclear migration (cell polarization, oscillatory nuclear movement), while allowing the systematic study of a large number of individual cells. In particular, our experiments yielded the first direct evidence of reversive nuclear motion in mammalian cells, induced by attachment constraints.
1602.01730
Robert Wilkinson mr
Robert R. Wilkinson, Frank G. Ball, Kieran J. Sharkey
The deterministic Kermack-McKendrick model bounds the general stochastic epidemic
null
J. Appl. Probab. Vol. 53, No. 4 (2016)
null
null
q-bio.PE cond-mat.stat-mech math.PR physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove that, for Poisson transmission and recovery processes, the classic Susceptible $\to$ Infected $\to$ Recovered (SIR) epidemic model of Kermack and McKendrick provides, for any given time $t>0$, a strict lower bound on the expected number of susceptibles and a strict upper bound on the expected number of recoveries in the general stochastic SIR epidemic. The proof is based on the recent message passing representation of SIR epidemics applied to a complete graph.
[ { "created": "Thu, 4 Feb 2016 16:26:19 GMT", "version": "v1" }, { "created": "Mon, 21 Mar 2016 11:54:41 GMT", "version": "v2" }, { "created": "Fri, 10 Feb 2017 14:10:25 GMT", "version": "v3" } ]
2017-02-13
[ [ "Wilkinson", "Robert R.", "" ], [ "Ball", "Frank G.", "" ], [ "Sharkey", "Kieran J.", "" ] ]
We prove that, for Poisson transmission and recovery processes, the classic Susceptible $\to$ Infected $\to$ Recovered (SIR) epidemic model of Kermack and McKendrick provides, for any given time $t>0$, a strict lower bound on the expected number of susceptibles and a strict upper bound on the expected number of recoveries in the general stochastic SIR epidemic. The proof is based on the recent message passing representation of SIR epidemics applied to a complete graph.
2205.03635
Laurent Perrinet
Jean-Nicolas J\'er\'emie, Laurent U Perrinet
Ultrafast Image Categorization in Biology and Neural Models
null
null
10.3390/vision7020029
null
q-bio.NC cs.CV
http://creativecommons.org/licenses/by/4.0/
Humans are able to categorize images very efficiently, in particular to detect the presence of an animal very quickly. Recently, deep learning algorithms based on convolutional neural networks (CNNs) have achieved higher than human accuracy for a wide range of visual categorization tasks. However, the tasks on which these artificial networks are typically trained and evaluated tend to be highly specialized and do not generalize well, e.g., accuracy drops after image rotation. In this respect, biological visual systems are more flexible and efficient than artificial systems for more general tasks, such as recognizing an animal. To further the comparison between biological and artificial neural networks, we re-trained the standard VGG 16 CNN on two independent tasks that are ecologically relevant to humans: detecting the presence of an animal or an artifact. We show that re-training the network achieves a human-like level of performance, comparable to that reported in psychophysical tasks. In addition, we show that the categorization is better when the outputs of the models are combined. Indeed, animals (e.g., lions) tend to be less present in photographs that contain artifacts (e.g., buildings). Furthermore, these re-trained models were able to reproduce some unexpected behavioral observations from human psychophysics, such as robustness to rotation (e.g., an upside-down or tilted image) or to a grayscale transformation. Finally, we quantified the number of CNN layers required to achieve such performance and showed that good accuracy for ultrafast image categorization can be achieved with only a few layers, challenging the belief that image recognition requires deep sequential analysis of visual objects.
[ { "created": "Sat, 7 May 2022 11:19:40 GMT", "version": "v1" }, { "created": "Thu, 12 May 2022 14:40:25 GMT", "version": "v2" }, { "created": "Tue, 11 Oct 2022 12:01:17 GMT", "version": "v3" }, { "created": "Wed, 31 May 2023 05:30:51 GMT", "version": "v4" } ]
2023-06-01
[ [ "Jérémie", "Jean-Nicolas", "" ], [ "Perrinet", "Laurent U", "" ] ]
Humans are able to categorize images very efficiently, in particular to detect the presence of an animal very quickly. Recently, deep learning algorithms based on convolutional neural networks (CNNs) have achieved higher than human accuracy for a wide range of visual categorization tasks. However, the tasks on which these artificial networks are typically trained and evaluated tend to be highly specialized and do not generalize well, e.g., accuracy drops after image rotation. In this respect, biological visual systems are more flexible and efficient than artificial systems for more general tasks, such as recognizing an animal. To further the comparison between biological and artificial neural networks, we re-trained the standard VGG 16 CNN on two independent tasks that are ecologically relevant to humans: detecting the presence of an animal or an artifact. We show that re-training the network achieves a human-like level of performance, comparable to that reported in psychophysical tasks. In addition, we show that the categorization is better when the outputs of the models are combined. Indeed, animals (e.g., lions) tend to be less present in photographs that contain artifacts (e.g., buildings). Furthermore, these re-trained models were able to reproduce some unexpected behavioral observations from human psychophysics, such as robustness to rotation (e.g., an upside-down or tilted image) or to a grayscale transformation. Finally, we quantified the number of CNN layers required to achieve such performance and showed that good accuracy for ultrafast image categorization can be achieved with only a few layers, challenging the belief that image recognition requires deep sequential analysis of visual objects.
2311.04880
Christopher Miles
Christopher E. Miles, Scott A. McKinley, Fangyuan Ding, Richard B. Lehoucq
Inferring stochastic rates from heterogeneous snapshots of particle positions
33 pages, 6 figures
Bulletin of Mathematical Biology 86, 74 (2024)
10.1007/s11538-024-01301-4
null
q-bio.SC math.ST physics.bio-ph stat.AP stat.TH
http://creativecommons.org/licenses/by/4.0/
Many imaging techniques for biological systems -- like fixation of cells coupled with fluorescence microscopy -- provide sharp spatial resolution in reporting locations of individuals at a single moment in time but also destroy the dynamics they intend to capture. These snapshot observations contain no information about individual trajectories, but still encode information about movement and demographic dynamics, especially when combined with a well-motivated biophysical model. The relationship between spatially evolving populations and single-moment representations of their collective locations is well-established with partial differential equations (PDEs) and their inverse problems. However, experimental data is commonly a set of locations whose number is insufficient to approximate a continuous-in-space PDE solution. Here, motivated by popular subcellular imaging data of gene expression, we embrace the stochastic nature of the data and investigate the mathematical foundations of parametrically inferring demographic rates from snapshots of particles undergoing birth, diffusion, and death in a nuclear or cellular domain. Toward inference, we rigorously derive a connection between individual particle paths and their presentation as a Poisson spatial process. Using this framework, we investigate the properties of the resulting inverse problem and study factors that affect quality of inference. One pervasive feature of this experimental regime is the presence of cell-to-cell heterogeneity. Rather than being a hindrance, we show that cell-to-cell geometric heterogeneity can increase the quality of inference on dynamics for certain parameter regimes. Altogether, the results serve as a basis for more detailed investigations of subcellular spatial patterns of RNA molecules and other stochastically evolving populations that can only be observed for single instants in their time evolution.
[ { "created": "Wed, 8 Nov 2023 18:36:41 GMT", "version": "v1" } ]
2024-05-15
[ [ "Miles", "Christopher E.", "" ], [ "McKinley", "Scott A.", "" ], [ "Ding", "Fangyuan", "" ], [ "Lehoucq", "Richard B.", "" ] ]
Many imaging techniques for biological systems -- like fixation of cells coupled with fluorescence microscopy -- provide sharp spatial resolution in reporting locations of individuals at a single moment in time but also destroy the dynamics they intend to capture. These snapshot observations contain no information about individual trajectories, but still encode information about movement and demographic dynamics, especially when combined with a well-motivated biophysical model. The relationship between spatially evolving populations and single-moment representations of their collective locations is well-established with partial differential equations (PDEs) and their inverse problems. However, experimental data is commonly a set of locations whose number is insufficient to approximate a continuous-in-space PDE solution. Here, motivated by popular subcellular imaging data of gene expression, we embrace the stochastic nature of the data and investigate the mathematical foundations of parametrically inferring demographic rates from snapshots of particles undergoing birth, diffusion, and death in a nuclear or cellular domain. Toward inference, we rigorously derive a connection between individual particle paths and their presentation as a Poisson spatial process. Using this framework, we investigate the properties of the resulting inverse problem and study factors that affect quality of inference. One pervasive feature of this experimental regime is the presence of cell-to-cell heterogeneity. Rather than being a hindrance, we show that cell-to-cell geometric heterogeneity can increase the quality of inference on dynamics for certain parameter regimes. Altogether, the results serve as a basis for more detailed investigations of subcellular spatial patterns of RNA molecules and other stochastically evolving populations that can only be observed for single instants in their time evolution.
1707.00039
Dan Willard
Dan E. Willard
Implications of the Trivers-Willard Sex Ratio Hypothesis for Avian Species and Poultry Production, And a Summary of the Historic Context of this Research
A 30-minute invited talk summarizing the contents of this article was presented on June 18, 2017 at the Binghamton NEEPS-2017 Conference. The Version-1 June 27 (2017) draft of this paper was similar to the 30-minute talk I gave at Binghamton University, and it was disseminated nine days after this talk. This second draft is more detailed and polished because it contains Section 4's added evidence
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
At a theoretical level, the Trivers-Willard Sex Ratio Hypothesis applies to both avian species and mammals. This article, however, conjectures that at the statistical level, sex ratio effects are likely to produce sharper numerical variations among birds than among mammals. We explain why this greater statistical variation should likely have beneficial implications for increasing the efficiency of world-wide poultry egg (and perhaps also meat) production.
[ { "created": "Tue, 27 Jun 2017 21:50:22 GMT", "version": "v1" }, { "created": "Wed, 30 Aug 2017 16:23:57 GMT", "version": "v2" } ]
2017-08-31
[ [ "Willard", "Dan E.", "" ] ]
At a theoretical level, the Trivers-Willard Sex Ratio Hypothesis applies to both avian species and mammals. This article, however, conjectures that at the statistical level, sex ratio effects are likely to produce sharper numerical variations among birds than among mammals. We explain why this greater statistical variation should likely have beneficial implications for increasing the efficiency of world-wide poultry egg (and perhaps also meat) production.
1705.02380
Dwayne John
Dwayne John
A Nonconventional Analysis of CD$4^{+}$ and CD$8^{+}$ T Cell Responses During and After Acute Lymphocytic Choriomeningitis Virus Infection
3 Figures, 1 table
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A mathematical model from a previous work was re-fitted and analyzed for experimental data regarding the cellular immune response to the lymphocytic choriomeningitis virus. Specifically, the $CD8^{+}$ T cell response to six MHC class I-restricted epitopes (GP* and NP*) and $CD4^{+}$ T cell responses to two MHC class II-restricted epitopes\cite{de2003different}. In this work, we use calibration through log likelihood maximization to investigate if different parameters can produce a more accurate fit of the model presented previously in the paper titled \textit{Different Dynamics of CD$4^{+}$ and CD$8^{+}$ T Cell Responses During and After Acute Lymphocytic Choriomeningitis Virus Infection}\cite{de2003different}.
[ { "created": "Fri, 5 May 2017 19:53:34 GMT", "version": "v1" } ]
2017-05-09
[ [ "John", "Dwayne", "" ] ]
A mathematical model from a previous work was re-fitted and analyzed for experimental data regarding the cellular immune response to the lymphocytic choriomeningitis virus. Specifically, the $CD8^{+}$ T cell response to six MHC class I-restricted epitopes (GP* and NP*) and $CD4^{+}$ T cell responses to two MHC class II-restricted epitopes\cite{de2003different}. In this work, we use calibration through log likelihood maximization to investigate if different parameters can produce a more accurate fit of the model presented previously in the paper titled \textit{Different Dynamics of CD$4^{+}$ and CD$8^{+}$ T Cell Responses During and After Acute Lymphocytic Choriomeningitis Virus Infection}\cite{de2003different}.
1805.03602
Surya Saha
Prashant S. Hosmani, Teresa Shippy, Sherry Miller, Joshua B. Benoit, Monica Munoz-Torres, Mirella Flores, Lukas A. Mueller, Helen Wiersma-Koch, Tom D'elia, Susan J. Brown and Surya Saha
A quick guide for student-driven community genome annotation
null
null
10.1371/journal.pcbi.1006682
null
q-bio.GN
http://creativecommons.org/publicdomain/zero/1.0/
High quality gene models are necessary to expand the molecular and genetic tools available for a target organism, but these are available for only a handful of model organisms that have undergone extensive curation and experimental validation over the course of many years. The majority of gene models present in biological databases today have been identified in draft genome assemblies using automated annotation pipelines that are frequently based on orthologs from distantly related model organisms. Manual curation is time consuming and often requires substantial expertise, but is instrumental in improving gene model structure and identification. Manual annotation may seem to be a daunting and cost-prohibitive task for small research communities but involving undergraduates in community genome annotation consortiums can be mutually beneficial for both education and improved genomic resources. We outline a workflow for efficient manual annotation driven by a team of primarily undergraduate annotators. This model can be scaled to large teams and includes quality control processes through incremental evaluation. Moreover, it gives students an opportunity to increase their understanding of genome biology and to participate in scientific research in collaboration with peers and senior researchers at multiple institutions.
[ { "created": "Wed, 9 May 2018 16:01:11 GMT", "version": "v1" }, { "created": "Tue, 16 Oct 2018 10:04:57 GMT", "version": "v2" } ]
2019-06-19
[ [ "Hosmani", "Prashant S.", "" ], [ "Shippy", "Teresa", "" ], [ "Miller", "Sherry", "" ], [ "Benoit", "Joshua B.", "" ], [ "Munoz-Torres", "Monica", "" ], [ "Flores", "Mirella", "" ], [ "Mueller", "Lukas A.", "" ], [ "Wiersma-Koch", "Helen", "" ], [ "D'elia", "Tom", "" ], [ "Brown", "Susan J.", "" ], [ "Saha", "Surya", "" ] ]
High quality gene models are necessary to expand the molecular and genetic tools available for a target organism, but these are available for only a handful of model organisms that have undergone extensive curation and experimental validation over the course of many years. The majority of gene models present in biological databases today have been identified in draft genome assemblies using automated annotation pipelines that are frequently based on orthologs from distantly related model organisms. Manual curation is time consuming and often requires substantial expertise, but is instrumental in improving gene model structure and identification. Manual annotation may seem to be a daunting and cost-prohibitive task for small research communities but involving undergraduates in community genome annotation consortiums can be mutually beneficial for both education and improved genomic resources. We outline a workflow for efficient manual annotation driven by a team of primarily undergraduate annotators. This model can be scaled to large teams and includes quality control processes through incremental evaluation. Moreover, it gives students an opportunity to increase their understanding of genome biology and to participate in scientific research in collaboration with peers and senior researchers at multiple institutions.
2011.05321
Jean-Fran\c{c}ois Cornet
B\'erang\`ere Farges, C\'eline Laroche, Jean-Fran\c{c}ois Cornet and Claude-Gilles Dussap
Spectral Kinetic Modeling and Long-term Behavior Assessment of Arthrospira platensis Growth in Photobioreactor under Red (620 nm) Light Illumination
null
Biotechnology Progress, 2009, 25, 151-162
10.1021/bp.95
null
q-bio.OT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to cultivate the cyanobacterium Arthrospira platensis in artificially lightened photobioreactors using high energetic efficiency (quasi-monochromatic) red LED was investigated. In order to reach the same maximal productivities as with the polychromatic lightening control conditions (red + blue, P/2e- = 1.275), the need to work with an optimal range of wavelength around 620 nm was first established on batch and continuous cultures. The long-term physiological and kinetic behavior was then verified in a continuous photobioreactor illuminated only with red (620 nm) LED, showing that the maximum productivities can be maintained over 30 residence times with only minor changes in the pigment content of the cells corresponding to a well-known adaptation mechanism of the photosystems, but without any effect on growth and stoichiometry. For both poly and monochromatic incident light inputs, a predictive spectral knowledge model was proposed and validated for the first time, allowing the calculation of the kinetics and stoichiometry observed in any photobioreactor cultivating A. platensis, or other cyanobacteria if the parameters were updated. It is shown that the photon flux (with a specified wavelength) must be used instead of light energy flux as a relevant control variable for the growth. The experimental and theoretical results obtained in this study demonstrate that it is possible to save the energy consumed by the lightening device of photobioreactors using red LED, the spectral range of which is defined according to the action spectrum of photosynthesis. This appears to be crucial information for applications in which the energy must be rationalized, as it is the case for life support systems in closed environments like a permanent spatial base or a submarine.
[ { "created": "Sat, 7 Nov 2020 14:47:21 GMT", "version": "v1" } ]
2020-11-11
[ [ "Farges", "Bérangère", "" ], [ "Laroche", "Céline", "" ], [ "Cornet", "Jean-François", "" ], [ "Dussap", "Claude-Gilles", "" ] ]
The ability to cultivate the cyanobacterium Arthrospira platensis in artificially lightened photobioreactors using high energetic efficiency (quasi-monochromatic) red LED was investigated. In order to reach the same maximal productivities as with the polychromatic lightening control conditions (red + blue, P/2e- = 1.275), the need to work with an optimal range of wavelength around 620 nm was first established on batch and continuous cultures. The long-term physiological and kinetic behavior was then verified in a continuous photobioreactor illuminated only with red (620 nm) LED, showing that the maximum productivities can be maintained over 30 residence times with only minor changes in the pigment content of the cells corresponding to a well-known adaptation mechanism of the photosystems, but without any effect on growth and stoichiometry. For both poly and monochromatic incident light inputs, a predictive spectral knowledge model was proposed and validated for the first time, allowing the calculation of the kinetics and stoichiometry observed in any photobioreactor cultivating A. platensis, or other cyanobacteria if the parameters were updated. It is shown that the photon flux (with a specified wavelength) must be used instead of light energy flux as a relevant control variable for the growth. The experimental and theoretical results obtained in this study demonstrate that it is possible to save the energy consumed by the lightening device of photobioreactors using red LED, the spectral range of which is defined according to the action spectrum of photosynthesis. This appears to be crucial information for applications in which the energy must be rationalized, as it is the case for life support systems in closed environments like a permanent spatial base or a submarine.
q-bio/0604014
Mariano Cadoni
M. Cadoni, R. De Leo, G. Gaeta
A composite model for DNA torsion dynamics
29 pages
Phys. Rev. E75 (2007), 021919
10.1103/PhysRevE.75.021919
null
q-bio.BM physics.bio-ph
null
DNA torsion dynamics is essential in the transcription process; a simple model for it, in reasonable agreement with experimental observations, has been proposed by Yakushevich (Y) and developed by several authors; in this, the DNA subunits made of a nucleoside and the attached nitrogen bases are described by a single degree of freedom. In this paper we propose and investigate, both analytically and numerically, a ``composite'' version of the Y model, in which the nucleoside and the base are described by separate degrees of freedom. The model proposed here contains as a particular case the Y model and shares with it many features and results, but represents an improvement from both the conceptual and the phenomenological point of view. It provides a more realistic description of DNA and possibly a justification for the use of models which consider the DNA chain as uniform. It shows that the existence of solitons is a generic feature of the underlying nonlinear dynamics and is to a large extent independent of the detailed modelling of DNA. The model we consider supports solitonic solutions, qualitatively and quantitatively very similar to the Y solitons, in a fully realistic range of all the physical parameters characterizing the DNA.
[ { "created": "Wed, 12 Apr 2006 13:42:47 GMT", "version": "v1" }, { "created": "Thu, 20 Apr 2006 17:33:18 GMT", "version": "v2" } ]
2009-11-13
[ [ "Cadoni", "M.", "" ], [ "De Leo", "R.", "" ], [ "Gaeta", "G.", "" ] ]
DNA torsion dynamics is essential in the transcription process; a simple model for it, in reasonable agreement with experimental observations, has been proposed by Yakushevich (Y) and developed by several authors; in this, the DNA subunits made of a nucleoside and the attached nitrogen bases are described by a single degree of freedom. In this paper we propose and investigate, both analytically and numerically, a ``composite'' version of the Y model, in which the nucleoside and the base are described by separate degrees of freedom. The model proposed here contains as a particular case the Y model and shares with it many features and results, but represents an improvement from both the conceptual and the phenomenological point of view. It provides a more realistic description of DNA and possibly a justification for the use of models which consider the DNA chain as uniform. It shows that the existence of solitons is a generic feature of the underlying nonlinear dynamics and is to a large extent independent of the detailed modelling of DNA. The model we consider supports solitonic solutions, qualitatively and quantitatively very similar to the Y solitons, in a fully realistic range of all the physical parameters characterizing the DNA.
2403.03231
Mikhail Dozmorov
Brydon P. G. Wall, My Nguyen, J. Chuck Harrell, Mikhail G. Dozmorov
Machine and deep learning methods for predicting 3D genome organization
Systematic review, one figure, three tables, 29 pages
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by/4.0/
Three-Dimensional (3D) chromatin interactions, such as enhancer-promoter interactions (EPIs), loops, Topologically Associating Domains (TADs), and A/B compartments play critical roles in a wide range of cellular processes by regulating gene expression. Recent development of chromatin conformation capture technologies has enabled genome-wide profiling of various 3D structures, even with single cells. However, current catalogs of 3D structures remain incomplete and unreliable due to differences in technology, tools, and low data resolution. Machine learning methods have emerged as an alternative to obtain missing 3D interactions and/or improve resolution. Such methods frequently use genome annotation data (ChIP-seq, DNase-seq, etc.), DNA sequencing information (k-mers, Transcription Factor Binding Site (TFBS) motifs), and other genomic properties to learn the associations between genomic features and chromatin interactions. In this review, we discuss computational tools for predicting three types of 3D interactions (EPIs, chromatin interactions, TAD boundaries) and analyze their pros and cons. We also point out obstacles of computational prediction of 3D interactions and suggest future research directions.
[ { "created": "Mon, 4 Mar 2024 19:04:41 GMT", "version": "v1" } ]
2024-03-07
[ [ "Wall", "Brydon P. G.", "" ], [ "Nguyen", "My", "" ], [ "Harrell", "J. Chuck", "" ], [ "Dozmorov", "Mikhail G.", "" ] ]
Three-Dimensional (3D) chromatin interactions, such as enhancer-promoter interactions (EPIs), loops, Topologically Associating Domains (TADs), and A/B compartments play critical roles in a wide range of cellular processes by regulating gene expression. Recent development of chromatin conformation capture technologies has enabled genome-wide profiling of various 3D structures, even with single cells. However, current catalogs of 3D structures remain incomplete and unreliable due to differences in technology, tools, and low data resolution. Machine learning methods have emerged as an alternative to obtain missing 3D interactions and/or improve resolution. Such methods frequently use genome annotation data (ChIP-seq, DNase-seq, etc.), DNA sequencing information (k-mers, Transcription Factor Binding Site (TFBS) motifs), and other genomic properties to learn the associations between genomic features and chromatin interactions. In this review, we discuss computational tools for predicting three types of 3D interactions (EPIs, chromatin interactions, TAD boundaries) and analyze their pros and cons. We also point out obstacles of computational prediction of 3D interactions and suggest future research directions.
1111.4785
Ivo Sbalzarini
Christian L. Muller, Rajesh Ramaswamy, Ivo F. Sbalzarini
Global parameter identification of stochastic reaction networks from single trajectories
Article in print as a book chapter in Springer's "Advances in Systems Biology"
null
null
null
q-bio.MN cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of inferring the unknown parameters of a stochastic biochemical network model from a single measured time-course of the concentration of some of the involved species. Such measurements are available, e.g., from live-cell fluorescence microscopy in image-based systems biology. In addition, fluctuation time-courses from, e.g., fluorescence correlation spectroscopy provide additional information about the system dynamics that can be used to more robustly infer parameters than when considering only mean concentrations. Estimating model parameters from a single experimental trajectory enables single-cell measurements and quantification of cell--cell variability. We propose a novel combination of an adaptive Monte Carlo sampler, called Gaussian Adaptation, and efficient exact stochastic simulation algorithms that allows parameter identification from single stochastic trajectories. We benchmark the proposed method on a linear and a non-linear reaction network at steady state and during transient phases. In addition, we demonstrate that the present method also provides an ellipsoidal volume estimate of the viable part of parameter space and is able to estimate the physical volume of the compartment in which the observed reactions take place.
[ { "created": "Mon, 21 Nov 2011 08:30:29 GMT", "version": "v1" } ]
2011-11-22
[ [ "Muller", "Christian L.", "" ], [ "Ramaswamy", "Rajesh", "" ], [ "Sbalzarini", "Ivo F.", "" ] ]
We consider the problem of inferring the unknown parameters of a stochastic biochemical network model from a single measured time-course of the concentration of some of the involved species. Such measurements are available, e.g., from live-cell fluorescence microscopy in image-based systems biology. In addition, fluctuation time-courses from, e.g., fluorescence correlation spectroscopy provide additional information about the system dynamics that can be used to more robustly infer parameters than when considering only mean concentrations. Estimating model parameters from a single experimental trajectory enables single-cell measurements and quantification of cell--cell variability. We propose a novel combination of an adaptive Monte Carlo sampler, called Gaussian Adaptation, and efficient exact stochastic simulation algorithms that allows parameter identification from single stochastic trajectories. We benchmark the proposed method on a linear and a non-linear reaction network at steady state and during transient phases. In addition, we demonstrate that the present method also provides an ellipsoidal volume estimate of the viable part of parameter space and is able to estimate the physical volume of the compartment in which the observed reactions take place.
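The exact-stochastic-simulation ingredient mentioned in the abstract above can be illustrated with a minimal Gillespie SSA for a toy birth-death network. This is a sketch under stated assumptions: the network (0 -> X at rate k1, X -> 0 at rate k2*x), the rate values, and the seed are illustrative, not the benchmark systems or the Gaussian Adaptation sampler from the paper.

```python
import random

def gillespie_birth_death(k1, k2, x0, t_end, rng):
    """Exact SSA trajectory for the toy network 0 -> X (rate k1), X -> 0 (rate k2*x)."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a_birth, a_death = k1, k2 * x
        a_total = a_birth + a_death            # k1 > 0, so a_total > 0 always
        t += rng.expovariate(a_total)          # exponential waiting time to next event
        if rng.random() * a_total < a_birth:   # pick reaction proportionally to propensity
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return times, states

rng = random.Random(1)
times, states = gillespie_birth_death(k1=10.0, k2=1.0, x0=0, t_end=500.0, rng=rng)

# Time average of the copy number; at stationarity X is Poisson with mean k1/k2 = 10.
time_avg = sum(s * (t1 - t0)
               for s, t0, t1 in zip(states, times, times[1:])) / times[-1]
```

A single such trajectory (rather than ensemble means) is exactly the kind of data the paper's inference scheme consumes.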
2407.02440
Jose Fontanari
Jos\'e F. Fontanari and Mauro Santos
Solving the prisoner's dilemma trap in Hamilton's model of temporarily formed random groups
null
null
null
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explaining the evolution of cooperation in the strong altruism scenario, where a cooperator does not benefit from her contribution to the public goods, is a challenging problem that requires positive assortment among cooperators (i.e., cooperators must tend to associate with other cooperators) or punishment of defectors. The need for these drastic measures stems from the analysis of a group selection model of temporarily formed random groups introduced by Hamilton nearly fifty years ago to describe the fate of altruistic behavior in a population. Challenging conventional wisdom, we show analytically here that strong altruism evolves in Hamilton's original model in the case of biparental sexual reproduction. Moreover, when the cost of cooperation is small and the amplified contribution shared by group members is large, cooperation is the only stable strategy in equilibrium. Thus, our results provide a solution to the `problem of origination' of strong altruism, i.e. how cooperation can take off from an initial low frequency of cooperators. We discuss a possible reassessment of cooperation in cases of viral co-infection, as cooperation may even be favored in situations where the prisoner's dilemma applies.
[ { "created": "Tue, 2 Jul 2024 17:15:34 GMT", "version": "v1" } ]
2024-07-03
[ [ "Fontanari", "José F.", "" ], [ "Santos", "Mauro", "" ] ]
Explaining the evolution of cooperation in the strong altruism scenario, where a cooperator does not benefit from her contribution to the public goods, is a challenging problem that requires positive assortment among cooperators (i.e., cooperators must tend to associate with other cooperators) or punishment of defectors. The need for these drastic measures stems from the analysis of a group selection model of temporarily formed random groups introduced by Hamilton nearly fifty years ago to describe the fate of altruistic behavior in a population. Challenging conventional wisdom, we show analytically here that strong altruism evolves in Hamilton's original model in the case of biparental sexual reproduction. Moreover, when the cost of cooperation is small and the amplified contribution shared by group members is large, cooperation is the only stable strategy in equilibrium. Thus, our results provide a solution to the `problem of origination' of strong altruism, i.e. how cooperation can take off from an initial low frequency of cooperators. We discuss a possible reassessment of cooperation in cases of viral co-infection, as cooperation may even be favored in situations where the prisoner's dilemma applies.
2101.05563
Giulia Laura Celora
Giulia L. Celora, Helen M. Byrne, Christos Zois, Panos G. Kevrekidis
Phenotypic variation modulates the growth dynamics and response to radiotherapy of solid tumours under normoxia and hypoxia
null
null
null
null
q-bio.CB nlin.PS physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In cancer, treatment failure and disease recurrence have been associated with small subpopulations of cancer cells with a stem-like phenotype. In this paper, we develop and investigate a phenotype-structured model of solid tumour growth in which cells are structured by a stemness level, which varies continuously between stem-like and terminally differentiated behaviours. Cell evolution is driven by proliferation and apoptosis, as well as advection and diffusion with respect to the stemness structure variable. We use the model to investigate how the environment, in particular oxygen levels, affects the tumour's population dynamics and composition, and its response to radiotherapy. We use a combination of numerical and analytical techniques to quantify how under physiological oxygen levels the cells evolve to a differentiated phenotype and under low oxygen levels (i.e., hypoxia) they de-differentiate. Under normoxia, the proportion of cancer stem cells is typically negligible and the tumour may ultimately become extinct, whereas under hypoxia cancer stem cells comprise a dominant proportion of the tumour volume, enhancing radio-resistance and favouring the tumour's long-term survival. We then investigate how such phenotypic heterogeneity impacts the tumour's response to treatment with radiotherapy under normoxia and hypoxia. Of particular interest is establishing how the presence of radio-resistant cancer stem cells can facilitate a tumour's regrowth following radiotherapy. We also use the model to show how radiation-induced changes in tumour oxygen levels can give rise to complex re-growth dynamics. For example, transient periods of hypoxia induced by damage to tumour blood vessels may rescue the cancer cell population from extinction and drive secondary regrowth. Further model extensions to account for spatial variation are also discussed briefly.
[ { "created": "Thu, 14 Jan 2021 12:10:53 GMT", "version": "v1" } ]
2021-01-15
[ [ "Celora", "Giulia L.", "" ], [ "Byrne", "Helen M.", "" ], [ "Zois", "Christos", "" ], [ "Kevrekidis", "Panos G.", "" ] ]
In cancer, treatment failure and disease recurrence have been associated with small subpopulations of cancer cells with a stem-like phenotype. In this paper, we develop and investigate a phenotype-structured model of solid tumour growth in which cells are structured by a stemness level, which varies continuously between stem-like and terminally differentiated behaviours. Cell evolution is driven by proliferation and apoptosis, as well as advection and diffusion with respect to the stemness structure variable. We use the model to investigate how the environment, in particular oxygen levels, affects the tumour's population dynamics and composition, and its response to radiotherapy. We use a combination of numerical and analytical techniques to quantify how under physiological oxygen levels the cells evolve to a differentiated phenotype and under low oxygen levels (i.e., hypoxia) they de-differentiate. Under normoxia, the proportion of cancer stem cells is typically negligible and the tumour may ultimately become extinct, whereas under hypoxia cancer stem cells comprise a dominant proportion of the tumour volume, enhancing radio-resistance and favouring the tumour's long-term survival. We then investigate how such phenotypic heterogeneity impacts the tumour's response to treatment with radiotherapy under normoxia and hypoxia. Of particular interest is establishing how the presence of radio-resistant cancer stem cells can facilitate a tumour's regrowth following radiotherapy. We also use the model to show how radiation-induced changes in tumour oxygen levels can give rise to complex re-growth dynamics. For example, transient periods of hypoxia induced by damage to tumour blood vessels may rescue the cancer cell population from extinction and drive secondary regrowth. Further model extensions to account for spatial variation are also discussed briefly.
0707.2300
Hiroshi Fujisaki
Hiroshi Fujisaki, John E. Straub
Vibrational energy relaxation (VER) of isotopically labeled amide I modes in cytochrome c: Theoretical investigation of VER rates and pathways
10 pages, 5 figures, to be published in J. Phys. Chem. B
null
null
null
q-bio.BM
null
Using time-dependent perturbation theory, vibrational energy relaxation (VER) of isotopically labeled amide I modes in cytochrome c solvated with water is investigated. Contributions to the VER are decomposed into two contributions, from the protein and from the water. The VER pathways are visualized using radial and angular excitation functions for resonant normal modes. Key differences of VER among different amide I modes are demonstrated, leading to a detailed picture of the spatial anisotropy of the VER. The results support the experimental observation that amide I modes in proteins relax on sub-picosecond timescales, while the relaxation mechanism turns out to be sensitive to the environment of the amide I mode.
[ { "created": "Mon, 16 Jul 2007 11:04:29 GMT", "version": "v1" } ]
2007-07-17
[ [ "Fujisaki", "Hiroshi", "" ], [ "Straub", "John E.", "" ] ]
Using time-dependent perturbation theory, vibrational energy relaxation (VER) of isotopically labeled amide I modes in cytochrome c solvated with water is investigated. Contributions to the VER are decomposed into two contributions, from the protein and from the water. The VER pathways are visualized using radial and angular excitation functions for resonant normal modes. Key differences of VER among different amide I modes are demonstrated, leading to a detailed picture of the spatial anisotropy of the VER. The results support the experimental observation that amide I modes in proteins relax on sub-picosecond timescales, while the relaxation mechanism turns out to be sensitive to the environment of the amide I mode.
0803.0465
Giulia Menconi
Giulia Menconi, Vieri Benci, Marcello Buiatti
Data compression and genomes: a two dimensional life domain map
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We define the complexity of DNA sequences as the information content per nucleotide, calculated by means of some Lempel-Ziv data compression algorithm. It is possible to use the statistics of the complexity values of the functional regions of different complete genomes to distinguish among genomes of different domains of life (Archaea, Bacteria and Eukarya). We shall focus on the distribution function of the complexity of noncoding regions. We show that the three domains may be plotted in separate regions within the two-dimensional space where the axes are the skewness coefficient and the kurtosis coefficient of the aforementioned distribution. Preliminary results on 15 genomes are presented.
[ { "created": "Tue, 4 Mar 2008 14:47:36 GMT", "version": "v1" } ]
2008-03-05
[ [ "Menconi", "Giulia", "" ], [ "Benci", "Vieri", "" ], [ "Buiatti", "Marcello", "" ] ]
We define the complexity of DNA sequences as the information content per nucleotide, calculated by means of some Lempel-Ziv data compression algorithm. It is possible to use the statistics of the complexity values of the functional regions of different complete genomes to distinguish among genomes of different domains of life (Archaea, Bacteria and Eukarya). We shall focus on the distribution function of the complexity of noncoding regions. We show that the three domains may be plotted in separate regions within the two-dimensional space where the axes are the skewness coefficient and the kurtosis coefficient of the aforementioned distribution. Preliminary results on 15 genomes are presented.
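The pipeline described in the abstract above can be sketched end to end: an LZ78-style phrase count as a stand-in for the compressor (the paper does not specify this exact variant), a per-nucleotide information estimate, and the skewness/kurtosis coordinates of a set of complexity values. The scoring formula and the test sequences are illustrative assumptions.

```python
import math
import random

def lz78_phrase_count(seq):
    """Number of distinct phrases in an incremental LZ78-style parse of seq."""
    phrases, current = set(), ""
    for ch in seq:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def info_per_symbol(seq):
    """Rough information content per symbol, c*log2(c)/n bits,
    the standard LZ-based complexity estimate."""
    c = lz78_phrase_count(seq)
    return c * math.log2(c) / len(seq)

def central_moment(xs, k):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** k for x in xs) / len(xs)

def skewness(xs):
    return central_moment(xs, 3) / central_moment(xs, 2) ** 1.5

def kurtosis(xs):
    return central_moment(xs, 4) / central_moment(xs, 2) ** 2

rng = random.Random(0)
repetitive = "ACGT" * 250                                    # highly compressible
random_dna = "".join(rng.choice("ACGT") for _ in range(1000))  # near-incompressible
```

A repetitive sequence compresses well and so scores a lower information content per nucleotide than a random sequence of the same length; the (skewness, kurtosis) pair of many such scores gives one point in the paper's two-dimensional domain map.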
1810.05823
Delfim F. M. Torres
Ana P. Lemos-Paiao, Cristiana J. Silva, Delfim F. M. Torres
A cholera mathematical model with vaccination and the biggest outbreak of world's history
This is a preprint of a paper whose final and definite form is with 'AIMS Mathematics', available in open access from [http://www.aimspress.com/journal/Math]. Submitted 7-July-2018; Revised 14-Sept-2018; Accepted 12-Oct-2018. arXiv admin note: substantial text overlap with arXiv:1611.02195
AIMS Mathematics 3 (2018), no. 4, 448--463
10.3934/Math.2018.4.448
null
q-bio.PE math.CA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose and analyse a mathematical model for cholera considering vaccination. We show that the model is epidemiologically and mathematically well posed and prove the existence and uniqueness of disease-free and endemic equilibrium points. The basic reproduction number is determined and the local asymptotic stability of equilibria is studied. The biggest cholera outbreak in the world's history began on 27th April 2017, in Yemen. Between 27th April 2017 and 15th April 2018 there were 2275 deaths due to this epidemic. A vaccination campaign began on 6th May 2018 and ended on 15th May 2018. We show that our model is able to describe this outbreak well. Moreover, we prove that the number of infected individuals would have been much lower provided the vaccination campaign had begun earlier.
[ { "created": "Sat, 13 Oct 2018 08:48:27 GMT", "version": "v1" } ]
2018-10-22
[ [ "Lemos-Paiao", "Ana P.", "" ], [ "Silva", "Cristiana J.", "" ], [ "Torres", "Delfim F. M.", "" ] ]
We propose and analyse a mathematical model for cholera considering vaccination. We show that the model is epidemiologically and mathematically well posed and prove the existence and uniqueness of disease-free and endemic equilibrium points. The basic reproduction number is determined and the local asymptotic stability of equilibria is studied. The biggest cholera outbreak in the world's history began on 27th April 2017, in Yemen. Between 27th April 2017 and 15th April 2018 there were 2275 deaths due to this epidemic. A vaccination campaign began on 6th May 2018 and ended on 15th May 2018. We show that our model is able to describe this outbreak well. Moreover, we prove that the number of infected individuals would have been much lower provided the vaccination campaign had begun earlier.
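The qualitative claim in the abstract above, that earlier or stronger vaccination lowers the infected count, can be sketched numerically with a much-simplified SIR model plus a constant vaccination rate. This is not the paper's model (which also tracks treatment and bacterial concentration), and every parameter value below is hypothetical.

```python
def simulate_sir_vax(beta=0.5, gamma=0.2, v=0.0, N=10_000, I0=10,
                     dt=0.05, t_end=300.0):
    """Forward-Euler integration of the simplified system
       S' = -beta*S*I/N - v*S,  I' = beta*S*I/N - gamma*I,  R' = gamma*I + v*S,
    where v is a per-capita vaccination rate. Returns the epidemic peak of I."""
    S, I, R = N - I0, float(I0), 0.0
    peak = I
    for _ in range(int(t_end / dt)):
        new_inf = beta * S * I / N
        dS = -new_inf - v * S
        dI = new_inf - gamma * I
        dR = gamma * I + v * S
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
        peak = max(peak, I)
    return peak

R0 = 0.5 / 0.2          # basic reproduction number beta/gamma of this toy model
peak_no_vax = simulate_sir_vax(v=0.0)
peak_vax = simulate_sir_vax(v=0.05)
```

Draining susceptibles through vaccination lowers the force of infection at every time, so the epidemic peak with v > 0 sits strictly below the unvaccinated one, mirroring the paper's conclusion about an earlier campaign.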
1401.1129
Brent Pedersen
Brent S. Pedersen, Kenneth Eyring, Subhajyoti De, Ivana V. Yang and David A. Schwartz
Fast and accurate alignment of long bisulfite-seq reads
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Summary: Longer sequencing reads, with at least 200 bases per template, are now common. While traditional aligners have adopted new strategies to improve the mapping of longer reads, aligners specific to bisulfite-sequencing were optimized when much shorter reads were the norm. We sought to perform the first comparison using longer reads to determine which aligners were most accurate and efficient and to evaluate a novel software tool, bwa-meth, built on a traditional mapper that supports insertions, deletions and clipped alignments. We gauge accuracy by comparing the number of on- and off-target reads from a targeted sequencing project and by simulations. Availability and Implementation: The benchmarking scripts and the bwa-meth software are available at https://github.com/brentp/bwa-meth/ under the MIT License.
[ { "created": "Mon, 6 Jan 2014 16:08:04 GMT", "version": "v1" }, { "created": "Tue, 13 May 2014 15:02:10 GMT", "version": "v2" } ]
2014-05-14
[ [ "Pedersen", "Brent S.", "" ], [ "Eyring", "Kenneth", "" ], [ "De", "Subhajyoti", "" ], [ "Yang", "Ivana V.", "" ], [ "Schwartz", "David A.", "" ] ]
Summary: Longer sequencing reads, with at least 200 bases per template, are now common. While traditional aligners have adopted new strategies to improve the mapping of longer reads, aligners specific to bisulfite-sequencing were optimized when much shorter reads were the norm. We sought to perform the first comparison using longer reads to determine which aligners were most accurate and efficient and to evaluate a novel software tool, bwa-meth, built on a traditional mapper that supports insertions, deletions and clipped alignments. We gauge accuracy by comparing the number of on- and off-target reads from a targeted sequencing project and by simulations. Availability and Implementation: The benchmarking scripts and the bwa-meth software are available at https://github.com/brentp/bwa-meth/ under the MIT License.
2302.11438
Georgiy Karev
Georgiy Karev
On the scope of applicability of the models of Darwinian dynamics
39 pages, 11 figures
null
null
null
q-bio.PE
http://creativecommons.org/publicdomain/zero/1.0/
In their well-known textbook (Vincent & Brown, 2005), Vincent and Brown suggested an attractive approach for studying evolutionary dynamics of populations that are heterogeneous with respect to some strategy that affects the fitness of individuals in the population. The authors developed a theory, whose goal was to expand the applicability of mathematical models of population dynamics by including dynamics of an evolving heritable phenotype trait subject to natural selection. The authors studied both the case of evolution of individual traits and of mean traits in the population (or species) and the dynamics of total population size. The authors consider the developed approach as (more or less) universally applicable to models with any fitness function and any initial distribution of strategies, which is symmetric and has small variance. Here it is shown that the scope of the approach proposed by Vincent & Brown is unfortunately much more limited. I show that the approach gives exact results only if the population dynamics linearly depends on the trait; examples where the approach is incorrect are given.
[ { "created": "Wed, 22 Feb 2023 15:22:38 GMT", "version": "v1" } ]
2023-02-23
[ [ "Karev", "Georgiy", "" ] ]
In their well-known textbook (Vincent & Brown, 2005), Vincent and Brown suggested an attractive approach for studying evolutionary dynamics of populations that are heterogeneous with respect to some strategy that affects the fitness of individuals in the population. The authors developed a theory, whose goal was to expand the applicability of mathematical models of population dynamics by including dynamics of an evolving heritable phenotype trait subject to natural selection. The authors studied both the case of evolution of individual traits and of mean traits in the population (or species) and the dynamics of total population size. The authors consider the developed approach as (more or less) universally applicable to models with any fitness function and any initial distribution of strategies, which is symmetric and has small variance. Here it is shown that the scope of the approach proposed by Vincent & Brown is unfortunately much more limited. I show that the approach gives exact results only if the population dynamics linearly depends on the trait; examples where the approach is incorrect are given.
0808.1283
Razvan Radulescu M.D.
Razvan Tudor Radulescu
Peptide strings clues to the genesis and treatment of rheumatoid arthritis: rebuilding self-protective immunity amid fungal ruins
15 pages, 3 figures, 3 tables
null
null
null
q-bio.SC q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A recent application of the peptide strings concept has yielded novel perceptions on cell growth regulation, for instance that of oncoprotein metastasis. Here, this interdisciplinary approach at the boundary between physics and biology has been applied to gain a more profound insight into rheumatoid arthritis. As a result of the present investigation, this disease could be viewed as due to a metabolic dysregulation/syndrome-associated breakdown in the immunoglobulin A-based surveillance of the potentially pathogenic fungus Candida albicans that subsequently engenders a widespread self-destruction through cross-reactive auto-epitopes, ultimately amounting to the systemic predominance of a pro-inflammatory peptide string. Its therapeutic counterpart equally proposed in this report might serve as a model for future strategies against autoimmunity.
[ { "created": "Sun, 10 Aug 2008 23:03:29 GMT", "version": "v1" } ]
2008-08-12
[ [ "Radulescu", "Razvan Tudor", "" ] ]
A recent application of the peptide strings concept has yielded novel perceptions on cell growth regulation, for instance that of oncoprotein metastasis. Here, this interdisciplinary approach at the boundary between physics and biology has been applied to gain a more profound insight into rheumatoid arthritis. As a result of the present investigation, this disease could be viewed as due to a metabolic dysregulation/syndrome-associated breakdown in the immunoglobulin A-based surveillance of the potentially pathogenic fungus Candida albicans that subsequently engenders a widespread self-destruction through cross-reactive auto-epitopes, ultimately amounting to the systemic predominance of a pro-inflammatory peptide string. Its therapeutic counterpart equally proposed in this report might serve as a model for future strategies against autoimmunity.
1709.03630
Alex McAvoy
Alex McAvoy, Nicolas Fraiman, Christoph Hauert, John Wakeley, Martin A. Nowak
Public goods games in populations with fluctuating size
21 pages; final version
null
10.1016/j.tpb.2018.01.004
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many mathematical frameworks of evolutionary game dynamics assume that the total population size is constant and that selection affects only the relative frequency of strategies. Here, we consider evolutionary game dynamics in an extended Wright-Fisher process with variable population size. In such a scenario, it is possible that the entire population becomes extinct. Survival of the population may depend on which strategy prevails in the game dynamics. In the cooperative dilemmas we study, it is a natural feature of such a model that cooperators enable survival, while defectors drive extinction. Although defectors are favored for any mixed population, random drift could lead to their elimination and the resulting pure-cooperator population could survive. On the other hand, if the defectors remain, then the population will quickly go extinct because the frequency of cooperators steadily declines and defectors alone cannot survive. In a mutation-selection model, we find that (i) a steady supply of cooperators can enable long-term population survival, provided selection is sufficiently strong, and (ii) selection can increase the abundance of cooperators but reduce their relative frequency. Thus, evolutionary game dynamics in populations with variable size generate a multifaceted notion of what constitutes a trait's long-term success.
[ { "created": "Tue, 12 Sep 2017 00:10:01 GMT", "version": "v1" }, { "created": "Thu, 15 Feb 2018 04:37:42 GMT", "version": "v2" } ]
2018-02-16
[ [ "McAvoy", "Alex", "" ], [ "Fraiman", "Nicolas", "" ], [ "Hauert", "Christoph", "" ], [ "Wakeley", "John", "" ], [ "Nowak", "Martin A.", "" ] ]
Many mathematical frameworks of evolutionary game dynamics assume that the total population size is constant and that selection affects only the relative frequency of strategies. Here, we consider evolutionary game dynamics in an extended Wright-Fisher process with variable population size. In such a scenario, it is possible that the entire population becomes extinct. Survival of the population may depend on which strategy prevails in the game dynamics. In the cooperative dilemmas we study, it is a natural feature of such a model that cooperators enable survival, while defectors drive extinction. Although defectors are favored for any mixed population, random drift could lead to their elimination and the resulting pure-cooperator population could survive. On the other hand, if the defectors remain, then the population will quickly go extinct because the frequency of cooperators steadily declines and defectors alone cannot survive. In a mutation-selection model, we find that (i) a steady supply of cooperators can enable long-term population survival, provided selection is sufficiently strong, and (ii) selection can increase the abundance of cooperators but reduce their relative frequency. Thus, evolutionary game dynamics in populations with variable size generate a multifaceted notion of what constitutes a trait's long-term success.
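The flavor of the dynamics in the abstract above, that cooperators sustain the population while defectors alone go extinct, can be sketched with a branching-process toy model in which reproduction is payoff-dependent and the population size fluctuates. The payoffs, baseline fecundity, and population cap below are illustrative assumptions, not the paper's extended Wright-Fisher process.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for small rates."""
    if lam <= 0.0:
        return 0
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def generation(nc, nd, b, c, base, cap, rng):
    """One generation: each individual leaves a Poisson number of offspring whose
    mean depends on the shared public good, so N fluctuates and may hit zero."""
    n = nc + nd
    if n == 0:
        return 0, 0
    share = b * nc / n                  # amplified contribution per capita
    fc = max(0.0, base + share - c)     # cooperators pay the cost c
    fd = base + share                   # defectors free-ride
    nc2 = sum(poisson(fc, rng) for _ in range(nc))
    nd2 = sum(poisson(fd, rng) for _ in range(nd))
    total = nc2 + nd2
    if total > cap:                     # crude resource cap to bound growth
        nc2 = nc2 * cap // total
        nd2 = nd2 * cap // total
    return nc2, nd2

rng = random.Random(7)
# Defectors alone are subcritical (mean offspring base < 1) and die out...
nc, nd = 0, 20
for _ in range(500):
    nc, nd = generation(nc, nd, b=3.0, c=1.0, base=0.9, cap=2000, rng=rng)
defectors_alone = nc + nd

# ...while an all-cooperator population is supercritical and persists.
nc, nd = 20, 0
for _ in range(200):
    nc, nd = generation(nc, nd, b=3.0, c=1.0, base=0.9, cap=2000, rng=rng)
cooperators_alone = nc + nd
```

With these numbers the defector-only branching process has mean offspring 0.9 and is essentially certain to go extinct, while the cooperator-only process has mean 2.9 and survives: the variable population size, not the frequency dynamics, decides the outcome.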
1306.5355
Davit Potoyan
D. A. Potoyan and P. G. Wolynes
On the dephasing of genetic oscillations
null
null
10.1073/pnas.1323433111
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The digital nature of genes combined with the associated low copy numbers of proteins regulating them is a significant source of stochasticity, which affects the phase of biochemical oscillations. We provide a theoretical framework for understanding the dephasing evolution of genetic oscillations by combining the phenomenological stochastic limit cycle dynamics and the discrete Markov state models that describe the genetic oscillations. Through simulations of the realistic model of the NF\kappa B/I\kappa B network we illustrate the dephasing phenomena which are important for reconciling single cell and population based experiments on this system.
[ { "created": "Sat, 22 Jun 2013 21:26:38 GMT", "version": "v1" } ]
2014-06-03
[ [ "Potoyan", "D. A.", "" ], [ "Wolynes", "P. G.", "" ] ]
The digital nature of genes combined with the associated low copy numbers of proteins regulating them is a significant source of stochasticity, which affects the phase of biochemical oscillations. We provide a theoretical framework for understanding the dephasing evolution of genetic oscillations by combining the phenomenological stochastic limit cycle dynamics and the discrete Markov state models that describe the genetic oscillations. Through simulations of the realistic model of the NF\kappa B/I\kappa B network we illustrate the dephasing phenomena which are important for reconciling single cell and population based experiments on this system.
2206.06035
Ulderico Fugacci
Luca Gagliardi, Andrea Raffo, Ulderico Fugacci, Silvia Biasotti, Walter Rocchia, Hao Huang, Boulbaba Ben Amor, Yi Fang, Yuanyuan Zhang, Xiao Wang, Charles Christoffer, Daisuke Kihara, Apostolos Axenopoulos, Stelios Mylonas, Petros Daras
SHREC 2022: Protein-ligand binding site recognition
null
Computers & Graphics 107 (2022) 20-31
10.1016/j.cag.2022.07.005
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents the methods that have participated in the SHREC 2022 contest on protein-ligand binding site recognition. The prediction of protein-ligand binding regions is an active research domain in computational biophysics and structural biology and plays a relevant role for molecular docking and drug design. The goal of the contest is to assess the effectiveness of computational methods in recognizing ligand binding sites in a protein based on its geometrical structure. Performances of the segmentation algorithms are analyzed according to two evaluation scores describing the capacity of a putative pocket to contact a ligand and to pinpoint the correct binding region. Although some methods perform remarkably well, we show that simple non-machine-learning approaches remain very competitive against data-driven algorithms. In general, the task of pocket detection remains a challenging learning problem which suffers from intrinsic difficulties due to the lack of negative examples (data imbalance problem).
[ { "created": "Mon, 13 Jun 2022 10:43:32 GMT", "version": "v1" }, { "created": "Tue, 14 Jun 2022 14:48:35 GMT", "version": "v2" }, { "created": "Sat, 2 Jul 2022 08:06:13 GMT", "version": "v3" }, { "created": "Wed, 24 Aug 2022 13:06:46 GMT", "version": "v4" } ]
2023-08-10
[ [ "Gagliardi", "Luca", "" ], [ "Raffo", "Andrea", "" ], [ "Fugacci", "Ulderico", "" ], [ "Biasotti", "Silvia", "" ], [ "Rocchia", "Walter", "" ], [ "Huang", "Hao", "" ], [ "Amor", "Boulbaba Ben", "" ], [ "Fang", "Yi", "" ], [ "Zhang", "Yuanyuan", "" ], [ "Wang", "Xiao", "" ], [ "Christoffer", "Charles", "" ], [ "Kihara", "Daisuke", "" ], [ "Axenopoulos", "Apostolos", "" ], [ "Mylonas", "Stelios", "" ], [ "Daras", "Petros", "" ] ]
This paper presents the methods that have participated in the SHREC 2022 contest on protein-ligand binding site recognition. The prediction of protein-ligand binding regions is an active research domain in computational biophysics and structural biology and plays a relevant role for molecular docking and drug design. The goal of the contest is to assess the effectiveness of computational methods in recognizing ligand binding sites in a protein based on its geometrical structure. Performances of the segmentation algorithms are analyzed according to two evaluation scores describing the capacity of a putative pocket to contact a ligand and to pinpoint the correct binding region. Although some methods perform remarkably well, we show that simple non-machine-learning approaches remain very competitive against data-driven algorithms. In general, the task of pocket detection remains a challenging learning problem which suffers from intrinsic difficulties due to the lack of negative examples (data imbalance problem).
1608.00535
David Murrugarra
David Murrugarra, Jacob Miller, and Alex Mueller
Estimating Propensity Parameters using Google PageRank and Genetic Algorithms
20 pages, 4 figures, 2 tables
Frontiers in Neuroscience, 10:513, 2016
10.3389/fnins.2016.00513
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic Boolean networks, or more generally, stochastic discrete networks, are an important class of computational models for molecular interaction networks. The stochasticity stems from the updating schedule. Standard updating schedules include the synchronous update, where all the nodes are updated at the same time, and the asynchronous update, where a random node is updated at each time step. The former produces a deterministic dynamics while the latter a stochastic dynamics. A more general stochastic setting considers propensity parameters for updating each node. Stochastic Discrete Dynamical Systems (SDDS) is a modeling framework that considers two propensity parameters for updating each node: one is used when the update has a positive impact on the variable, that is, when the update causes the variable to increase its value, and the other when the update has a negative impact, that is, when the update causes it to decrease its value. This framework offers additional features for simulations but also adds complexity to the estimation of the propensity parameters. This paper presents a method for estimating the propensity parameters for SDDS. The method is based on adding noise to the system using the Google PageRank approach to make the system ergodic and thus guaranteeing the existence of a stationary distribution. Then, with the use of a genetic algorithm, the propensity parameters are estimated. Approximation techniques that make the search algorithms efficient are also presented, and Matlab/Octave code to test the algorithms is available at http://www.ms.uky.edu/~dmu228/GeneticAlg/Code.html.
[ { "created": "Mon, 1 Aug 2016 19:26:13 GMT", "version": "v1" }, { "created": "Mon, 3 Oct 2016 15:36:52 GMT", "version": "v2" }, { "created": "Mon, 17 Oct 2016 17:22:35 GMT", "version": "v3" } ]
2024-07-09
[ [ "Murrugarra", "David", "" ], [ "Miller", "Jacob", "" ], [ "Mueller", "Alex", "" ] ]
Stochastic Boolean networks, or more generally, stochastic discrete networks, are an important class of computational models for molecular interaction networks. The stochasticity stems from the updating schedule. Standard updating schedules include the synchronous update, where all the nodes are updated at the same time, and the asynchronous update, where a random node is updated at each time step. The former produces a deterministic dynamics while the latter a stochastic dynamics. A more general stochastic setting considers propensity parameters for updating each node. Stochastic Discrete Dynamical Systems (SDDS) is a modeling framework that considers two propensity parameters for updating each node: one is used when the update has a positive impact on the variable, that is, when the update causes the variable to increase its value, and the other when the update has a negative impact, that is, when the update causes it to decrease its value. This framework offers additional features for simulations but also adds complexity to the estimation of the propensity parameters. This paper presents a method for estimating the propensity parameters for SDDS. The method is based on adding noise to the system using the Google PageRank approach to make the system ergodic and thus guaranteeing the existence of a stationary distribution. Then, with the use of a genetic algorithm, the propensity parameters are estimated. Approximation techniques that make the search algorithms efficient are also presented, and Matlab/Octave code to test the algorithms is available at http://www.ms.uky.edu/~dmu228/GeneticAlg/Code.html.
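The PageRank-style regularization described in the abstract above can be sketched directly: mix the (possibly reducible) transition matrix of the discrete dynamics with a uniform jump, which makes the chain ergodic and yields a unique stationary distribution via power iteration. The 3-state chain with two absorbing states below is a made-up example, not a network from the paper.

```python
def google_matrix(P, damping=0.15):
    """PageRank-style perturbation G = (1 - d) * P + d * (uniform jump).
    Rows of P are probability distributions over next states."""
    n = len(P)
    return [[(1.0 - damping) * P[i][j] + damping / n for j in range(n)]
            for i in range(n)]

def stationary_distribution(P, iters=100_000, tol=1e-13):
    """Power iteration for the left eigenvector pi with pi * P = pi."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi

# Two absorbing states (0 and 2) make this chain reducible: without the
# perturbation, the long-run behavior depends on the initial state.
P = [[1.0, 0.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.0, 1.0]]
pi = stationary_distribution(google_matrix(P, damping=0.15))
```

After the perturbation every entry of the matrix is strictly positive, so the power iteration converges to a single distribution regardless of the start; it is this well-defined stationary distribution that the paper's genetic algorithm can then fit propensities against.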
2009.01574
Attila Szolnoki
Marcell Blahota, Istvan Blahota, and Attila Szolnoki
Equal partners do better in defensive alliances
7 two-column pages, 6 figures, accepted for publication in EPL
EPL 131 (2020) 58002
10.1209/0295-5075/131/58002
null
q-bio.PE cond-mat.stat-mech cs.GT nlin.PS physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cyclic dominance offers not just a way to maintain biodiversity, but also serves as a sort of defensive alliance against an external invader. Interestingly, a new level of competition can be observed when two cyclic loops are present. Here the inner invasion speed plays a decisive role in the evolutionary outcome, because a faster invasion rate provides an evolutionary advantage to an alliance. In this Letter we demonstrate that heterogeneity of the inner invasion rates makes an alliance vulnerable against a loop whose group members are equal. Quite surprisingly, a loop where invasion rates are uniform can still dominate an alliance formed with heterogeneous rates even if the average speed of invasion is significantly higher in the latter group. In a specific range of parameter space, when the intergroup invasion or the average inner invasion is moderate, the heterogeneous alliance with the higher internal invasion speed may prevail, or the system terminates in a novel 4- or 5-species solution.
[ { "created": "Thu, 3 Sep 2020 10:41:06 GMT", "version": "v1" } ]
2020-09-21
[ [ "Blahota", "Marcell", "" ], [ "Blahota", "Istvan", "" ], [ "Szolnoki", "Attila", "" ] ]
Cyclic dominance offers not just a way to maintain biodiversity, but also serves as a sort of defensive alliance against an external invader. Interestingly, a new level of competition can be observed when two cyclic loops are present. Here the inner invasion speed plays a decisive role in the evolutionary outcome, because a faster invasion rate provides an evolutionary advantage to an alliance. In this Letter we demonstrate that heterogeneity of the inner invasion rates makes an alliance vulnerable against a loop whose group members are equal. Quite surprisingly, a loop where invasion rates are uniform can still dominate an alliance formed with heterogeneous rates even if the average speed of invasion is significantly higher in the latter group. In a specific range of parameter space, when the intergroup invasion or the average inner invasion is moderate, the heterogeneous alliance with the higher internal invasion speed may prevail, or the system terminates in a novel 4- or 5-species solution.
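The cyclic-invasion update rule behind models like the one above can be sketched as a Monte Carlo simulation. This is a toy single-loop (rock-paper-scissors) version on a ring with per-species invasion rates `alpha`; it only illustrates how rate-weighted invasions are sampled, not the two-alliance, six-species model of the paper, and all names and sizes here are invented.

```python
import random

random.seed(0)

# Cyclic invasion on a ring of N sites: species s invades species
# (s + 1) % 3 at a randomly chosen neighboring site with probability
# alpha[s]. Uniform alpha corresponds to the "equal partners" loop.
N = 300
alpha = [1.0, 1.0, 1.0]                   # uniform inner invasion rates
state = [random.randrange(3) for _ in range(N)]

def step(state):
    i = random.randrange(N)
    j = (i + random.choice((-1, 1))) % N  # random neighbor on the ring
    s, t = state[i], state[j]
    if (s + 1) % 3 == t and random.random() < alpha[s]:
        state[j] = s                      # invasion succeeds

for _ in range(20000):
    step(state)

counts = [state.count(s) for s in range(3)]
print(counts)  # species abundances after relaxation; they sum to N
```

Making `alpha` unequal (e.g. `[1.5, 1.0, 0.5]`) is the heterogeneous-rates case the abstract contrasts with the uniform loop.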
2301.04471
Umberto Michelucci
Francesca Venturini, Michela Sperti, Umberto Michelucci, Arnaud Gucciardi, Vanessa M. Martos, Marco A. Deriu
Dataset of Fluorescence Spectra and Chemical Parameters of Olive Oils
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
This dataset encompasses fluorescence spectra and chemical parameters of 24 olive oil samples from the 2019-2020 harvest provided by the producer Conde de Benalua, Granada, Spain. The oils are characterized by different qualities: 10 extra virgin olive oil (EVOO), 8 virgin olive oil (VOO), and 6 lampante olive oil (LOO) samples. For each sample, the dataset includes fluorescence spectra obtained with two excitation wavelengths, oil quality, and five chemical parameters necessary for the quality assessment of olive oil. The fluorescence spectra were obtained by exciting the samples at 365 nm and 395 nm under identical conditions. The dataset includes the values of the following chemical parameters for each olive oil sample: acidity, peroxide value, K270, K232, ethyl esters, and the quality of the samples (EVOO, VOO, or LOO). The dataset offers a unique possibility for researchers in food technology to develop machine learning models based on fluorescence data for the quality assessment of olive oil due to the availability of both spectroscopic and chemical data. The dataset can be used, for example, to predict one or multiple chemical parameters or to classify samples based on their quality from fluorescence spectra.
[ { "created": "Tue, 10 Jan 2023 10:17:03 GMT", "version": "v1" } ]
2023-01-12
[ [ "Venturini", "Francesca", "" ], [ "Sperti", "Michela", "" ], [ "Michelucci", "Umberto", "" ], [ "Gucciardi", "Arnaud", "" ], [ "Martos", "Vanessa M.", "" ], [ "Deriu", "Marco A.", "" ] ]
This dataset encompasses fluorescence spectra and chemical parameters of 24 olive oil samples from the 2019-2020 harvest provided by the producer Conde de Benalua, Granada, Spain. The oils are characterized by different qualities: 10 extra virgin olive oil (EVOO), 8 virgin olive oil (VOO), and 6 lampante olive oil (LOO) samples. For each sample, the dataset includes fluorescence spectra obtained with two excitation wavelengths, oil quality, and five chemical parameters necessary for the quality assessment of olive oil. The fluorescence spectra were obtained by exciting the samples at 365 nm and 395 nm under identical conditions. The dataset includes the values of the following chemical parameters for each olive oil sample: acidity, peroxide value, K270, K232, ethyl esters, and the quality of the samples (EVOO, VOO, or LOO). The dataset offers a unique possibility for researchers in food technology to develop machine learning models based on fluorescence data for the quality assessment of olive oil due to the availability of both spectroscopic and chemical data. The dataset can be used, for example, to predict one or multiple chemical parameters or to classify samples based on their quality from fluorescence spectra.
2208.11626
Martin Weigt
Carlos A. Gandarilla-Perez, Sergio Pinilla, Anne-Florence Bitbol, Martin Weigt
Combining phylogeny and coevolution improves the inference of interaction partners among paralogous proteins
19 pages
null
10.1371/journal.pcbi.1011010
null
q-bio.BM cond-mat.stat-mech q-bio.MN
http://creativecommons.org/licenses/by-nc-sa/4.0/
Predicting protein-protein interactions from sequences is an important goal of computational biology. Various sources of information can be used to this end. Starting from the sequences of two interacting protein families, one can use phylogeny or residue coevolution to infer which paralogs are specific interaction partners within each species. We show that these two signals can be combined to improve the performance of the inference of interaction partners among paralogs. For this, we first align the sequence-similarity graphs of the two families through simulated annealing, yielding a robust partial pairing. We next use this partial pairing to seed a coevolution-based iterative pairing algorithm. This combined method improves performance over either separate method. The improvement obtained is striking in the difficult cases where the average number of paralogs per species is large or where the total number of sequences is modest.
[ { "created": "Wed, 24 Aug 2022 15:51:28 GMT", "version": "v1" } ]
2023-04-26
[ [ "Gandarilla-Perez", "Carlos A.", "" ], [ "Pinilla", "Sergio", "" ], [ "Bitbol", "Anne-Florence", "" ], [ "Weigt", "Martin", "" ] ]
Predicting protein-protein interactions from sequences is an important goal of computational biology. Various sources of information can be used to this end. Starting from the sequences of two interacting protein families, one can use phylogeny or residue coevolution to infer which paralogs are specific interaction partners within each species. We show that these two signals can be combined to improve the performance of the inference of interaction partners among paralogs. For this, we first align the sequence-similarity graphs of the two families through simulated annealing, yielding a robust partial pairing. We next use this partial pairing to seed a coevolution-based iterative pairing algorithm. This combined method improves performance over either separate method. The improvement obtained is striking in the difficult cases where the average number of paralogs per species is large or where the total number of sequences is modest.
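The first stage described in the abstract above, aligning the sequence-similarity graphs of the two families by simulated annealing, can be sketched on toy data. Here `B` is just a relabeled copy of `A`, so a perfect alignment exists; real similarity graphs, species constraints, and the coevolution-based seeding stage are beyond this sketch, and the variable names are invented.

```python
import math
import random

import numpy as np

rng = random.Random(1)
gen = np.random.default_rng(0)

# Build a random symmetric "similarity graph" A and a shuffled copy B.
n = 12
A = (gen.random((n, n)) < 0.3).astype(int)
A = np.triu(A, 1)
A = A + A.T                                # symmetric, no self-loops
true_perm = gen.permutation(n)
B = A[np.ix_(true_perm, true_perm)]        # same graph, relabeled nodes

def cost(perm):
    """Number of edge disagreements between A and B under the pairing."""
    idx = np.array(perm)
    return int(np.abs(A - B[np.ix_(idx, idx)]).sum())

perm = list(range(n))                      # start from the identity pairing
T = 2.0
while T > 0.01:                            # geometric cooling schedule
    i, j = rng.sample(range(n), 2)
    cand = perm[:]
    cand[i], cand[j] = cand[j], cand[i]    # swap move
    d = cost(cand) - cost(perm)
    if d <= 0 or rng.random() < math.exp(-d / T):
        perm = cand                        # accept downhill, or uphill w.p. e^{-d/T}
    T *= 0.999

print(cost(perm))  # small; 0 when the annealing finds a perfect alignment
```

In the actual method the resulting robust partial pairing seeds an iterative, coevolution-based pairing algorithm rather than being used directly.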
1512.00037
Weiyu Huang
Weiyu Huang, Leah Goldsberry, Nicholas F. Wymbs, Scott T. Grafton, Danielle S. Bassett and Alejandro Ribeiro
Graph Frequency Analysis of Brain Signals
null
null
10.1109/JSTSP.2016.2600859
null
q-bio.NC cs.CE cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents methods to analyze functional brain networks and signals from a graph spectral perspective. The notion of frequency and filters, traditionally defined for signals supported on regular domains such as discrete time and image grids, has recently been generalized to irregular graph domains; this defines brain graph frequencies associated with different levels of spatial smoothness across the brain regions. Brain network frequency also enables the decomposition of brain signals into components corresponding to smooth or rapid variations. We relate graph frequency to principal component analysis when the networks of interest denote functional connectivity. The methods are utilized to analyze brain networks and signals as subjects master a simple motor skill. We observe that brain signals corresponding to different graph frequencies exhibit different levels of adaptability throughout learning. Further, we find a strong association between the graph spectral properties of brain networks and the level of exposure to the tasks performed, and identify the most important frequency signatures at different levels of task familiarity.
[ { "created": "Mon, 2 Nov 2015 19:25:39 GMT", "version": "v1" }, { "created": "Tue, 3 May 2016 16:00:32 GMT", "version": "v2" } ]
2016-11-03
[ [ "Huang", "Weiyu", "" ], [ "Goldsberry", "Leah", "" ], [ "Wymbs", "Nicholas F.", "" ], [ "Grafton", "Scott T.", "" ], [ "Bassett", "Danielle S.", "" ], [ "Ribeiro", "Alejandro", "" ] ]
This paper presents methods to analyze functional brain networks and signals from a graph spectral perspective. The notion of frequency and filters, traditionally defined for signals supported on regular domains such as discrete time and image grids, has recently been generalized to irregular graph domains; this defines brain graph frequencies associated with different levels of spatial smoothness across the brain regions. Brain network frequency also enables the decomposition of brain signals into components corresponding to smooth or rapid variations. We relate graph frequency to principal component analysis when the networks of interest denote functional connectivity. The methods are utilized to analyze brain networks and signals as subjects master a simple motor skill. We observe that brain signals corresponding to different graph frequencies exhibit different levels of adaptability throughout learning. Further, we find a strong association between the graph spectral properties of brain networks and the level of exposure to the tasks performed, and identify the most important frequency signatures at different levels of task familiarity.
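The graph-frequency decomposition described in the abstract above can be sketched with the combinatorial Laplacian of a toy graph: Laplacian eigenvalues act as graph frequencies, and eigenvectors give the graph Fourier basis. A 5-node path and a made-up signal stand in for a brain functional network here; this is standard graph signal processing, not the paper's pipeline.

```python
import numpy as np

# Adjacency matrix of a 5-node path graph (toy stand-in for a brain network).
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0

L = np.diag(A.sum(axis=1)) - A            # combinatorial graph Laplacian
freqs, U = np.linalg.eigh(L)              # eigenvalues = graph frequencies,
                                          # sorted from smooth to rapid modes

x = np.array([1.0, 2.0, 3.0, 2.5, 0.5])   # signal supported on the nodes
x_hat = U.T @ x                           # graph Fourier transform of x

k = 2                                     # keep only the k smoothest modes
x_smooth = U[:, :k] @ x_hat[:k]           # low-pass graph filter output
x_detail = x - x_smooth                   # rapidly varying component

print(np.round(freqs, 3))                 # graph frequencies, ascending
print(np.allclose(U @ x_hat, x))          # transform is invertible: True
```

Because `U` is orthogonal, the smooth and detail components reconstruct the signal exactly; varying `k` trades off how much spatial variation across regions is attributed to each piece.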